Millimeters to Meters

What is a millimeter?
A millimeter is a thousandth of a meter (1/1000 m); the meter is the SI (International System of Units) unit of length. It is normally used to measure small lengths such as the thickness of a sheet of paper or the dimensions of a small object. One millimeter is approximately equal to 0.03937 inches (about 1/25 of an inch); precisely, there are 25.4 millimeters in an inch. The millimeter is often used in science and engineering, and in countries that have adopted the metric system. You may come across millimeters when measuring the size of electronic components, jewelry, or even the thickness of a fingernail.

What is a meter?
A meter is a unit of length in the metric system, equivalent to 100 centimeters or 1,000 millimeters. It is the base unit of length in the International System of Units (SI) and is widely used around the world for measuring distances. The meter was originally defined as one ten-millionth of the distance from the North Pole to the equator along a meridian passing through Paris, France. In 1983, however, the meter was redefined as the distance traveled by light in a vacuum during a specific time interval.

How do you convert millimeters to meters?
To convert millimeters to meters, divide the number of millimeters by 1000, because there are 1000 millimeters in one meter. The millimeter is a unit of length in the metric system, commonly used for measuring small distances or dimensions. The meter, on the other hand, is the fundamental unit of length in the metric system, used for larger measurements. To perform the conversion, simply take the number of millimeters and divide it by 1000. For example, if you have 5000 millimeters, you would divide 5000 by 1000 to get 5 meters. Similarly, dividing 250 millimeters by 1000 gives you 0.25 meters. This conversion is straightforward and can be done with a calculator, or mentally by moving the decimal point three places to the left.
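The conversion described above can be sketched in a few lines of Python (the function name is just illustrative):

```python
def mm_to_m(millimeters):
    """Convert millimeters to meters: there are 1000 mm in one meter."""
    return millimeters / 1000

# The examples from the text:
print(mm_to_m(5000))  # 5.0
print(mm_to_m(250))   # 0.25
```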
{"url":"https://live.metric-conversions.org/length/millimeters-to-meters.htm","timestamp":"2024-11-06T13:30:45Z","content_type":"text/html","content_length":"69665","record_id":"<urn:uuid:4d597fa3-2644-4140-a75f-42ab5228b737>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00039.warc.gz"}
Write to serial port with Ruby [SOLVED!]

Hi all, I've made some projects to control 3D objects with an Arduino and connected hardware, like potentiometers and gyroscopes (making mini games). Therefore I am using:

□ SketchUp 8
□ SketchUp 8 plugin - SketchyPhysics v3.5.6
□ SketchUp 8 plugin - SuPy v1.6 for Python v2.5
□ Python 2.5
□ Python 2.5 library – PySerial v2.5

But... does SuPy v1.6 work in SU2014 / SU2015? I know there are Ruby changes with the 2.x version that break scripts, but this was from SU2014 to SU2015, right? In SketchUp 8 you could just do:

Py.exec("import serial")
Py.exec("ser = serial.Serial('COM8', 9600)")
Py.exec("serialdata = ser.readline()")
$serialdata = Py.eval("serialdata")

And in the controller box you could put:

So... I was wondering whether there is any Ruby gem needed to make serial things happen, and what would a sample code be? Please let me know!

I found source code for SuPy and I will see if it can be compiled under 2.0.0. If compiling succeeds, I'll post a SuPy compatible with SU2014 and SU2015.

Have either of you had a look at RubySerial? It's on my 'long' list, but I haven't chased it up yet...

Nice find, John! I find it very convincing, as it has been kept up to date with the latest Ruby version and doesn't require an installation of Python. However, it does require the Ruby FFI gem, which needs some tweaking before it can be used with SU.

Hi Anton_S & driven, Wow, I really hope you can make this work, so that we can use the new SU version to communicate with a serial port; we can make lots of cool things with Arduino & Raspberry. If I need to test something, let me know! Thanks in advance!

Hello Anton_S, I've tested it with SketchUp 2015... This is the code to read data from the serial port so far...
@serial = Serial.new('COM8', 9600)
MSketchyPhysics3.closeControlPanel if frame == 1
$serialdata = eval(@serial.read(1024))
logLine("Serial data; " + $serialdata.to_s)

Thank you! I'll see if I can make a script that SENDS data to the Arduino / Raspberry...

I see you tested it with SU2015... this confuses me. Were you using the 64-bit version of SketchUp? Which SketchyPhysics download did you use? Thanks. I'm glad it works with SU2014

@mptak said:
I see you tested it with SU2015... this confuses me. Were you using the 64-bit version of SketchUp? Which SketchyPhysics download did you use? Thanks.

No, I used the 32-bit version of SU2015; SketchyPhysics does not work on 64-bit (yet).

Hi All, This is an example of how to SEND data to the Arduino with SU 14/15, SketchyPhysics and RubySerial. First the Arduino programming; here is the code to light the integrated LED on digital 13: when a serial "1" is received the LED goes on, and when a "0" is received the LED goes out.

Arduino code:

String readString;

void setup() {
  Serial.begin(9600);
  pinMode(13, OUTPUT);
}

void loop() {
  while (Serial.available() > 0) {
    char c = Serial.read();
    readString += c;
  }
  if (readString.length() > 0) {
    if (readString == "1") {
      digitalWrite(13, HIGH);
    }
    if (readString == "0") {
      digitalWrite(13, LOW);
    }
    readString = "";
  }
}

Now... once programmed, create a SketchyPhysics solid and paste this code in it:

@serial = Serial.new('COM8', 9600)
@m = Sketchup.active_model.materials.add "My material"
group.material = @m
logLine("LED ON!")
@m = Sketchup.active_model.materials.add "My material"
group.material = @m
logLine("LED OFF!")
MSketchyPhysics3.closeControlPanel if frame == 1

How does this work?
When you run the script and click on the solid, the color will change to yellow and a "1" will be sent over the serial port; the Arduino will receive this "1" and turn on the LED. When you release the mouse button, the model changes color back to white and a "0" will be sent; the Arduino receives this and turns the LED off! Have fun!

Ps. Is there any code to read out a textbox or something (user input possible?), where the user can specify the COM PORT and BAUD RATE?

I think I will start to create the 2014/2015 walkthrough soon! This will be the banner for the SU_14_15 - Ruby Serial SEND Example with SketchUp SketchyPhysics.skp

Hi Anton_S *EDIT SOLVED!!! SEE BELOW Could you assist on this one? When, for example, I use a different baud rate (on purpose) I still get this error:

The original code was:

MSketchyPhysics3.closeControlPanel if frame == 1
string_size = 1024
$serialdata = eval(@serial.read(string_size))

I changed it to this, but that doesn't work:

MSketchyPhysics3.closeControlPanel if frame == 1
string_size = 1024
$serialdata = eval(@serial.read(string_size))
rescue RubySerial::Exception => e
@serial.close if @serial

How to catch this error?

*EDIT SOLVED!!! I just changed the line to:

$serialdata = @serial.read(string_size)

so it's only:

MSketchyPhysics3.closeControlPanel if frame == 1
$serialdata = eval(@serial.read(1024))

Hello Bas9999, I managed to tweak ffi and rubyserial (suggested by driven) into the plugins for SU. Download https://googledrive.com/host/0B3qg8f4WrNdHdHVyLVFuaHRjOTA/ffi + rubyserial.zip (ffi + rubyserial.zip) and extract it into the plugins folder. It should work on Windows with SU2014 and SU2015. The advantage is that it doesn't require Python to run.
Here is the RubySerial usage wiki: https://github.com/hybridgroup/rubyserial

This is how I think your code should look:

onstart {
  @serial = Serial.new('COM8', 9600)
}

ontick {
  string_size = 1024
  $serialdata = eval(@serial.read(string_size))
}

onend {
}

Note: If a particular serial port does not exist, an error will be raised, causing the simulation to reset. If you want ffi and rubyserial to work with SU2013 and prior, you should upgrade the SU Ruby msvcrt to 1.8.6 or later. Do this by going to the SketchUp Program Files, to the path of SketchUp.exe, and replacing msvcrt-ruby18.dll with the new one, which can be downloaded here: https://googledrive.com/host/0B3qg8f4WrNdHdHVyLVFuaHRjOTA/msvcrt-ruby18.zip. Tell me how it goes, as I don't think I have any serial ports to test myself.

That's strange. For some reason SketchUcation can't parse the URL properly. Try using this unparsed link instead: https://googledrive.com/host/0B3qg8f4WrNdHdHVyLVFuaHRjOTA/ffi + rubyserial.zip

The download is OK, but another problem: I want to test the bas9999 example, but it's the SU2015 version. My Vista desktop only supports SU2014. Can anybody downgrade the bas9999 SketchUp model file from SU2015 to SU2014? Many thanks. NB: maybe the serial port works with Ruby without SketchyPhysics? If yes, how?

@picpic020960 said:
NB: maybe the serial port works with Ruby without SketchyPhysics? If yes, how?

Yes, RubySerial works without SketchyPhysics. It's easy to use:

# Require the library
require 'rubyserial'

# Instantiate serial
my_serial = Serial.new(address, baud_rate = 9600, data_bits = 8)

# Get data and do whatever you want with it
size = 1024
data = my_serial.read(size)

# Set data
data = ""

# Close serial once you're done using it
my_serial = nil

See this link for more info: https://github.com/hybridgroup/rubyserial

@anton_s said:
Yes, RubySerial works without SketchyPhysics.

I installed the 'rubyserial' gem, but the initialization of Serial.new("COM3") crashes for me in SU2019 on Win 10.
I will try to figure out why it does that. Has anyone had success with 'rubyserial' in SU2019?
{"url":"https://community.sketchucation.com/topic/149162/write-to-serial-port-with-ruby-solved/7","timestamp":"2024-11-08T04:14:31Z","content_type":"text/html","content_length":"166329","record_id":"<urn:uuid:23108e99-3933-46f0-b284-6e4a536de5fb>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00534.warc.gz"}
Samacheer Kalvi 6th Maths Solutions Term 3 Chapter 4 Geometry Intext Questions

You can download the Samacheer Kalvi 6th Maths Book Solutions Guide Pdf, Tamilnadu State Board, to help you revise the complete syllabus and score more marks in your examinations.

Tamilnadu Samacheer Kalvi 6th Maths Solutions Term 3 Chapter 4 Geometry Intext Questions

Try this (Text book Page No. 56)

Question 1. The diagonal of a rectangle divides it into two equal halves, but it is not a line of symmetry. Why?
We get two equal halves, but one half of the figure does not coincide exactly with the other half along that line. Therefore it is not a line of symmetry.

Try These (Text book Page No. 57)

Question 1. Is the dotted line shown in each figure a line of symmetry? If yes put ✓, otherwise put ✗. Justify your answer.

Question 2. Check the following figures for symmetry. Write YES or NO.

Try these (Text Book Page No. 59)

Question 1. Draw the following figures on paper. Cut out each of them and fold so that the two parts of each figure exactly coincide.
i) Which of the above figures have one, two or more lines of symmetry?
ii) Which of the above figures do not have any line of symmetry?
ii) None; all of the figures have lines of symmetry.

Question 2. Write the numbers from 0 to 9.
(i) Which numbers have a line of symmetry?
(ii) List out the numbers which do not have a line of symmetry.
i) Numbers 0, 1, 3 and 8 have a line of symmetry.
ii) Numbers 2, 4, 5, 6, 7 and 9 do not have a line of symmetry.

Activity (Text Book Page No. 59)

Question 1. Complete the other half of the following figures such that the dotted line is the line of symmetry.
Activity to be done by the students themselves.

Try this (Text Book Page No. 60)

Question 1. Which other capital letters of the English alphabet look the same in the mirror?
A, H, I, M, O, T, U, V, W, X and Y.

Activity (Text book Page No. 62)

Symmetrical figures by ink blots
Step 1: Take a sheet of paper and fold it into half to make a crease.
Step 2: Put some ink blots on one side of the crease of the paper.
Step 3: Fold the paper along the crease and press it.
Step 4: Open the paper; you will find an imprint of the ink blots on the other part also, which is symmetrical about the crease.
Activity to be done by the students themselves.

Try These (Text book Page No. 62)

Question 1. Find the password: "Kannukkiniyal has a new game app in her laptop protected with a password. She has decided to challenge her friends with this paragraph which contains that password." If you follow the steps given below, you will find it.
Steps:
i) Write the above paragraph in capital letters.
ii) Turn the paper upside down and look at it in the mirror.
iii) The word which remains unchanged in the mirror is the password.
iii) "DECIDED" is the password.

Question 2. Form words using the letters B, C, D, E, H, I, K, O and X. Write those words on paper in capital letters. Turn it upside down and look at them in the mirror.
i) List the letters which have horizontal and vertical lines of symmetry.
ii) Do the words HIKE, DICE, COOK remain unchanged in the mirror?
iii) The words which you have found that remain unchanged in the mirror are ______, ______, ______
B, C, D, E, H, I, K, O, X
i) Vertical lines of symmetry: A, H, I, M, O, T, U, V, W, X, Y
Horizontal lines of symmetry: B, C, D, E, H, I, K, O, X
ii) Yes
iii) HOOK, KICK, ICE

Try this (Text book Page No. 64)

Question 1. Can you identify the objects which do not have rotational symmetry in the above situations? Why?
1) Computer mouse 2) Trapezium 3) Chair 4) School bags etc.

Question 2. Suppose the manhole cover of the water sump is in a circular shape.
i) The number of ways to close that circular lid is ___
ii) What is its order of rotational symmetry?
i) Many
ii) Infinite

Activity (Text book Page No. 64)

Question 1. Find the order of rotational symmetry by fixing the relevant shape in different ways.
Activity to be done by the students themselves.
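The mirror-word puzzle above can also be checked programmatically. A word survives being turned upside down and viewed in a mirror exactly when every letter has a horizontal line of symmetry; here is a sketch in Python (the letter set assumes a simple sans-serif font):

```python
# Letters that are symmetric about a horizontal axis (simple sans-serif font)
H_SYMMETRIC = set("BCDEHIKOX")

def unchanged_in_mirror(word):
    """True if the word looks the same when turned upside down in a mirror."""
    return all(c in H_SYMMETRIC for c in word.upper())

print(unchanged_in_mirror("DECIDED"))  # True (the password)
print(unchanged_in_mirror("HIKE"))     # True
print(unchanged_in_mirror("MATH"))     # False
```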
{"url":"https://samacheerkalvi.guru/samacheer-kalvi-6th-maths-term-3-chapter-4-intext-questions/","timestamp":"2024-11-11T16:47:59Z","content_type":"text/html","content_length":"157683","record_id":"<urn:uuid:317b7f94-3dcf-4e3b-8436-79e1ad65b825>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00630.warc.gz"}
Hilbert Curve Generation With Lookup Tables

I've been researching something that requires a Hilbert curve, and I thought I'd share how to generate the path and also how to move between the index and coordinates of points. For those of you unfamiliar with Hilbert curves, let me quickly explain what one is and why you would want to use one. The curve covers every point in a square with side length 2^n by moving up, down, left and right, starting at one corner and ending at an adjacent corner. You could accomplish the same coverage by just scanning back and forth across the rows, but the advantage the Hilbert curve has is that nearby points on the 2D grid are generally also near each other on the curve. If you were to scan back and forth, you would get a lot of points that are directly beside each other in 2D space but very far apart on the curve. So why would you want to use one? They're great for turning 2D areas into 1D streams of data while maintaining locality. This comes in handy in image processing. Depending on what you want to do, this may make the processing a lot easier.

Generation of the curve can be done recursively by first selecting an initial shape type and then using tables of sub-types and locations to generate the next stage. In the animation below, the initial shape is type 1. When you go to the sub-type table, you see that type 1 is replaced with types 2, 1, 1, and 4, in that order. The position of each sub-type is defined by the parent type. As the parent is type 1, the locations of the sub-types from the location table are:

2 at (0, 0)
1 at (0, 1)
1 at (1, 1)
4 at (1, 0)

The location table can then be used to find the coordinates of the sub-types. As binary coordinates are used, the x and y coordinates of the sub-type can be appended to its coordinates to find the new sub-coordinates. For example, the coordinates of the type 4 sub-type at position (1, 0) above are (1, 1) (0, 1) (0, 0) (1, 0).
Appending these to the coordinates of (1, 0), you find the new sub-coordinates (11, 01) (10, 01) (10, 00) (11, 00). Trust me, it's confusing at first, but after a while it all makes complete sense. I'm still trying to come up with appropriate terminology for the process. These tables are from a paper; it's a little confusing at first, but I recommend reading it to help understand the process.

Recursive Hilbert Curve Generation

What if you don't want to generate the full curve and just want to know where along the curve a point lies, or where a point on the curve is in 2D space? Let's use the example below to demonstrate.

44th point at location (111, 101)

The above image shows the 44th point of the third stage at (111, 101). (It's actually the 45th point, but we start counting at 0.) We add an index row to the tables provided in the above paper to make the process easier. We start by converting the index to a binary number with 2 bits for every stage. In this case it's the third stage, so we get 6 bits. These bits are placed two at a time in rows under the index column. We also define the initial shape as type 1. From this, the sub-type and location for the first row can be found. The sub-type becomes the type for the next row. This process is repeated two more times until we have three locations. To get the final location coordinates, just append the x and y coordinates downwards to get (111, 101).

Converting an index to a location

To go from coordinate to index, split the location up by removing the highest bit from the x and y coordinate for each line. As the start type is known, the index can be found. This index is then used to find the sub-type for the next row. This is repeated two more times. The indices are then concatenated to get 101100 in binary, which is 44.

Converting a location to an index

I hope this helped to explain the process. Trust me, even if it's still confusing, having diagrams and a couple of hopefully easy-to-follow examples will help.
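For comparison, here is the well-known iterative bit-twiddling version of the same index-to-coordinate and coordinate-to-index conversions in Python. Note this follows the standard published algorithm rather than the paper's type tables, so its curve orientation may differ from the type-1 convention used above:

```python
def _rot(n, x, y, rx, ry):
    """Rotate/flip a quadrant so the sub-curve is oriented correctly."""
    if ry == 0:
        if rx == 1:
            x = n - 1 - x
            y = n - 1 - y
        x, y = y, x
    return x, y

def d2xy(n, d):
    """Index d along the curve -> (x, y) on an n-by-n grid (n a power of two)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        x, y = _rot(s, x, y, rx, ry)
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def xy2d(n, x, y):
    """(x, y) on an n-by-n grid -> index d along the curve."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        x, y = _rot(n, x, y, rx, ry)
        s //= 2
    return d
```

The round trip d -> (x, y) -> d is the identity, and consecutive indices always map to adjacent grid cells, which is exactly the locality property discussed above.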
{"url":"https://www.grant-trebbin.com/2017/02/hilbert-curve-generation-with-lookup.html","timestamp":"2024-11-03T21:51:12Z","content_type":"application/xhtml+xml","content_length":"63239","record_id":"<urn:uuid:842a0417-6a98-429f-bd88-7c187e1dd00b>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00860.warc.gz"}
e Cane

Quick and easy calculations for your cane-making. Use these figures to find out how many 2 oz. packs your cane will take. The measurements are currently in imperial (inches).

Premo & Fimo Volumes
A 2 oz. pack of Premo has a volume of 2.789 cubic inches, and Fimo (both Classic and Soft) has one of 2.258 cubic inches. This number gets used in finding how much we need for our canes. If you are mixing and matching, use the smaller value (the Fimo) to be safe. That way you won't be short.

Square Cane Volume
Square (or rectangular) canes are simple: length times width times height. So a cane that has a 2″ by 2″ face and is 6″ long goes like this:
2 x 2 x 6 = 24
24 / 2.789 = 8.605
To make that cane you would need just about 9 packs of Premo!

Round Cane Volume
Round or cylinder canes are a little more bother: pi times radius squared times the length. So a cane like the previous one with a diameter (the whole width of the circle; R = 1/2 of the diameter) of 2″ and a length of 6″ goes like:
1 x 1 x 3.141592 x 6 = 18.85
18.85 / 2.789 = 6.76
To make the cane, you would need just about 7 packs of Premo!

Figuring It Out
Once you have the volume of your cane (which you can estimate by reading a tutorial, or by deciding ahead of time how much you'd like to have to use), take the volume and divide it by the volume of a package of clay. In our examples we used the packages of Premo to figure out how many packages of Premo our canes needed.
V / 2.789 = how many packages of Premo clay you'll need
V / 2.258 = how many packages of Fimo clay you'll need

All these numbers are based on 2 oz. packages of Premo, Fimo Soft and Fimo Classic. Premo weighs 57 g and measures 2.789 cubic inches in volume. Fimo Soft weighs 56 g and measures 2.258 cubic inches in volume. Fimo Classic weighs 63 g but measures the same 2.258 cubic inches in volume that Fimo Soft does.
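The pack arithmetic above is easy to wrap in a small script. A sketch in Python (function names are illustrative; the volumes are the cubic-inch figures from the text):

```python
import math

# Volume of a 2 oz. pack in cubic inches (figures from the text)
PACK_VOLUME = {"premo": 2.789, "fimo": 2.258}

def packs_for_square_cane(width, height, length, clay="premo"):
    """Packs needed for a rectangular cane: width x height x length."""
    return width * height * length / PACK_VOLUME[clay]

def packs_for_round_cane(diameter, length, clay="premo"):
    """Packs needed for a cylindrical cane: pi * r^2 * length."""
    radius = diameter / 2
    return math.pi * radius ** 2 * length / PACK_VOLUME[clay]

print(round(packs_for_square_cane(2, 2, 6), 3))  # about 8.605, so 9 packs
print(round(packs_for_round_cane(2, 6), 3))      # about 6.759, so 7 packs
```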
{"url":"https://www.tooaquarius.com/tutorials/how-much-clay-for-the-cane/","timestamp":"2024-11-06T02:35:44Z","content_type":"text/html","content_length":"47734","record_id":"<urn:uuid:9949ed21-6224-4a6a-9755-0f2f6662f6ba>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00489.warc.gz"}
Circumferences, Diameters, and Radii

Dwayne Belle, Fuller School, 4212 S. St. Lawrence, Chicago IL 60628, (312) 535-1688

Students will be able to measure circumferences in inches.
Students will be able to compute radii and diameters, given different circumferences.

Materials Needed:
-notebook paper
-tape measures

The strategy will incorporate cooperative learning and direct teaching techniques. The direct teaching method explains the objectives and how to measure the circumference in inches and compute the radii and diameters. Cooperative learning encourages the students to interact to formulate the correct answers.

Direct teaching techniques: A short lecture is given on how to measure the circumference in inches and compute the radii and diameters.

Cooperative Learning: Students will form three groups. Each group will be given a tape measure and cans. Each group will try to determine which can has the largest circumference, radius and diameter. Students will compute and compare their data. Next, students will go to five workstations to measure the circumference and compute the radius and diameter of a light pole and four different-size trees.

Performance Assessment: Students will be given twenty objects and will compute their circumferences after measuring the diameters and radii.
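The computation the students practice, recovering diameter and radius from a measured circumference, can be sketched in Python (the example values are illustrative):

```python
import math

def diameter_from_circumference(c):
    """d = C / pi"""
    return c / math.pi

def radius_from_circumference(c):
    """r = C / (2 * pi)"""
    return c / (2 * math.pi)

# A can with a measured circumference of 9.42 inches:
print(round(diameter_from_circumference(9.42), 2))  # about 3.0 inches
print(round(radius_from_circumference(9.42), 2))    # about 1.5 inches
```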
{"url":"https://smileprogram.info/ma9501.html","timestamp":"2024-11-09T03:47:34Z","content_type":"text/html","content_length":"2118","record_id":"<urn:uuid:257be44f-6528-4e72-bc5a-f78712ab2e65>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00785.warc.gz"}
Optimal investment in a large population of competitive and heterogeneous agents

This paper studies a stochastic utility maximisation game under relative performance concerns in finite- and infinite-agent settings, where a continuum of agents interact through a graphon (see definition below). We consider an incomplete market model in which agents have CARA utilities, and we obtain characterisations of Nash equilibria in both the finite-agent and graphon paradigms. Under modest assumptions on the denseness of the interaction graph among the agents, we establish convergence results for the Nash equilibria and optimal utilities of the finite-player problem to the infinite-player problem. This result is achieved as an application of a general backward propagation of chaos type result for systems of interacting forward–backward stochastic differential equations, where the interaction is heterogeneous and through the control processes, and the generator is of quadratic growth. In addition, characterising the solution of the graphon game gives rise to a novel form of infinite-dimensional forward–backward stochastic differential equation of McKean–Vlasov type, for which we provide well-posedness results. An interesting consequence of our result is the computation of the competition indifference capital, i.e., the capital making an investor indifferent between whether or not to compete. All Science Journal Classification (ASJC) codes • Statistics and Probability • Finance • Statistics, Probability and Uncertainty • FBSDE • McKean–Vlasov equations • Propagation of chaos • Stochastic graphon games
{"url":"https://collaborate.princeton.edu/en/publications/optimal-investment-in-a-large-population-of-competitive-and-heter","timestamp":"2024-11-06T21:35:34Z","content_type":"text/html","content_length":"52042","record_id":"<urn:uuid:906092f5-114a-424e-ab61-f0e0cff917e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00476.warc.gz"}
Convert Mole per cubic decimeter (mol/dm³) (Molar concentration) Convert Mole per cubic decimeter (mol/dm³) Direct link to this calculator: Convert Mole per cubic decimeter (mol/dm³) (Molar concentration) 1. Choose the right category from the selection list, in this case 'Molar concentration'. 2. Next enter the value you want to convert. The basic operations of arithmetic: addition (+), subtraction (-), multiplication (*, x), division (/, :, ÷), exponent (^), square root (√), brackets and π (pi) are all permitted at this point. 3. From the selection list, choose the unit that corresponds to the value you want to convert, in this case 'Mole per cubic decimeter [mol/dm³]'. 4. The value will then be converted into all units of measurement the calculator is familiar with. 5. Then, when the result appears, there is still the possibility of rounding it to a specific number of decimal places, whenever it makes sense to do so. Utilize the full range of performance for this units calculator With this calculator, it is possible to enter the value to be converted together with the original measurement unit; for example, '564 Mole per cubic decimeter'. In so doing, either the full name of the unit or its abbreviation can be used; as an example, either 'Mole per cubic decimeter' or 'mol/dm3'. Then, the calculator determines the category of the unit of measure that is to be converted, in this case 'Molar concentration'. After that, it converts the entered value into all of the appropriate units known to it. In the resulting list, you will be sure also to find the conversion you originally sought. Regardless of which of these possibilities one uses, it saves one the cumbersome search for the appropriate listing in long selection lists with myriad categories and countless supported units. All of that is taken over for us by the calculator, and it gets the job done in a fraction of a second. Furthermore, the calculator makes it possible to use mathematical expressions.
As a result, not only can numbers be reckoned with one another, such as, for example, '(20 * 90) mol/dm3'. But different units of measurement can also be coupled with one another directly in the conversion. That could, for example, look like this: '78 Mole per cubic decimeter + 49 Mole per cubic decimeter' or '61mm x 32cm x 3dm = ? cm^3'. The units of measure combined in this way naturally have to fit together and make sense in the combination in question. The mathematical functions sin, cos, tan and sqrt can also be used. Example: sin(π/2), cos(pi/2), tan(90°), sin(90) or sqrt(4). If a check mark has been placed next to 'Numbers in scientific notation', the answer will appear as an exponential. For example, 1.613 540 726 057 5×10²². For this form of presentation, the number will be segmented into an exponent, here 22, and the actual number, here 1.613 540 726 057 5. For devices on which the possibilities for displaying numbers are limited, such as, for example, pocket calculators, one also finds the way of writing numbers as 1.613 540 726 057 5E+22. In particular, this makes very large and very small numbers easier to read. If a check mark has not been placed at this spot, then the result is given in the customary way of writing numbers. For the above example, it would then look like this: 16 135 407 260 575 000 000 000. Independent of the presentation of the results, the maximum precision of this calculator is 14 places. That should be precise enough for most applications.
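Behind the interface, a conversion of this kind is just a multiplication by a fixed factor. A sketch in Python (the unit labels and function name are illustrative):

```python
# Conversion factors from mol/dm³ to a few related units.
# 1 dm³ = 1 L, and 1 m³ = 1000 dm³, so mol/dm³ -> mol/m³ multiplies by 1000.
FACTORS = {
    "mol/L": 1.0,
    "mol/m3": 1000.0,
    "mmol/L": 1000.0,
}

def convert_mol_per_dm3(value, target_unit):
    """Convert a molar concentration given in mol/dm³ to target_unit."""
    return value * FACTORS[target_unit]

print(convert_mol_per_dm3(564, "mol/m3"))  # 564000.0
print(convert_mol_per_dm3(564, "mol/L"))   # 564.0
```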
{"url":"https://www.convert-measurement-units.com/convert+Mole+per+cubic+decimeter.php","timestamp":"2024-11-04T14:38:34Z","content_type":"text/html","content_length":"56032","record_id":"<urn:uuid:276a9d8f-4d22-40a6-befc-28d58ca7b736>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00252.warc.gz"}
Research On Parallel Interaction Learning Of Artificial Bee Colony Algorithm And BP Neural Network
Posted on: 2020-07-19 Degree: Master Type: Thesis Country: China Candidate: K Jia Full Text: PDF GTID: 2428330578466000 Subject: Management Science and Engineering

In human life and production, optimization problems are frequently encountered, and scholars have studied many methods for solving them. As an important branch, swarm intelligence optimization algorithms provide a new approach to optimization problems. The artificial bee colony (ABC) algorithm is a novel swarm intelligence algorithm: a natural computation method derived from studying the information exchange and foraging behavior of bees in nature. Its design is simple, it has few parameters, it can easily escape local optima, and it does not require gradient computation. However, many real-world optimization problems are discrete, so studying the discretization of the ABC algorithm is an urgent need. The BP neural network algorithm (BPNN) is also a method for solving optimization problems. Since it was proposed, many experts and scholars at home and abroad have improved and studied it extensively and achieved many results. However, the initial weights and thresholds of the traditional BP neural network are random, which leads to inefficient search, premature convergence, and inadequate convergence accuracy. How to remedy these defects of the BP neural network has therefore become an important research direction. Based on the above problems, this thesis mainly studies the improvement of a discrete artificial bee colony algorithm and its parallel integrated learning with the BP neural network algorithm. The research results are as follows:

Firstly, a discrete binary artificial bee colony algorithm (IBABC) using a Gaussian variation function as the probability mapping function is constructed, and its effectiveness is proved theoretically; experiments on benchmark test functions also show that the IBABC algorithm performs better than other improved schemes.

Secondly, the improved binary artificial bee colony algorithm (IBABC) is used to train a BP neural network, and a parallel integrated learning algorithm based on the improved binary artificial bee colony and the BP neural network is proposed, namely the IBABC-BP algorithm. Theoretical analysis and experimental results prove the effectiveness of this parallel integrated learning algorithm.

Finally, a haze prediction model based on the IBABC-BP parallel integrated learning algorithm is proposed, and comparative experiments on haze data prove the effectiveness of the proposed haze prediction model.

Keywords/Search Tags: Improved Binary Artificial Bee Colony Algorithm, Back Propagation Neural Network, Gauss Variation Function, Haze Prediction
{"url":"https://globethesis.com/?t=2428330578466000","timestamp":"2024-11-02T11:03:49Z","content_type":"application/xhtml+xml","content_length":"8521","record_id":"<urn:uuid:860a97c6-b17f-45ac-85a8-cc3430c5d358>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00683.warc.gz"}
Basic Guide to Scikit-learn

Scikit-learn is a library used to perform machine learning in Python. It is open source and usable in a wide range of contexts, both academic and commercial. It builds on popular libraries such as NumPy, SciPy, and Matplotlib. The best part about Scikit-learn is that it offers many tuning parameters, outstanding documentation, and a supportive community.

How to Install Scikit-Learn?

Installation Methods

1. Using pip: Pip is the standard package manager for Python. It's the easiest and most common way to install Scikit-Learn.

pip install scikit-learn

2. Using conda: Conda is a package manager that's particularly useful for data science projects. It can manage both Python and non-Python dependencies.

conda install scikit-learn

3. From source: Installing from source allows you to get the latest development version and customize the installation. It's more complex but offers greater control.

git clone https://github.com/scikit-learn/scikit-learn.git
cd scikit-learn
pip install .

4. Using an operating system package manager: Some operating systems offer Scikit-Learn through their package managers. This method ensures system-wide consistency but may not provide the latest version.

sudo apt-get install python3-sklearn

5. Using a Python distribution: Distributions like Anaconda include Scikit-Learn pre-installed. This is convenient for beginners and ensures compatibility with other data science tools.

Verifying Installation

6. Check installation: After installation, it's important to verify that Scikit-Learn is correctly installed and accessible in your Python environment.

import sklearn

Additional Considerations

7. Dependencies: Scikit-Learn requires NumPy and SciPy. Most installation methods will handle these dependencies automatically, but it's good to be aware of them.

8. Version compatibility: Ensure that the Scikit-Learn version is compatible with your Python version and other libraries in your project to avoid conflicts.

9. Virtual environments: It's recommended to install Scikit-Learn in a virtual environment to isolate it from other Python projects and system-wide packages.

10. GPU support: Scikit-Learn doesn't directly support GPU acceleration, but some operations can benefit from libraries like Intel DAAL, which can be installed separately.
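Putting steps 6 and 7 together, a quick smoke test (a minimal sketch; the toy data below is arbitrary) both verifies the installed version and confirms that a model can be fit:

```python
# Verify that Scikit-Learn imports cleanly, report its version,
# then fit a tiny linear model as a smoke test.
import numpy as np
import sklearn
from sklearn.linear_model import LinearRegression

print(sklearn.__version__)

X = np.array([[1.0], [2.0], [3.0]])   # toy feature matrix
y = np.array([2.0, 4.0, 6.0])         # y = 2x exactly
model = LinearRegression().fit(X, y)
print(model.coef_[0])                 # recovers the slope, ~2.0
```

If the import or the fit fails, the installation (or one of the NumPy/SciPy dependencies) is broken.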
What is Mining and Mining Luck? – Crypto Mining Blog

What is Mining?

Mining pools compete among themselves to find new cryptocurrency blocks. For each new block, they are rewarded. For example, each Bitcoin block brings 6.25 BTC, while each Ethereum block brings 2 ETH. Read more in our post What Is Bitcoin in Simple Terms: From Theory To Practice.

If you think that mining is only about solving some complicated math problems, then you are wrong. Miners play a guessing game. They are guessing all the time. How so? Is it possible that all this energy around the world is used for such nonsense? Yes, it is!

One of the most critical parameters of any cryptocurrency network is block time; that is, the time needed to find a new block. For Bitcoin, it is 10 minutes, while for Ethereum it's 14 seconds. However, a Bitcoin block may as well be found in 1 minute, or in 20 minutes. Ten minutes is the average block time (if, let's say, we take 1000 blocks in a row). It means that every ±14 seconds all Ethereum mining rigs get a new task (job) and try to find the solution for the new block by merely guessing. It's wrong to think that a rig or a big mining pool spends a lot of time (hours or even days) trying to find a solution to one specific block – the problem to solve is one and the same for everyone, and it changes every time a new block in the cryptocurrency network is found!

Different mining algorithms are just different equations. Let's say one algorithm is A + B = C, while another one is A x B = C. For each block, you get a constant 'A' value, and you are also told that solution 'C' is a number with two zeros at the end. So you randomly go through 'B' numbers until you find the right solution. Each cryptocurrency network decides for itself how many zeros number 'C' must have at the end, so that the average time to find a block – the "block time" – stays within the allotted range.

Example: 'N' is a cryptocurrency network. You mine on your own.
The block time is 1 minute. The network gives you a particular problem, the solution to which (number 'C') must have two zeros at the end.

Your friend starts to mine the same cryptocurrency. As a result, network processing power doubles, which means that you will find blocks twice as fast – it will take you just 30 seconds. The block time is a parameter that can't be fooled, so the network will give you a more difficult problem to solve, and now you must find a solution (number 'C') that has three zeros at the end. When another 100 friends start to mine, the network will require you to submit a solution with ten zeros at the end. If some of these guys leave, the network will reduce the difficulty of the problem to keep the block time constant.

What is Mining Luck?

We have already talked about luck before. In the article called Solo Mining Pools – How to Catch Your Luck, we gave the following definition of mining luck: Let's imagine you are rolling a die and you need to get 6. In a perfect world, if you roll it many times, the number 6 should appear in 16.67% of cases, i.e., every sixth time (since the die has six faces), right? In real life, you can get lucky, and the number 6 will appear a few times in a row if you experiment.

The process of searching for a solution in mining is equivalent to rolling dice, even though it sounds strange. You are competing with the whole world, but the point doesn't change. Let's say you have one video card, and your friend has a 6-GPU mining rig; this is equivalent to you having one die and him having six dice. You each roll your dice once and try to get a six. Apparently, your friend has many more (six times more) chances of getting a six, but it doesn't mean you can't win. Let's suppose that the reward for one block is $70. You can unite with your friend and find the block together, and divide the gains in a fair way – you get $10, and his part is $60.
Or you can search for the block on your own, and then you get the whole $70 for yourself for the found block. In a perfect world, it would take seven times more time than if you cooperated with your friend (you hold one of the seven dice), but our world isn't ideal.

To many, this explanation appeared to be challenging to understand, so let's try and put it differently. Miners play a guessing game. The more hashrate a miner has (MH/s), the more guesses per second he can make at the block solution. And it's the same for all the blocks. For example, you have 100 MH/s, while your friend has 500 MH/s. If you compete with each other, it's only logical to assume that your friend will outplay you, with you winning once and your friend winning five times. That is in an ideal world. In reality, even if it's neck-and-neck between you two, you could win. This just means that you're in luck, while your friend apparently is not.

Mining Pool Luck

Let's look at the extract from the 2Miners FAQ: Mining is probabilistic by nature: if you find a block earlier than you statistically should on average, you are lucky; if it takes longer, you are unlucky. In a perfect world, the pool would find every block at exactly the 100% Luck value. Less than 100% means the pool was lucky. More than 100% signifies the pool was unlucky.

Let's take Bitcoin Gold (BTG): btg.2miners.com. Its block time is 10 minutes, which means that an average of 144 blocks is found daily. At present, the network hashrate is 3 MS/s. If the pool hashrate is 1 MS/s, then on average it should find every 3rd block (48 blocks a day). When the pool finds 55 blocks a day, it's in luck; when it finds 40 blocks a day, it's not. No reason to worry though – it will get lucky another day. As a rule, pool Luck always tends toward 100%, that is, toward perfection. Let's look at the pool Luck statistics of the following cryptocurrencies: Ethereum, Zcoin, and Ravencoin. As you can see, average Luck is always around 100%. Patience is essential for a miner. It's hard to accept, but it's true.
If you patiently continue to mine, your Luck will tend to 100%. We often see rookie miners worried about Luck being too high. They change the mining pool; they switch the coin they mine, etc. However, they just need some patience.

Solo Mining Pool Luck

The same principles apply to a solo mining pool. No matter if you have just 1 GPU, a mining rig, a warehouse full of mining rigs, or if you rent hashpower on Nicehash, you deal with Luck. Each miner on the 2Miners solo mining pool has a detailed statistics page. It shows not only the pool Luck but also the personal Luck value for each block found by the miner.

My Luck Value is Reducing. How is That Possible?

Luck should always increase until the block solution is found; then it drops to 0%. Sometimes miners see that their Luck value has decreased – say, it was 95% and then it turns to 85%. Luck is calculated using the number of shares the miners sent to the pool and the network difficulty. If the network difficulty changes, your Luck value changes as well: if the difficulty has increased, your Luck value can decrease. However, this doesn't mean that you'll suddenly get lucky.

Mining Pool Myths

1. Everyone becomes frustrated when pool Luck is more than 200-300%, saying something like "the pool sucks" or "the node is not synced" and believing it is the end of the world. But when the same pool finds twice as many blocks a day as in an ideal world, how come everyone is suddenly so silent?

2. Many beginning miners think that the pool spends a lot of time calculating the solution of a single block, and they get worried when it takes too long to find one, blaming it all on the pool. Just remember that all the rigs around the world play a guessing game with the same block.

3. "If I switch off my mining rig today – will the rig continue calculating the solution of my block when it is back online tomorrow?" No.
Please remember that as soon as a new block is found in the cryptocurrency network, all the miners in the world start to work on the next block. There is no progress bar.

4. "The pool finds a block when Luck is 100%." That is not true. The pool finds blocks at 100% Luck on average, if we take into account many blocks, for example, 1000 blocks. It is not possible to predict the Luck of the next block.

5. "It is better to connect to the pool when the current Luck is high, let's say 150%. In this case, the pool will find a block soon." That is false. It does not matter at which particular moment you connect to the pool. You cannot predict when a block will be found.

6. "My mining rigs were working for the whole day, but there were no blocks on the pool. Is this work wasted?" The pool was not lucky. Please be patient. If we take into account several days, on average the pool Luck always tends to 100%.
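The dice analogy can be made concrete with a short simulation. This is an illustrative sketch, not pool code, and "difficulty" here is just a stand-in number: each share is an independent guess that succeeds with probability 1/difficulty, so the number of shares needed per block is geometrically distributed, and averaged over many blocks it approaches the difficulty — which is exactly why Luck tends to 100%:

```python
import random

def shares_until_block(difficulty, rng):
    """Guess until one share 'hits'; each guess succeeds with prob 1/difficulty."""
    shares = 0
    while True:
        shares += 1
        if rng.random() < 1.0 / difficulty:
            return shares

rng = random.Random(42)          # fixed seed for reproducibility
difficulty = 500
blocks = 5000
avg_shares = sum(shares_until_block(difficulty, rng) for _ in range(blocks)) / blocks

# Luck as defined in the FAQ quoted above: actual work done vs. expected.
# Below 100% = lucky (blocks found with fewer shares than expected).
luck = 100.0 * avg_shares / difficulty
print(luck)                      # statistically close to 100
```

Run it for a handful of blocks instead of 5000 and the value swings wildly — the same reason a single day of pool statistics says nothing about the pool.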
Spatial references, coordinate systems, projections, datums, ellipsoids – confusing?

People often mix up the above as if they were one and the same, so here's a recap of them. One of the things you often find people saying is that "my data is in the WGS84 coordinate system". This doesn't really make sense, but I will get back to this later. This is a very confusing subject, and I might have gotten a few things wrong myself, so please add a comment and I'll update it ASAP.

Coordinate systems

A coordinate system is, simply put, a way of describing a spatial property relative to a center. There is more than one way of doing this:

• The Geocentric coordinate system is based on a normal (X,Y,Z) coordinate system with the origin at the center of the Earth. This is the system that GPS uses internally for doing its calculations, but since it is very impractical to work with as a human being (due to the lack of the well-known concepts of east, north, up and down), it is rarely displayed to the user but converted to another coordinate system.

• The Spherical or Geographic coordinate system is probably the most well-known. It is based on angles relative to a prime meridian and the Equator, usually given as longitude and latitude. Heights are usually given relative to either the mean sea level or the datum (I'll get back to the datum later).

• The Cartesian coordinate system is defined as a "flat" coordinate system placed on the surface of the Earth. In some projections it's not flat in the sense that it follows the earth's curvature in one direction and has a known scale error in the other direction, relative to the distance from the origin. The most well-known coordinate system of this kind is the Universal Transverse Mercator (UTM), but surveyors define their own little local flat coordinate systems all the time. It is very easy to work with and fairly accurate over small distances, making measurements such as length, angle and area very straightforward.
Cartesian coordinate systems are strongly connected to projections, which I will cover later. Sidenote: The geocentric coordinate system is strictly speaking a cartesian coordinate system too, but these are the general terms I've seen used the most when talking about world coordinate systems.

Datums and ellipsoids

Some common properties of the above coordinate systems are that they are all relative to the center of the Earth and, except for the geocentric coordinate system, use a height system relative to the surface of the earth. This poses two immediate problems:

• Where is the center of the earth?
• What is the shape of the earth? (It is not flat, although there are still some who doubt it.)

If we define the surface of Earth as being at the mean sea level (often referred to as the Geoid), we don't get a spheroid or even an ellipsoid. Because of gravitational variations, often caused by large masses such as mountain ranges, Earth is actually very irregular, with variations of +/- 100 meters. Since this is not very practical to work with as a model of the earth, we usually use an ellipsoid as an approximation. The ellipsoid is defined by its semi-major axis, and either the flattening of the ellipsoid or the semi-minor axis.

The center and orientation of the ellipsoid is what we call the datum. So the datum defines an ellipsoid and, through the use of a set of points on the ground that we relate to points on the ellipsoid, defines the center of the Earth. This poses another problem, because continental drift moves the points used to define the datum around all the time. This is why the name of a datum usually has a year in it, often referring to the position of those points on January 1st of that year (although that may vary). There is a vast number of datums; some are used for measurements all over the world, while other, local datums are defined to fit very well with a local area. Some common ones are: World Geodetic Datum 1984 (WGS84), European Datum 1950 (ED50) and North American Datum 1983 (NAD83).
The most well-known is WGS84, used by GPS systems today. It is a good approximation of the entire world, with fix-points defined almost all over the world. When it was defined, they forgot to include points in Europe though, so the Europeans now have their own ETRS89, which is usually referred to as the "realization of WGS84 in Europe". The problem here was solely because of continental drift, so they defined some points relative to WGS84 in 1989 and keep track of the changes. In most use cases it is of no real importance and you can use one or the other.

I mentioned earlier that people often refer to having their data "in WGS84", and you see now why this doesn't make sense. All you know from that is that the data is defined using the WGS84 datum; you don't know which coordinate system it uses. Read more on Datums and Spheroids.

Projections

The earth isn't flat, and there is no simple way of putting it down on a flat paper map (or, these days, onto a computer screen), so people have come up with all sorts of ingenious solutions, each with their pros and cons. Some preserve area, so all objects have a relative size to each other; others preserve angles (conformal), like the Mercator projection; some try to find a good intermediate mix with only little distortion on several parameters; etc. Common to them all is that they transform the world onto a flat Cartesian coordinate system, and which one to choose depends on what you are trying to show.

A common statement that I hear in GIS is the following: "My map doesn't have a projection", but this is simply not possible (unless you have a good old rotating globe). Often people are referring to data that is in longitude/latitude and displayed on a map without having specified any projection. What happens is that the system applies the simplest projection it can: mapping longitude directly to X and latitude to Y. This results in an equirectangular projection, also called the "Plate Carree" projection.
It results in very heavy distortion, making areas look squashed close to the poles. You can almost say that the "opposite" of the Plate Carree is the Mercator projection, which stretches areas close to the poles in the opposite direction, making them look very big. Mercator is the type of projection you see used on Live Maps and Google Maps, but as many often mistakenly think, they do NOT use WGS84 for the projected map, although WGS84 is used when you directly input longitude/latitude values using their API (read more on this here). More on projected coordinate systems.

Spatial reference

The spatial reference is a combination of all the above. It defines an ellipsoid, a datum using that ellipsoid, and either a geocentric, geographic or projected coordinate system. The projection also always has a geographic coordinate system associated with it. The European Petroleum Survey Group (EPSG) has a huge set of predefined spatial references, each given a unique ID. These IDs are used throughout the industry, and you can download an Access database with all of them from their website, as well as some very good documents on projection (or see the Spatial References website).

So when you hear someone saying they have their data in WGS84, you can often assume that they have longitude/latitude data in WGS84, projected using Plate Carree. The spatial reference ID of this is EPSG:4326. Spatial references are often defined in a well-known text format specifying all these parameters. The spatial reference EPSG:4326 can therefore also be written as:

GEOGCS["WGS 84",
    DATUM["WGS_1984",
        SPHEROID["WGS 84", 6378137, 298.257223563]],
    PRIMEM["Greenwich", 0],
    UNIT["degree", 0.0174532925199433]]

As mentioned, Live/Google Maps use a Mercator projection, but although their datum is based on WGS84, they use a sphere instead of an ellipsoid. This means that they use the same center and orientation as WGS84, but without applying any flattening.
The spatial reference string for their projection therefore becomes:

PROJCS["Mercator Spheric",
    GEOGCS["WGS84based_GCS",
        DATUM["WGS84based_Datum",
            SPHEROID["WGS84based_Sphere", 6378137, 0],
            TOWGS84[0, 0, 0, 0, 0, 0, 0]],
        PRIMEM["Greenwich", 0, AUTHORITY["EPSG", "8901"]],
        UNIT["degree", 0.0174532925199433, AUTHORITY["EPSG", "9102"]],
        AXIS["E", EAST],
        AXIS["N", NORTH]],
    PROJECTION["Mercator"],
    PARAMETER["False_Easting", 0],
    PARAMETER["False_Northing", 0],
    PARAMETER["Central_Meridian", 0],
    PARAMETER["Latitude_of_origin", 0],
    UNIT["metre", 1, AUTHORITY["EPSG", "9001"]],
    AXIS["East", EAST],
    AXIS["North", NORTH]]
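The "sphere instead of an ellipsoid" choice shows up directly in the forward projection formula. The sketch below is illustrative only (it is not any vendor's actual code): it projects longitude/latitude onto the spherical Mercator plane, using the WGS84 semi-major axis as the sphere radius with no flattening applied:

```python
import math

# Radius of the sphere: the WGS84 semi-major axis, flattening ignored.
R = 6378137.0

def lonlat_to_spherical_mercator(lon_deg, lat_deg):
    """Forward spherical Mercator: degrees -> metres on the projected plane."""
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

# x spans about +/- 20037 km at the antimeridian; y grows without bound
# toward the poles, which is why Mercator maps are cut off at high latitudes.
print(lonlat_to_spherical_mercator(180, 0))
```

Using the true ellipsoidal Mercator instead would change the y formula (it gains a term in the eccentricity), which is exactly why feeding WGS84 ellipsoidal coordinates into a spherical-Mercator map shifts features slightly in latitude.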
NCERT Solutions for Class 1 Maths Joyful Chapter 3 Mango Treat

NCERT Solutions for Class 1 Maths Joyful Chapter 3 Mango Treat (Numbers 1 to 9) in Hindi and English Medium, revised and updated for academic session 2024-25, CBSE and State board. Students of Class 1 can get here the question answers of Joyful Maths chapter 3 in an easy-to-learn format.

Class 1 Maths Joyful Chapter 3 Mango Treat

Concept of Class 1 Joyful Maths Chapter 3 Mango Treat

The chapter "Mango Treat" from the Class 1 Maths textbook named "Joyful" is designed to introduce young learners to basic mathematical concepts through a relatable and engaging story. This chapter, like the others in the textbook, aims to make learning Maths a fun and enjoyable experience for first graders, leveraging familiar subjects and everyday situations to teach foundational Maths skills. The narrative of "Mango Treat" revolves around a group of children and their encounter with a mango tree full of ripe mangoes. The story begins with the children's excitement upon discovering the mango tree, setting a joyful tone that aligns with the textbook's mission to make learning delightful. As the story unfolds, the children decide to collect mangoes, providing a natural context for introducing mathematical concepts such as counting, addition, and subtraction.

Using Mangoes to Teach Mathematics

Class 1 Joyful Maths chapter 3 cleverly uses the mangoes as manipulatives for teaching math. For instance, as the children collect mangoes, the readers are encouraged to count along with them, turning an abstract mathematical concept into a tangible and interactive experience. This hands-on approach helps children grasp the idea of numbers and quantity, making it easier for them to understand and remember. Moreover, "Mango Treat" introduces the concept of addition as the children combine their mangoes to see how many they have in total.
This part of the story is designed to teach children how to add numbers in a practical context, emphasizing the usefulness of math in everyday life. The narrative encourages participation, asking readers to add the numbers themselves, which reinforces the learning through practice.

Introduction of Subtraction in a Natural Way

Subtraction is also introduced in a natural and engaging way. After the children decide to eat some of the mangoes, the story guides the readers through the process of subtracting the eaten mangoes from the total count. This segment not only teaches subtraction but also introduces the idea of sharing and fairness among the children, integrating social lessons with mathematical learning. The "Mango Treat" chapter is not just about numbers; it also aims to enhance problem-solving skills. As the children face the challenge of dividing the mangoes equally, the readers are invited to think along and find solutions to the problems presented in the story. This aspect of the chapter fosters critical thinking and encourages children to apply their mathematical knowledge in solving real-life problems.

Colorful Illustrations for Better Learning

To make the learning process even more engaging, the textbook includes colorful illustrations depicting the children, the mango tree, and the mangoes. These visuals not only make the chapter more attractive but also aid in understanding by providing visual cues that complement the text. The illustrations serve as a tool for visual learners, making it easier for them to grasp the mathematical concepts being taught. At the end of chapter 3, "Mango Treat" includes interactive exercises and games related to the story. These activities are designed to reinforce the mathematical concepts introduced in the chapter, providing children with additional practice in a fun and engaging way.
The exercises vary in difficulty, catering to the diverse learning paces of first graders and ensuring that every child can participate and learn at their own pace. “Mango Treat” exemplifies the “Joyful” textbook’s approach to teaching math by intertwining educational content with stories that resonate with young learners. Through its narrative, the chapter effectively introduces basic mathematical concepts such as counting, addition, and subtraction, making these abstract ideas accessible and engaging. The integration of mathematical learning with a story about sharing, teamwork, and problem-solving also instills valuable social skills in the readers. By using familiar scenarios and interactive elements, “Mango Treat” not only teaches Maths but also fosters a love for learning among first graders.
Determine the missing value in the table. – Turito

Determine the missing value in the table.

A. 15
B. 3
D. 9

The constant of proportionality is the constant value of the ratio between two proportional quantities.

The correct answer is: 15

Here, we have to find the missing number. Clearly, this is a constant-of-proportionality problem. To check the constant of proportionality, we use y = kx: the given ratios are 6/2 = __/5 = 21/7 = 27/9. Clearly, k = 3 and x = 5, so y = 5 × 3 = 15. Hence, the missing value is 15.

When comparing two or more ratios, the constant of proportionality is a fixed number that indicates the rate at which the ratios increase or decrease.
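The check in the worked solution can be written as a few lines of code. This is just an illustrative verification; the (x, y) pairs below are taken from the ratios that are clearly legible in the solution (21/7 and 27/9):

```python
# Verify the constant of proportionality k from the legible (x, y) pairs,
# then compute the missing y for x = 5 using y = k * x.
pairs = [(7, 21), (9, 27)]
ratios = {y / x for (x, y) in pairs}
assert len(ratios) == 1          # all pairs share the same k
k = ratios.pop()                 # k = 3.0
missing_y = k * 5
print(missing_y)                 # 15.0
```

The set comprehension is a compact way to confirm that every pair gives the same ratio, which is exactly what "constant of proportionality" means.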
Create constant-velocity extended Kalman filter from detection report

filter = initcvekf(detection) creates and initializes a constant-velocity extended Kalman filter from information contained in a detection report. For more information, see Algorithms and trackingEKF.

Initialize 3-D Constant-Velocity Extended Kalman Filter

Create and initialize a 3-D constant-velocity extended Kalman filter object from an initial detection report. Create the detection report from an initial 3-D measurement, (10,20,-5), of the object position.

detection = objectDetection(0,[10;20;-5],'MeasurementNoise',1.5*eye(3), ...
    'SensorIndex',1,'ObjectClassID',1,'ObjectAttributes',{'Sports Car',5});

Create the new filter from the detection report.

filter = initcvekf(detection)

filter =
  trackingEKF with properties:
                          State: [6x1 double]
                StateCovariance: [6x6 double]
             StateTransitionFcn: @constvel
     StateTransitionJacobianFcn: @constveljac
                   ProcessNoise: [3x3 double]
        HasAdditiveProcessNoise: 0
                 MeasurementFcn: @cvmeas
         MeasurementJacobianFcn: @cvmeasjac
         HasMeasurementWrapping: 1
               MeasurementNoise: [3x3 double]
    HasAdditiveMeasurementNoise: 1
                MaxNumOOSMSteps: 0
                EnableSmoothing: 0

Show the filter state.

Show the state covariance.

ans = 6×6
    1.5000         0         0         0         0         0
         0  100.0000         0         0         0         0
         0         0    1.5000         0         0         0
         0         0         0  100.0000         0         0
         0         0         0         0    1.5000         0
         0         0         0         0         0  100.0000

Create 3-D Constant Velocity EKF from Spherical Measurement

Initialize a 3-D constant-velocity extended Kalman filter from an initial detection report made from a 3-D measurement in spherical coordinates. If you want to use spherical coordinates, then you must supply a measurement parameter structure as part of the detection report with the Frame field set to 'spherical'. Set the azimuth angle of the target to 45 degrees, the elevation to -10 degrees, the range to 1000 meters, and the range rate to -4.0 m/s.

frame = 'spherical';
sensorpos = [25,-40,0].';
sensorvel = [0;5;0];
laxes = eye(3);
measparms = struct('Frame',frame,'OriginPosition',sensorpos, ...
    'OriginVelocity',sensorvel,'Orientation',laxes,'HasVelocity',true, ...
    'HasRangeRate',true);
meas = [45;-10;1000;-4];
measnoise = diag([3.0,2.5,2,1.0].^2);
detection = objectDetection(0,meas,'MeasurementNoise',measnoise, ...
    'MeasurementParameters',measparms)

detection =
  objectDetection with properties:
                     Time: 0
              Measurement: [4x1 double]
         MeasurementNoise: [4x4 double]
              SensorIndex: 1
            ObjectClassID: 0
    ObjectClassParameters: []
    MeasurementParameters: [1x1 struct]
         ObjectAttributes: {}

filter = initcvekf(detection);

Filter state vector.

Input Arguments

detection — Detection report
objectDetection object

Detection report, specified as an objectDetection object.

Example: detection = objectDetection(0,[1;4.5;3],'MeasurementNoise',[1.0 0 0; 0 2.0 0; 0 0 1.5])

Output Arguments

filter — Extended Kalman filter
trackingEKF object

Extended Kalman filter, returned as a trackingEKF object.

Algorithms

• The function initializes a trackingEKF object with a constvel motion model and a cvmeas measurement model. The state of the filter is defined as [x; v[x]; y; v[y]; z; v[z]], in which x, y, z are position coordinates and v[x], v[y], v[z] are the corresponding velocities.

• The detection input can be an objectDetection object of a Cartesian or spherical measurement:

  □ For a Cartesian measurement,

    ☆ By default, the function assumes the measurement is a 3-D position measurement ([x; y; z]). The function uses the measurement to initialize the position state of the filter and sets the velocity state to 0. Similarly, the function uses the position components of the measurement noise matrix in the detection as the position components of the state error covariance matrix and sets the velocity components of the state error covariance matrix to 100 m^2/s^2.

    ☆ You can also use a 6-D detection input ([x; y; z; v[x]; v[y]; v[z]]) by specifying the MeasurementParameters property of the objectDetection object. Specify the HasVelocity field of the measurement parameter structure as true so that the initcvekf function can recognize the 6-D measurement.
      In this case, the state and state error covariance matrix of the filter are the same as the measurement and measurement noise matrix of the detection, respectively.

  □ For a spherical measurement, you must specify the Frame field in the MeasurementParameters property of the objectDetection object as "Spherical". Also, use the MeasurementParameters property to specify whether the detection has azimuth, elevation, range, and range rate. A full spherical measurement has four elements [az, el, r, rr], representing azimuth in degrees, elevation in degrees, range in meters, and range rate in meters per second, respectively. Some of the four elements can be missing.

    ☆ If the detection has elevation, the function uses the elevation measurement and its covariance to construct the filter state and state error covariance after performing a coordinate transformation from the spherical frame to the Cartesian frame. Without elevation, the function sets the elevation to 0 and sets its covariance to 180^2/12 deg^2 before performing the coordinate transformation.

    ☆ If the detection has range rate, the function uses the range-rate measurement and its covariance to construct the filter state and state error covariance. The function also assumes the velocity covariance of the cross-range direction is 100 m^2/s^2. Without range rate, the function sets the velocity states of the filter to 0 and their corresponding covariances to 100 m^2/s^2.

  □ The function sets all the cross-components (for example, between position and velocity) of the state error covariance matrix to 0.

  □ You can use other fields of the MeasurementParameters property of an objectDetection object, such as OriginPosition and OriginVelocity, to further specify the measurement coordinates.

• The function models the process noise as non-additive and computes the process noise matrix assuming an acceleration standard deviation of 1 m/s^2.

• The measurement noise matrix in the initialized filter is the same as that in the detection.
• You can use this function as the FilterInitializationFcn property of a tracker object, such as a trackerGNN object.

Extended Capabilities

C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.

Version History
Introduced in R2018b
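For readers outside MATLAB, the default Cartesian initialization described in the Algorithms section can be sketched in a few lines. This is an illustrative re-implementation (not MathWorks code) of the stated defaults only: position states come from the 3-D measurement, velocity states start at 0 with an assumed variance of 100 m^2/s^2, and the position-velocity cross terms are zero:

```python
import numpy as np

# Sketch of the default Cartesian initialization of a constant-velocity
# EKF state. State ordering follows the doc above: [x, vx, y, vy, z, vz].
def init_cv_state(meas_xyz, meas_noise, vel_var=100.0):
    state = np.zeros(6)
    state[0::2] = meas_xyz                 # positions at indices 0, 2, 4
    cov = np.zeros((6, 6))
    cov[0::2, 0::2] = meas_noise           # position covariance block
    cov[1::2, 1::2] = vel_var * np.eye(3)  # velocity covariance block
    return state, cov

# Mirrors the first example above: measurement (10, 20, -5), noise 1.5*eye(3).
state, cov = init_cv_state([10.0, 20.0, -5.0], 1.5 * np.eye(3))
print(state.tolist())          # [10.0, 0.0, 20.0, 0.0, -5.0, 0.0]
print(np.diag(cov).tolist())   # [1.5, 100.0, 1.5, 100.0, 1.5, 100.0]
```

The diagonal of `cov` reproduces the state covariance printed in the first example, which is the quickest sanity check that the interleaved state ordering is handled correctly.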
Kernel charge equilibration: efficient and accurate prediction of molecular dipole moments with a machine-learning enhanced electron density model (2024) State-of-the-art machine learning (ML) interatomic potentials use local representations of atomic environments to ensure linear scaling and size-extensivity. This implies a neglect of long-range interactions, most prominently related to electrostatics. To overcome this limitation, we herein present a ML framework for predicting charge distributions and their interactions termed kernel charge equilibration (kQEq). This model is based on classical charge equilibration (QEq) models expanded with an environment-dependent electronegativity. In contrast to previously reported neural network models with a similar concept, kQEq takes advantage of the linearity of both QEq and Kernel Ridge Regression to obtain a closed-form linear algebra expression for training the models. Furthermore, we avoid the ambiguity of charge partitioning schemes by using dipole moments as reference data. As a first application, we show that kQEq can be used to generate accurate and highly data-efficient models for molecular dipole moments. Kernel and neural network (NN) based machine-learning (ML) methods have in recent years become established as an essential addition to the toolbox of computational chemistry [1–4]. In particular, ML-based interatomic potentials have had great success in providing energies and forces with quantum mechanical accuracy at a fraction of the cost of first-principles calculations [1, 5–10]. To achieve size-extensivity and a linear computational scaling with system size, these ML potentials typically rely on a local representation of atomic environments and consequently assume that the energy can be decomposed into local atomic contributions [9, 11]. 
This simple idea has led to a strong focus of chemical ML research on developing sophisticated representations of local atomic environments and, relatedly, NN architectures that directly embed atoms in their neighborhood [11–14, 14–18]. At the same time, it is clear that the assumption of locality does not hold for all systems to the same extent [19]. Indeed, strongly polar or ionic systems display very long-ranged Coulomb interactions. Even for a fairly nonpolar (e.g. organic) system, the locality of the energy does not necessarily imply that other electronic properties are similarly local. In particular, electronic properties such as molecular orbital energies or dipole moments can break locality assumptions [20]. Consequently, such properties tend to be more challenging to predict with purely data-driven ML methods [16, 20]. Beyond this methodological challenge, dipole moments are an intrinsically interesting target as they govern the asymptotic decay of interactions of neutral molecules and their absorption cross-sections in vibrational spectroscopy. A promising route to overcome the limitations of local ML models is to include known physical interactions explicitly [19, 21, 22]. For example, a description of long-range electrostatics can be obtained by learning atom-centered charge distributions (e.g. atomic charges, dipoles or partitioned electron densities) [19, 23–28]. A prominent recent example of this is the MuML dipole model of Veit et al, which uses atomic charges and atom-centered dipoles to predict molecular dipole moments [20]. This idea takes advantage of the fact that the charge distributions around atoms can be predicted with reasonable accuracy from local environments, even if their interactions are long-ranged. While this solves some of the issues of local interatomic potentials, there are also significant downsides: firstly, charge conservation of the overall system is generally not ensured, and secondly, non-local charge transfer (e.g.
through conjugated π-systems) is not captured [29]. These issues can be fundamentally addressed by switching the target of the ML model: instead of predicting the charge distribution directly, one can predict a charge-dependent energy expression. The charge distribution is then obtained by minimizing this energy expression under the constraint that the charge is conserved. This idea is closely related to classical charge equilibration approaches like QEq [30]. In this manner, charge conservation is rigorously ensured, the description of non-local charge transfer is enabled and a simple route to analytical derivatives is provided through a Hellmann-Feynman-like approach. The advantage of this approach, compared to directly predicting the charge density, can perhaps be understood in analogy to the choice of initial guess in density functional theory (DFT) calculations: while it is common practice to construct the initial guess from a superposition of atomic densities, it has been found that the superposition of atomic potentials yields a significantly improved starting point [31]. So far, only a few examples of ML-based charge equilibration models have been reported, however. Most notably, Goedecker and co-workers applied a NN-based QEq model to ionic crystals [32, 33]. The corresponding models were trained to predict the energies and forces of reference DFT calculations, using the corresponding partial charges merely as an intermediate quantity. More recently, Behler, Goedecker and co-workers combined this approach with local NN potentials for the description of organic molecules and MgO surfaces [29, 34]. Here, the charge equilibration models were trained on partial charges from reference DFT calculations. Finally, Xie, Persson and Small applied a more flexible charge-dependent NN to describe lithium hydride nanoparticles, using training data from constrained DFT calculations [35].
Herein, we present a new kernel-based approach to charge equilibration termed kernel charge equilibration (kQEq). These models are directly trained on molecular dipole moments and thus avoid the ambiguity associated with choosing population analysis or projection approaches required in other methods. A closed-form linear algebra expression for training kQEq models is derived and their accuracy is benchmarked on the prediction of molecular dipole moments. Finally, limitations and possible extensions are discussed.

2.1.Charge equilibration

Different conventional (i.e. non-ML) charge equilibration and electronegativity equalization methods have been proposed in the literature [30, 36–45]. In the derivation of the charge equilibration approach we largely follow the formalism of Goedecker and coworkers [29, 32–34], which is in turn based on the QEq method of Rappé and Goddard [30, 46]. In this context, QEq can be understood as a kind of semi-empirical, orbital-free DFT, where the electron density is expanded as:

ρ(r) = ρ[0](r) + δρ(r),    (1)

where ρ[0] is a reference density (here the superposition of isolated atom densities) and δρ is a fluctuation term, which describes charge transfer and polarization in the interacting system. We expand δρ into a linear combination of normalized 1s Gaussians g[A] centered at the atomic positions r[A] and of width α[A]:

δρ(r) = −Σ_{A=1…N} q[A] g[A](r),    (2)

where N is the number of atoms and the q[A] are the expansion coefficients. Note that we use the negative of the expansion coefficients q[A] here, so that these can directly be interpreted as atomic partial charges. With this approximation, the electron density is completely defined via the charges q[A] and it remains to find their optimal values. To this end, a simple form of the charge-dependent electrostatic energy is assumed:

E[δρ] = E[0] + Σ_A (χ[A] q[A] + ½ η[A] q[A]²) + ½ ∫∫ δρ(r) δρ(r′) / |r − r′| dr dr′.    (3)

Here, E[0] is a charge-independent reference energy, which we set to zero throughout.
The second term (labeled 'Site-Energy') is the well-known second-order Taylor expansion of the atomic energy with respect to the charge, with the atomic electronegativity χ[A] and the hardness η[A] [47]. It provides the energetic contribution incurred by adding or removing electron density from a given atom. The third term (labeled 'Coulomb-Integral') is the classical Coulomb potential of the fluctuation density δρ. This integral can be computed analytically, using the definition of δρ (see equation (2)):

∫∫ g[A](r) g[B](r′) / |r − r′| dr dr′ = erf(r[AB] / (√2 γ[AB])) / r[AB],  γ[AB] = √(α[A]² + α[B]²),    (4)

with the self-interaction (A = B) equal to 1/(α[A]√π). This allows rewriting equation (3) to:

E(q) = Σ_A (χ[A] q[A] + ½ (η[A] + 1/(α[A]√π)) q[A]²) + ½ Σ_{A ≠ B} q[A] q[B] erf(r[AB] / (√2 γ[AB])) / r[AB],    (5)

which makes it explicit that E only depends on the charges q[A]. We may therefore equivalently use the notation E(q). Note that this equation has the familiar form of the original QEq formulation, with the slight difference that the hardness parameter in QEq implicitly includes the electrostatic idempotential 1/(α[A]√π), whereas here η[A] only describes the non-classical (e.g. exchange-correlation) contributions to the hardness. Given the definitions of ρ and E(q), we now search for the density that minimizes the energy functional under the constraint that the total number of electrons is conserved. From the definition of δρ, it can be seen that this is equivalent to the constraint that Σ_A q[A] = Q[tot], with the total system charge Q[tot]. This can be achieved by defining the Lagrangian

L(q, λ) = E(q) + λ (Σ_A q[A] − Q[tot]).    (6)

The constrained minimization of the charges can then be performed by setting up a linear system of equations so that:

∂L/∂q[A] = χ[A] + Σ_B H[AB] q[B] + λ = 0,  ∂L/∂λ = Σ_A q[A] − Q[tot] = 0,    (7)

with the elements of the hardness matrix H defined as:

H[AB] = η[A] + 1/(α[A]√π) for A = B,  H[AB] = erf(r[AB] / (√2 γ[AB])) / r[AB] for A ≠ B.    (8)

In matrix notation, this linear system can be formulated as:

H̄ q̄ = x̄,    (9)

with the hardness matrix H̄ (H bordered by a row and column of ones and a zero corner element), the charge vector q̄ = (q, λ) and the electronegativity vector entering the right-hand side x̄ = (−c, Q[tot]). Here, the bar-notation is used to indicate that these arrays are extended by one dimension due to the Lagrange multiplier. The corresponding N-dimensional sub-arrays are indicated by H, q and c in the following.
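As an illustration, the bordered linear system described above can be solved directly with NumPy. This is a minimal sketch, not the authors' implementation: the function name and all parameter values are hypothetical, and the Gaussian-interaction convention follows the standard QEq form with erf-screened Coulomb terms.

```python
import numpy as np
from math import erf, sqrt, pi

def qeq_charges(positions, chi, eta, alpha, q_tot=0.0):
    """Solve the constrained QEq problem for atomic partial charges.

    Builds the hardness matrix for Gaussian-smeared charges, borders it
    with a row/column of ones for the Lagrange multiplier enforcing
    charge conservation, and solves the resulting linear system.
    All input parameters (chi, eta, alpha) are hypothetical.
    """
    n = len(chi)
    H = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a == b:
                # non-classical hardness plus the Gaussian self-interaction
                H[a, a] = eta[a] + 1.0 / (alpha[a] * sqrt(pi))
            else:
                r = np.linalg.norm(np.asarray(positions[a]) - np.asarray(positions[b]))
                gamma = sqrt(alpha[a] ** 2 + alpha[b] ** 2)
                H[a, b] = erf(r / (sqrt(2.0) * gamma)) / r
    # bordered system: stationarity w.r.t. the charges plus the constraint
    Hbar = np.block([[H, np.ones((n, 1))],
                     [np.ones((1, n)), np.zeros((1, 1))]])
    rhs = np.concatenate([-np.asarray(chi, dtype=float), [q_tot]])
    sol = np.linalg.solve(Hbar, rhs)
    return sol[:n]  # the last entry of sol is the Lagrange multiplier
```

By construction, the returned charges sum to the requested total charge, and the more electronegative atom acquires the negative partial charge.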
2.2.Kernel charge equilibration

The QEq approach as outlined above only requires three parameters, namely the electronegativity (χ[A]), the non-classical contribution to the hardness (η[A]) and the atomic size (α[A]) for each species in the system. As a flipside of this elegant simplicity, the accuracy and transferability of the QEq method is limited, however. In the kQEq method proposed herein, we follow the basic idea of Goedecker and coworkers to overcome this limitation [32]. This is achieved by allowing the electronegativity of an atom to change as a function of its chemical environment. Importantly, taking advantage of the fact that both QEq and kernel ridge regression (KRR) are formulated as linear problems, we obtain a closed-form expression for training these models. The environment-dependent electronegativity in kQEq is defined via a kernel regression Ansatz as:

χ[A] = Σ_{B=1…N[train]} k(p[A], p[B]) w[B],    (11)

where k is a kernel function, p[A] is a representation vector that encodes the chemical environment of atom A, w[B] is a regression coefficient and N[train] is the number of atoms in the training set. We use the SOAP kernel and representation vector, which are widely used in the construction of interatomic potentials and as descriptors of local environments [18]. We refer to the original literature and a recent review for a detailed account of the corresponding theory and implementation [1, 11, 48]. In general, the kernel function measures the similarity of chemical environments and is defined as:

k(p[A], p[B]) = (p[A]·p[B] / (|p[A]| |p[B]|))^ζ.    (12)

To derive an expression for the regression coefficients w, we begin by noting that the prediction of charges with QEq can be expressed as a matrix multiplication of the matrix A with the vector of electronegativities c:

q = −A c,    (13)

where A is the N-dimensional submatrix of H̄⁻¹.
Using equation (11), c can also be written in terms of a matrix-vector multiplication:

c = K w,    (14)

so that

q = −A K w.    (15)

Here, K is the kernel matrix (quantifying the similarity between the atoms in the system of interest and the atoms in the training set) and w is the vector of regression coefficients. There are in principle several options for defining the 'optimal' regression coefficients. One could, e.g. fit them to partial charges obtained from some partitioning of the DFT density. However, the choice of a partial charge model is necessarily somewhat arbitrary and does not guarantee an accurate description of electrostatic interactions. We therefore instead use molecular dipole moments µ as a reference, which are unambiguously defined physical observables for finite systems and define the leading order term of molecular interactions in the long-range limit. In a first-principles calculation, the dipole moment is calculated as:

µ = ∫ r ρ[tot](r) dr,    (16)

where ρ[tot] is the total charge density (obtained from the sum of the electron and nuclear charge distributions). For a charge equilibration model like kQEq, this simplifies to

µ = R q,    (17)

using the matrix R with columns r[A]. Note that for an unambiguous definition of µ for charged systems, center-of-mass shifted coordinates are used throughout. Combining equations (17) and (15), we obtain:

µ = −R A K w    (18)

for the kQEq dipole moment. To determine the regression weights w, we then set up a regularized least-squares problem with the loss-function:

L(w) = |µ[ref] − µ(w)|² + σ wᵀ K w,    (19)

where σ is a regularization hyperparameter and the term σ wᵀ K w comes from the use of Tikhonov regularization in a kernel regression model. Note that we use the simplest form of regularization, with a single parameter σ to control for overfitting. In principle different regularization strengths could be used for each training point (e.g. proportional to the dipole magnitude). In a final step, we set ∂L/∂w = 0 to minimize L and solve for w, to obtain:

w = (Mᵀ M + σ K)⁻¹ Mᵀ µ[ref],  with M = −R A K.    (20)

The above equations are formulated for a single kQEq problem (i.e. a single molecule or system).
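The closed-form training step can be sketched in a few lines of NumPy. This is a minimal single-system illustration under notation reconstructed from the text (charges q = −AKw, dipole µ = Rq); the matrices passed in below are hypothetical placeholders, not actual SOAP/QEq quantities.

```python
import numpy as np

def train_kqeq(A, R, K, mu_ref, sigma=1e-8):
    """Closed-form kQEq regression weights for a single system (sketch).

    The predicted dipole is M @ w with M = -R @ A @ K. Minimizing
    |mu_ref - M w|^2 + sigma * w^T K w gives the regularized normal
    equations solved below. Shapes: A is (N, N), R is (3, N), and K is
    the square kernel matrix over the training atoms.
    """
    M = -R @ A @ K                 # maps regression weights to the dipole
    lhs = M.T @ M + sigma * K      # regularized normal equations
    rhs = M.T @ np.asarray(mu_ref, dtype=float)
    return np.linalg.solve(lhs, rhs)
```

Because MᵀM is low-rank (at most three dipole components constrain it), the σK term is what makes the system solvable, mirroring the role of Tikhonov regularization described above.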
In practice we train on multiple systems at once. This can still be achieved with a single linear algebra equation by using blocked matrices for A and R, and by concatenating the dipole vector elements of all training systems into a single vector. It should be noted that dipole moments in principle do not contain sufficient information to obtain a unique set of atomic partial charges. One advantage of the kQEq framework is therefore that it offers a natural way to enforce physical constraints on the partial charges. These constraints come in two forms, namely that atoms with similar chemical environments must display similar electronegativities (enforced via the kernel and regularization) and that the charges must minimize the kQEq energy expression and sum to the total charge (enforced via the QEq framework). Importantly, the set of charges that minimizes a given kQEq energy expression is unique. Up to now, an environment-dependent description of the atomic electronegativity χ[A] is defined, which can be learned from data. It remains to specify the non-classical contribution to the atomic hardness η[A] and the atomic radius α[A] for each element. Herein, we choose these by very simple heuristics: α[A] is set to be proportional to the original QEq radius of the element in question. These radii are tabulated for all elements up to Lawrencium (Z = 103) [30]. Empirically, we found that scaling these radii with a single global scaling parameter yields satisfactory results. Similarly, the non-classical hardness parameter η[A] is set to zero throughout, as we found that this yields robust models while keeping the empiricism of the method as low as possible. These choices are quite simplistic and further optimization would certainly be possible. As shown below, already these simple defaults provide highly accurate results for the investigated systems though.
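Since the SOAP kernel underlies the environment-dependent electronegativities, it is worth making its form concrete: a normalized dot product of representation vectors raised to a power. The sketch below is an illustration only; the exponent ζ and the representation vectors are placeholders, not actual SOAP power-spectrum vectors.

```python
import numpy as np

def soap_style_kernel(p_a, p_b, zeta=2):
    """Normalized dot-product kernel between two environment vectors.

    k = (p_a . p_b / (|p_a| |p_b|))**zeta, so identical environments
    give k = 1 and larger zeta sharpens the discrimination between
    dissimilar environments.
    """
    p_a = np.asarray(p_a, dtype=float)
    p_b = np.asarray(p_b, dtype=float)
    cos = p_a @ p_b / (np.linalg.norm(p_a) * np.linalg.norm(p_b))
    return cos ** zeta
```

In practice the kernel matrix K is assembled by evaluating this function for every pair of atoms (system atoms versus training atoms).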
The main hyperparameters to be considered for SOAP are the cutoff radius within which the neighborhood is expanded and the broadness of the Gaussians used to smear out the atomic positions. The choice of these lengthscales governs the range in which the environment of an atom affects its electronegativity and how sensitive it is to changes of the atomic positions. In the following, we keep the ratio between these parameters constant, so that for larger cutoffs, the atomic positions are smeared out more strongly. The idea of keeping this ratio fixed is based on the fact that the expressiveness of a given atom-centered basis set is limited by the number of basis functions, meaning that it can either provide a high-resolution picture that is short-ranged or a lower-resolution picture that is longer-ranged. Alternatively, one could increase the number of basis functions for larger cutoff radii instead, but this would lead to a significantly increased computational cost. The particular constant of proportionality we use was found to work well empirically. Note that the Dscribe implementation of SOAP is used [48]. Full details are provided in the supplementary information (available online at stacks.iop.org/MLST/3/015032/mmedia). The influence of the cutoff parameter is discussed below.

2.4.Error metrics

To quantify the performance of the kQEq models for predicting molecular dipole moments, we use two complementary metrics. On one hand, we use the mean absolute error (MAE) of predicted absolute dipole moments. This is a common measure of accuracy, which allows direct comparison with previous models. Additionally, we use the root mean squared regularized relative error (RRMSE) as used by Hait and Head-Gordon in [49]. This metric is defined as

RRMSE = √( (1/N) Σ_i ((µ[i] − µ[i,ref]) / max(µ[i,ref], 1 D))² ),

with an arbitrary threshold of 1 Debye that discriminates between small and large dipole moments.
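Both metrics can be sketched as follows; the exact RRMSE normalization is an assumption reconstructed from the description of Hait and Head-Gordon's regularized relative error, with the 1 Debye threshold in the denominator.

```python
import numpy as np

def dipole_error_metrics(mu_pred, mu_ref, threshold=1.0):
    """MAE and regularized relative RMSE for absolute dipole moments (in D).

    The relative error of each dipole is regularized by a threshold in
    the denominator, so small reference dipoles contribute an
    (effectively absolute) error instead of a blown-up relative one.
    """
    mu_pred = np.asarray(mu_pred, dtype=float)
    mu_ref = np.asarray(mu_ref, dtype=float)
    mae = np.mean(np.abs(mu_pred - mu_ref))
    rel = (mu_pred - mu_ref) / np.maximum(mu_ref, threshold)
    rrmse = np.sqrt(np.mean(rel ** 2))
    return mae, rrmse
```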
In this way, a seamless transition from absolute error (for small dipoles) to relative error (for large dipoles) is achieved, which is necessary since the pure relative error is otherwise severely distorted toward systems with small dipole moments.

3.1.Molecular dipole moments

As a first benchmark we trained kQEq models for predicting dipole moments of organic molecules. As reference data, the dipole moments of 7500 random molecules from the QM9 database were calculated at the PBE0/def2-TZVP level, using ORCA (data provided in the SI) [50–52]. This set spans a wide range of small to medium sized molecules containing the elements C, H, N, O and F. From these structures, we randomly selected a validation set (used to optimize the regularization parameter σ) and a test set of 1000 molecules each. The training sets used below were drawn from the remaining 4000 molecules. Figure 1 depicts the learning curves of different kQEq models, using a range of SOAP cutoffs. For comparison, we also fitted a conventional QEq model to the same data. All kQEq models clearly outperform the conventional QEq approach, underscoring the benefit of the additional flexibility obtained by using environment-dependent electronegativities. Furthermore, it can be seen that the kQEq models improve continuously when given more data, whereas the MAE of the conventional approach quickly saturates. The model with the smallest SOAP cutoff used here (1.7 Å) shows the best performance for small training sets but stops improving when training on larger sets. Meanwhile, the larger cutoffs we tested (2.6 and 3.5 Å) continuously improve and reach an excellent accuracy of 0.15 D (compared to an intrinsic standard deviation of ca. 2 D). Following the experience with interatomic potentials based on SOAP, we further tested a 1.7/3.5 Å 'double SOAP' representation [7, 53]. This combines a short-ranged/high-resolution kernel with a longer-range/low-resolution one.
The corresponding model reaches an even better accuracy and displays the most robust performance across different training set sizes. The latter is particularly evident when considering the RRMSE, which shows that this model consistently improves the relative error when increasing the training set, whereas some of the other models improve the performance on total dipole moments at the expense of the relative error (i.e. by describing small dipole moments less accurately). Overall, these results show that the physical description of long-range contributions in kQEq allows the use of rather small cutoffs for the ML part, effectively focusing on the nearest neighbors. This is beneficial both in terms of transferability of the models and the cost of computing the representation. Notably, the philosophically similar BpopNN model of Xie, Persson and Small uses a much larger cutoff radius of 13.2 Å [35]. Meanwhile, the MuML dipole model of Veit et al (which is also based on SOAP) uses a cutoff of 5 Å [20]. This indicates that kQEq does a good job of partitioning contributions into long-ranged physical terms and a short-ranged ML model. To put this performance into perspective, we compare these models to two recent kernel ML models that are specifically tailored to predicting dipole moments, namely the operator ML approach of Christensen, Faber and Lilienfeld and the aforementioned MuML model of Veit et al (see figure 2) [20, 54]. The former uses a modified variant of the Faber-Christensen-Huang-Lilienfeld (FCHL) representation, which can incorporate the response of the ML model to applied electric fields and thus provides a physically rigorous and equivariant route to predicting dipole moments. Meanwhile, the latter uses a decomposition of the total molecular dipole into atomic monopole and dipole contributions, using the equivariant λ-SOAP approach [55].
For reference we also include the learning curve of a naive FCHL model, which simply predicts the total dipole moment as a scalar (taken from [54]). As already discussed in [54], the FCHL model is a significant improvement over the scalar approach. It also significantly outperforms conventional QEq for all but the smallest training sets. Meanwhile the MuML and kQEq models display remarkably similar learning curves and represent a further improvement over FCHL. Note that for consistency the QEq and kQEq models in figure 2 were trained on the same reference data (calculated at the B3LYP level) as the MuML model from [20]. It is also instructive to consider the accuracy of the reference methods themselves. As mentioned above, Hait and Head-Gordon used the RRMSE to benchmark density functional methods against high-level coupled cluster calculations [49]. According to this metric, the best kQEq model reaches an accuracy of 8.1% on the dipole moments of QM9. This can be compared with the reported errors of popular hybrid functionals like PBE0 (5.2%) and B3LYP (7%). However, it should be noted that the benchmark in [49] focuses on very small molecules and includes spin-polarized systems, so that this comparison should not be overinterpreted. Nonetheless, it indicates that kQEq models approach hybrid DFT accuracy at a much reduced cost.

3.2.Charge analysis

Having established the high accuracy of the kQEq predicted dipole moments, we next turn to the predicted charges themselves. It is well known that the electron density cannot be unambiguously partitioned into atomic partial charges. Consequently, there is no way to objectively measure the quality of such partitionings. Indeed, there is a fundamental tension between describing the local charge density around each atom versus global properties such as dipole moments or electrostatic potentials, when approximating a continuous charge density with atom-centered spherically symmetric charges.
Nonetheless, it is worth considering whether the predicted charges are reasonably intuitive and how they compare to standard population analysis schemes like the one of Mulliken [56], restricted electrostatic potential fits like the ChelpG scheme [57], or localized molecular orbital approaches like the Natural Population Analysis [58, 59]. Here, a double SOAP model trained on 2000 training configurations (at the PBE0 level) is used to provide the kQEq charges (see figure 1). Figure 3 depicts the pairwise correlations between these partial charge models. As has been noted previously, different charge models in general only display moderate agreement with each other [27]. In fact, the lowest mean absolute deviation (0.1 elementary charges) is found between NPA and kQEq, the largest between Mulliken and kQEq. The latter can be attributed to the fact that kQEq charges appear in a broader range (−1 to 2.5) while Mulliken charges lie between −0.5 and 0.5. Furthermore, kQEq charges display a much more pronounced clustering into functional groups. This is particularly evident for oxygen. While the Mulliken analysis assigns similar charges to ether and carbonyl oxygen atoms, kQEq predicts the carbonyl groups to be significantly more polar. Similarly, the polarity of nitrile and fluoride functional groups is much higher in kQEq (see SI for element-wise correlation plots with all charge models). In contrast, ChelpG and NPA charges mostly fall into the same range as the kQEq ones. In particular, the large differences for polar functional groups (e.g. nitrile or carbonyl functional groups) are not observed here. Nonetheless, large deviations can be seen in other cases, particularly for fluoride functional groups. In general, the relatively good agreement with NPA charges is in line with the observations of Bultinck and coworkers for conventional charge equilibration methods [44].
Two illustrative examples of these differences are shown for 1-(2-Methylcyclopropyl)-ethanone and 2-Fluoropyrazine in figure 4 (both of which are not part of the kQEq training set). At first glance, the partitionings are qualitatively similar, meaning that the signs of most charges match. However, kQEq, NPA, and ChelpG predict significantly larger absolute charges than Mulliken. In the case of 1-(2-Methylcyclopropyl)-ethanone, this is particularly evident for the carbonyl group, which is much more strongly polarized. Importantly, these differences have strong consequences for the overall description of the molecular electrostatics. Indeed, the dipole moment calculated from the Mulliken charges points in the opposite direction of the actual dipole. In contrast, the ChelpG and NPA dipoles are well aligned with the reference, but too small in magnitude. Finally, the kQEq charges provide an accurate prediction of both the absolute dipole moment and its direction. The case of 2-Fluoropyrazine is particularly interesting because it illustrates the observed discrepancy between the charges assigned to C-F groups. Here, kQEq predicts the largest absolute charges on the corresponding C and F atoms. Interestingly, the Mulliken, NPA, and ChelpG charges nonetheless overestimate the molecular dipole moment, while the kQEq prediction provides an excellent fit. This is because the large positive charge on the carbon atom stems not only from charge transfer to Fluorine, but also to the adjacent Carbon and Nitrogen atoms, which partially counterbalance the polarity of the C-F group. The generally poor correlation between the charges obtained with different schemes raises some questions about the different roles and interpretations partial charges can have. On one hand, they can reflect a local partitioning of the electron density, as in the case of Mulliken, Hirshfeld or Bader charges. 
On the other hand, they can reflect the electrostatic potential on the surface of a molecule, as in the case of ChelpG and related schemes. While the latter is arguably less arbitrary (as it is directly tied to a physical observable) it has well-known issues with assigning meaningful charges to atoms that are 'buried' in bulky molecules. The kQEq model proposed herein does not neatly fit into these categories. Firstly, it is not a charge partitioning scheme but an ML model. Mulliken, NPA, and ChelpG charges can only be obtained by running a full DFT calculation, whereas the kQEq prediction is much cheaper. Secondly, while kQEq models are trained to reproduce molecular dipole moments, the charges themselves are obtained by minimizing the electrostatic energy expression in equation (5). The Coulomb interactions between partial charges thus provide an important physical constraint on how the charge distribution is approximated. As a consequence, the kQEq derived molecular electrostatic potentials are in good agreement with the ChelpG and NPA ones (see SI). Nonetheless, the kQEq charges are comparatively large. Since bio-organic forcefields usually rely on ChelpG or scaled Mulliken charges, the current kQEq models may thus not be directly applicable in MD simulations. Indeed, the development of interatomic potentials based on kQEq will likely require more complex loss functions, e.g. incorporating energetic contributions or higher order moments. This will be the subject of future work.

3.3.Extrapolating beyond small molecules

A well-known drawback of the conventional QEq approach is that it suffers from a delocalization error akin to that observed for local density functionals. This is e.g. evident in the fact that QEq models incorrectly dissociate molecules into partially charged atoms, since electronegativity differences between isolated atoms lead to spurious charge transfer [39].
While kQEq could in principle cure this particular pathology (by assigning the same electronegativity to all isolated atoms), the more general delocalization tendencies of QEq will likely be inherited by kQEq to an extent. In this section we explore this issue by testing the performance of kQEq for the organic polymer chains dataset introduced by Veit et al [20] (see figure 5). This dataset consists of two types of structures. On one hand, glycine polypeptides in the α-helix and β-strand configurations are considered. On the other hand, polyenoic amino acids and n-amino carboxylic acids are included, which consist of a carboxylic acid and an amine group separated by a conjugated double bond or alkane spacer, respectively. Each of these systems shows characteristic changes of the total dipole moment as the polymer length increases. For the polypeptides, each additional amide bond is itself polar, so that the total dipole increases approximately linearly with the system size. However, the precise behavior depends on the spatial orientation of these bond dipoles and their interactions, so that the α-helix and β-strand configurations show different scalings. The polyenoic amino acid chains also display a linearly increasing dipole moment. In this case this is not due to the addition of polar bonds, however, but due to the polarization of the delocalized π-electrons in the spacer. Finally, the dipole of the n-amino carboxylic acids remains constant upon increasing the chain length, since no polar bonds or delocalized electrons are added.

We again use a 'double SOAP' kQEq model trained on 2000 randomly drawn QM9 molecules for comparison. This model has a mixed performance for this test. For polyenoic and n-amino carboxylic acids, the performance is very good. This is both in terms of the qualitative features (linear increase in dipole moments vs. quick saturation) and the quantitative agreement. Furthermore, kQEq is a strong improvement over conventional QEq here.
In contrast, the performance for the polypeptides is less satisfactory. While the linear trend is correctly captured, the magnitude of the dipoles is significantly underestimated, in particular for the β-strand. This behavior is analogous to the underestimation of dipoles typically observed with local functionals, due to the delocalization error [ 49]. While this shows that kQEq does not automatically cure all pathologies of QEq, a clear advantage of a ML approach is that it can be improved with more data. Indeed, by including the longest chains of each type explicitly in the training set, a retrained kQEq model can be generated that captures these trends more accurately, see figure 5. Potentially, an improved extrapolation behavior could be obtained by using a more flexible expression for the site-energy in equation (3), effectively tackling the problem at its root. Figure 5 also shows the best MuML model from [20] for comparison. This model performs quite well overall, in particular for the polypeptide systems, with slightly worse performance for the carboxylic acids. In this context, it is worth discussing the fundamental differences between MuML and kQEq. The former uses a fully local decomposition of the overall dipole moment. This works very well for situations where the charge distribution of a large system is essentially just a sum of smaller fragments (as for the polypeptides). In contrast, long-range charge transfer and polarization effects cannot be described by such a model. Specifically, in a system where two functional groups A and B are separated by some spacer, the MuML charge on A will be independent of B. This is reflected in the underestimated dipole moments of longer polyenoic amino acid chains. QEq and kQEq are in principle able to describe such non-local effects. The QEq curve for n-amino carboxylic acid reveals that this is not necessarily an advantage however, as large unphysical charge transfer is predicted in this case. 
In other words, a purely local model will generally lead to more systematic and less dramatic failures than a poor non-local one. Fortunately, kQEq and related methods now provide a framework for sophisticated non-local charge equilibration models. More detailed insight into the charge distributions of the different models can be obtained by comparing the partial charges directly (see figure 6). Here, the MuML and kQEq charges for the longest polyenoic amino acid are shown. For comparison, quantum mechanical NPA charges are also included. This reveals that the kQEq charges display a much better qualitative agreement with NPA. In particular, the oscillatory behavior of the carbon charges (which can be attributed to polarization effects) is completely absent in MuML but clearly observable for kQEq. This is notable, given that both ML models are exclusively trained on dipole moments (i.e. without any atomistic detail) and supports the notion that the kQEq framework introduces useful physical constraints on the charges. More generally, it should be noted that kQEq is not primarily intended as a stand-alone molecular dipole model. Since it is based on an energy functional, it can be used to model long-range electrostatic interactions in combination with local interatomic potentials. Indeed, it remains an open question whether dipole moments alone provide enough information for this purpose. Fortunately, the current approach can easily be expanded to also include higher moments, electrostatic potentials or reference partial charges. This will be explored in future work. In this work, we introduced kQEq, a kernel-based approach to charge equilibration in molecules. In contrast to conventional charge equilibration methods like QEq, a data-driven, environment-dependent description of atomic electronegativities is introduced. 
kQEq models trained on molecular dipole moments display excellent performance, on par with or better than state-of-the-art kernel models specifically tuned to predicting dipole moments [20, 54]. The formalism presented herein opens the door toward physics-based kernel ML models for predicting atomic charges, to be used in combination with reactive interatomic potentials. Importantly, the presented approach is quite general and can be extended to other fit targets (e.g. quadrupole moments and electrostatic potentials) and to more flexible density representations (e.g. using atom-centered dipoles in addition to partial charges). While this work focuses on molecular systems, the application to inorganic materials is also envisioned. For finite systems (e.g. nanoparticles) this is in principle straightforward. The use of other fit targets may become important for larger systems, however, as the dipole moment alone might contain too little information in this case. For periodic systems multipole moments are in general ill-defined. Here, the current approach will have to be extended to reproduce other electronic properties. It should also be noted that QEq-based frameworks are likely not equally well suited for different kinds of materials. In principle, one would expect the best performance for metallic systems, where charges are strongly delocalized and mobile. In contrast, polar insulators or interfaces may be less well described due to the intrinsic delocalization error of charge equilibration models. More general site-energy expressions could be developed to overcome these tendencies. Ideally, such developments will ultimately converge with recent developments in ML-based DFT, yielding a new generation of orbital-free density functionals [60–63]. Work in this direction is ongoing in our groups. The authors thank the DFG for financial support through Germany's Excellence Strategy—EXC 2089/1-390776260. 
S W and K R further acknowledge funding from Deutsche Forschungsgemeinschaft (DFG) under project RE1509/18-2. Data availability statement
SAT Math Shortcuts | SupertutorTV Have you wondered how you can go really fast on the SAT math section? Maybe you’ve run out of time, or you wish you just had 5 more minutes to answer that final question. If so, we’ve compiled some SAT Math shortcuts that cover some of the more challenging types of questions you’ll find on the test. #1: Polynomial Remainder Theorem Sometimes even top students get stuck on polynomial division problems because they don’t know where to start! First, let’s take a look at the question in the video. We’ll notice the expression is divided by (x-3). Then, when we look at each answer choice, we see that every answer also ends with an (x-3) divisor. This means the expression is not fully divisible by (x-3), and the number over the divisor is a remainder. Once we establish a remainder, it’s time for one of our SAT math shortcuts: the Remainder Theorem! You can answer this with long division, but using the Polynomial Remainder Theorem is faster. In order for this to work, all remainders in the answer choices must be distinct. Then, the Polynomial Remainder Theorem states that if we divide a polynomial by (x-a), the remainder is the result of plugging a in for every x in the equation. So, for the example in the video, a=3, and to find the remainder, you just have to plug in 3 for every x in the equation. This gives you the remainder -2. Also, if there’s no remainder, plugging in a for x will lead to a result of zero. #2: Interpreting Linear Functions This is one of our SAT math shortcuts that helps when linear equations are mapped onto word problems. Remember linear equations usually take the form of y=mx+b. Our first tip is to remember that added items always share the same label. For example, in our video example we know that in the equation 0.5c + 60p = 315, the 315 stands for dollars. This means each part of the equation represents a dollar amount. This can help to eliminate some answer choices. 
Our second tip for these types of questions is to look for the words: per, for each, or for every. These words will always mean to divide, which almost always indicates some sort of rate. If we look at our y=mx+b equation, we know that m usually represents the rate, and it’s followed by the variable x. Remember though, when you’re looking at your answer choices, not to forget our first tip! You may think an answer is correct because it represents rate, but you need to double check that every part of your equation is still sharing the same label (such as dollars or tons). #3: Special Products- The Difference of Squares For our final SAT math shortcut, remember your Special Products formulas! Knowing Special Products isn’t just a speed tip. These formulas are necessary to get some of the trickier questions correct on your exam. Special Products equations include: The Square of a Difference, The Square of a Sum, and The Difference of Squares. You can find out more about all of these equations in our books, The Best ACT Math Books Ever. For now, we will focus on The Difference of Squares. Here’s the pattern: a^2 – b^2 = (a-b)(a+b) The Difference of Squares will be the formula that comes up the most often. Remember that if you FOIL out (a-b)(a+b) to get a^2 + ab – ab – b^2, the two ‘ab’ terms cancel out. That’s how you end up with a^2 – b^2. The Difference of Squares formula can be the most difficult to see, which is why it’s important for you to memorize it! Whenever you see a tricky polynomial expression, remember to run through the Special Products formulas to help you find your answer faster!
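The Remainder Theorem is easy to check numerically. Here is a short Python sketch; the cubic below is made up for illustration (the video's actual polynomial, with remainder -2, is not reproduced here):

```python
# Dividing p(x) by (x - 3), the Polynomial Remainder Theorem says the
# remainder is simply p(3) -- no long division needed.

def poly_eval(coeffs, x):
    """Evaluate a polynomial by Horner's rule; coefficients are
    listed highest degree first, e.g. [2, -5, -4, 7]."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

p = [2, -5, -4, 7]            # p(x) = 2x^3 - 5x^2 - 4x + 7
print(poly_eval(p, 3))        # remainder on division by (x - 3): 4

# If the divisor goes in evenly, plugging in a gives zero:
print(poly_eval([1, 0, -9], 3))   # x^2 - 9 divided by (x - 3): 0
```

The same one-liner check also confirms the "no remainder" case: a result of zero means (x-a) divides the polynomial exactly.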
A command to compute the Confusion matrix from two selected Categories objects. A confusion matrix is constructed from both Categories objects in the following way: The first Categories object is considered the stimulus Categories and its unique (sorted) categories form the row indices of the confusion matrix; the unique (sorted) categories of the second object form the column indices of this matrix. Next, each element in the first Categories object is compared with the corresponding element in the second object, and the cell in the confusion matrix addressed by this pair is incremented by 1. Links to this page © djmw 19960918
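The construction described above can be sketched in a few lines of Python (this is an illustration of the algorithm, not Praat code):

```python
# Rows are indexed by the sorted unique stimulus categories, columns by
# the sorted unique response categories; each (stimulus, response) pair
# increments one cell of the matrix.

def confusion_matrix(stimuli, responses):
    rows = sorted(set(stimuli))
    cols = sorted(set(responses))
    matrix = {r: {c: 0 for c in cols} for r in rows}
    for s, r in zip(stimuli, responses):
        matrix[s][r] += 1
    return rows, cols, matrix

rows, cols, m = confusion_matrix(["a", "a", "b", "b"],
                                 ["a", "b", "b", "b"])
# m["a"]["b"] == 1: one stimulus "a" was judged as "b";
# m["b"]["b"] == 2: both "b" stimuli were judged correctly.
```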
Convert Decimal to Fractions: Easy Steps & Tips - Factor Calculator Convert Decimal to Fractions: Easy Steps & Tips Are you tired of struggling with decimal to fraction conversions? Do you wish there was a simple guide to master this skill? This article has all the answers you need! In this guide, we’ll show you how to convert decimal numbers to fractions. Whether you’re a student, professional, or just someone who wants to improve their math, you’ll learn to tackle any conversion with ease. Key Takeaways • Understand the relationship between decimals and fractions • Learn the step-by-step process to convert decimals to fractions • Discover common decimal to fraction conversions and their applications • Explore helpful tips and resources to master decimal to fraction conversion • Gain the advantages of proficiency in decimal to fraction conversion So, are you ready to unlock the secrets of decimal to fraction conversion and improve your math skills? Let’s dive in and learn the easy steps and powerful tips to convert with confidence! Understanding Decimals and Fractions Decimals and fractions are key in math and are closely tied. A decimal shows a number not whole, with digits after a decimal point. For example, 0.5 is half of a whole. A fraction shows a part of a whole, like 1/2, which is also half. Defining Decimals and Fractions Let’s dive deeper into what decimals and fractions are: • Decimal: A decimal is a way to show a number not whole, with digits after a decimal point. It can show fractions, percentages, and more. • Fraction: A fraction is a way to show a part of a whole, with a numerator over a denominator. It can show parts of a whole, ratios, and more. Visualizing Decimals and Fractions Visual aids like number lines or pie charts can show the link between decimals and fractions. These tools help see how they both show the same idea, just differently. Knowing what decimals and fractions are and how to visualize them helps understand their connection. 
This knowledge is key as we learn to convert between them. Why Convert Decimals to Fractions? Converting decimals to fractions is very important. It helps in many ways, like improving math skills and solving problems better. Being able to switch between decimals and fractions is a big asset. One key benefit of converting decimal to fraction is clearer understanding. Fractions make numbers easier to see and understand. This helps with math, science, and engineering, leading to better results. The importance of converting decimal to fraction also shows up in everyday life. It’s useful for budgeting, finance, cooking, and measuring. This skill makes calculations simpler and reduces mistakes in real-life situations. Also, learning to convert decimals to fractions boosts brain skills. It improves math knowledge, pattern recognition, and thinking. This skill sharpens problem-solving and critical thinking. In short, knowing how to convert decimals to fractions is very useful. It helps in school and in real life. Spending time to learn this skill can pay off a lot in education and work. Step-by-Step Guide to Convert Decimal to Fraction Turning a decimal into a fraction might seem hard, but it’s easy once you know how. First, find any repeating numbers in the decimal. Then, use that pattern to make the fraction. Let’s go through the steps together. Identifying the Repeating Pattern Begin by looking at the decimal you want to change. Find any numbers that keep showing up in the same order. For example, 0.3333… has a pattern of 3s. Finding this pattern is the first step to making a fraction. Writing the Fraction Equivalent After spotting the repeating pattern, you can write the fraction. The top number of the fraction is the repeating block of digits. The bottom number is made of as many 9s as there are digits in the block. For example, 0.3333… becomes 1/3. The top number is 3, and the bottom is a single 9 (one 9 for the one repeating digit), and 3/9 simplifies to 1/3. Let’s try another one. The decimal 0.142857142857… has a pattern of 142857. 
To make a fraction, the top number is 142857, and the bottom is 999,999. This is because there are six 9s for the six-digit pattern; the fraction 142857/999,999 then simplifies to 1/7. By following these steps, you can easily turn any decimal into a fraction. With a bit of practice, you’ll get really good at it. Common Decimal to Fraction Conversions Working with decimals, some values convert easily to fractions. Others need more complex steps. Knowing these common conversions is key to mastering decimal-to-fraction changes. Converting Terminating Decimals Terminating decimals stop after a few digits. These decimals convert to fractions by spotting the pattern. For instance: • 0.25 becomes 25/100 or 1/4. • 0.375 turns into 375/1000 or 3/8. • 0.625 is 625/1000 or 5/8. Converting Non-Terminating Decimals Non-terminating decimals repeat forever. To convert them, find the repeating pattern and apply a formula. Here are examples: 1. 0.333… equals 1/3. 2. 0.666… is 2/3. 3. 0.142857142857… is 1/7. Learning these common conversions helps you solve more complex problems. It also prepares you for real-world applications. • 0.25 = 1/4 • 0.375 = 3/8 • 0.625 = 5/8 • 0.333… = 1/3 • 0.666… = 2/3 • 0.142857142857… = 1/7 How to Convert Decimal to Fractions Changing decimals to fractions is easy once you know the right steps. Learning the methods for converting decimal to fraction makes it simple. We’ll look at the main ways to convert decimal to fraction and decimal to fraction conversion techniques you can use. Identifying the Repeating Pattern One common way to convert a decimal to a fraction is to find any repeating pattern. If the decimal repeats, you can make a fraction. Just put the repeating digits in the numerator and, in the denominator, as many 9s as there are digits in the pattern. Multiplying by Powers of 10 Another method is to multiply the decimal by a power of 10. This moves the decimal point, making it a whole number. Then, write the fraction with the whole number as the numerator and the power of 10 as the denominator. 
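Both methods can be verified with Python's fractions module, which reduces fractions to lowest terms automatically:

```python
# Checking the two conversion methods described above.
from fractions import Fraction

# Repeating pattern: numerator = repeating block,
# denominator = one 9 per digit in the block.
assert Fraction(3, 9) == Fraction(1, 3)             # 0.333...
assert Fraction(142857, 999999) == Fraction(1, 7)   # 0.142857142857...

# Power of 10: shift the decimal point to get a whole number.
assert Fraction(25, 100) == Fraction(1, 4)    # 0.25
assert Fraction(375, 1000) == Fraction(3, 8)  # 0.375
print("all conversions check out")
```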
• 0.25 = 25/100 • 0.375 = 375/1000 • 0.6666… = 2/3 Knowing these decimal to fraction conversion techniques helps you convert decimals to fractions easily. You’ll be able to express decimals as simplified fractions with confidence. Tips for Easy Decimal to Fraction Conversion Converting decimals to fractions is easy with the right tools and strategies. Here are some tips to make it simple: Utilize Fractions Calculators or Apps Using fractions calculators or apps is a quick way to convert decimals. Just enter the decimal, and the tool will show you the fraction. These fractions calculators often have extra features. They can simplify fractions or do other math. Adding these tools to your workflow makes the process faster and more accurate. Recognize Common Decimal Patterns Knowing common decimal patterns helps a lot. For example, 0.25 is 1/4, and 0.5 is 1/2. Spotting these patterns lets you quickly find the fraction without hard math. Practice with different decimals and fractions. This will help you get better at recognizing and converting these patterns. Leverage Visual Aids Visual aids can make converting decimals to fractions easier. Try using visual models like number lines or fraction bars. They help you see the decimal and fraction relationship, which is great for tricky decimals. By using these tips, you’ll get better at converting decimals to fractions fast and accurately. Try different methods to find what works best for you. Real-World Applications of Fractions Fractions are key in our daily lives, solving many real-world problems. They help in cooking, baking, finance, and construction. Knowing how to convert decimals to fractions is very useful. Cooking and Baking In the kitchen, fractions help measure ingredients right. They’re essential for doubling recipes or dividing portions. Fractions make sure your dishes are perfect every time. They help measure spices and slice cakes evenly. Fractions are the key to great cooking. 
Home Improvement and Construction Fractions are vital in home improvement and construction. They’re needed for measuring materials, cutting lumber, and laying tile. Contractors and DIY fans use fractions for accurate measurements and a professional look. Personal Finance In finance, fractions help with discounts, tips, and interest rates. They let you make smart choices and get the best deals. You can also ensure you’re getting fair service fees. Everyday Measurements • Measuring ingredients for cooking and baking • Calculating lengths, widths, and heights for home improvement projects • Determining appropriate portions or servings • Calculating discounts, tips, and interest rates Fractions have many uses in our lives. Learning to convert decimals to fractions helps you face many challenges. It makes you better at making decisions in your life and work. • Cooking and Baking: measuring ingredients like 1/4 cup of flour or 3/8 teaspoon of salt • Home Improvement and Construction: cutting lumber to 2 3/4 inches or laying tile with 1/8 inch grout lines • Personal Finance: calculating a 15% tip or a 40% discount on a purchase • Everyday Measurements: determining appropriate portions or servings, such as 1/2 cup of cereal or 3/4 of a sandwich Practicing Decimal to Fraction Conversions To get better at turning decimals into fractions, you need to practice a lot. We’ll give you many decimal to fraction conversion examples and practice problems to improve your skills. Examples of Decimal to Fraction Conversion Let’s begin with some easy examples to get you started: 1. Convert 0.5 to a fraction: 0.5 = 1/2 2. Convert 0.75 to a fraction: 0.75 = 3/4 3. Convert 0.125 to a fraction: 0.125 = 1/8 The trick is to spot the pattern in the decimal. Then, write it as a fraction with the right denominator. 
Decimal to Fraction Conversion Exercises Now, let’s do some more decimal to fraction conversion exercises to help you practice: • Convert 0.333… to a fraction. • Convert 0.6666… to a fraction. • Convert 0.2375 to a fraction. The more you practice, the better you’ll get at converting decimals to fractions. Keep working on these examples, and soon you’ll be a pro! • 0.5 = 1/2 • 0.75 = 3/4 • 0.125 = 1/8 • 0.333… = 1/3 • 0.6666… = 2/3 • 0.2375 = 19/80 Advantages of Understanding Fractions Learning to convert decimals to fractions unlocks the many benefits of understanding fractions. It boosts your math skills, problem-solving, and ability to handle complex numbers. This knowledge is key for success in school and work, opening doors to new opportunities. One big advantage of mastering fractions is working with a broader range of numbers. Fractions help you understand and work with complex quantities more accurately. This skill is critical in engineering, finance, and science, where fractions are a must. Also, knowing fractions well improves your problem-solving skills. Fraction problems need you to think critically and break down big issues into smaller parts. This skill helps you solve many math challenges in school and real life. In short, the benefits of understanding fractions go beyond just converting decimals. Mastering fractions opens up a world of possibilities. It boosts your math skills and problem-solving, leading to success in many areas. Resources for Learning Decimal to Fraction Conversion Looking to improve your skills in converting decimals to fractions? There are many reliable resources out there. You can find online tutorials, educational websites, interactive apps, and practice workbooks. These materials will help you learn and get better at converting decimals to fractions. The Khan Academy is a great place to start. It has free online lessons and videos on decimal to fraction conversion. 
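For terminating decimals, the whole procedure fits in a small Python helper. This is a hypothetical function (not from the article), shown only to illustrate the power-of-10 method; it assumes a non-negative decimal string such as "0.2375":

```python
# Multiply by a power of 10 to clear the decimal point, then let
# Fraction reduce the result to lowest terms.
from fractions import Fraction

def terminating_to_fraction(s):
    whole, _, frac = s.partition(".")
    return Fraction(int(whole + frac), 10 ** len(frac))

print(terminating_to_fraction("0.2375"))  # 19/80
print(terminating_to_fraction("0.125"))   # 1/8
```

Running it on the exercise value 0.2375 confirms the table entry 19/80.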
Websites like Math is Fun and Mathplanet also offer step-by-step guides, interactive exercises, and quizzes. For hands-on practice, try a practice workbook or a decimal to fraction conversion app. These tools give you feedback and help you learn in a personalized way. It’s important to keep practicing and using what you’ve learned. By checking out different resources, you’ll become an expert in converting decimals to fractions. This skill is useful in many real-world situations. What is the importance of converting decimals to fractions? Converting decimals to fractions is a key skill. It helps you understand math better and makes calculations easier. This skill is useful in many areas, like finance and cooking. It’s also vital for doing well in school, mainly in math and science. What are the steps to convert a decimal to a fraction? Converting a decimal to a fraction has two main steps. First, find the repeating pattern in the decimal. Then, write the fraction that matches it. This simple process lets you quickly turn decimals into fractions. What are some common decimal to fraction conversions? Some decimals convert easily to fractions, while others need more work. Terminating decimals stop at a point, and non-terminating decimals keep going. Knowing how to handle these types helps you get better at converting decimals. What are some tips for easily converting decimals to fractions? To make converting decimals to fractions easier, use visual aids and look for patterns. You can also use calculators or apps for fractions. These tips help you convert decimals quickly and accurately. Where are fractions used in everyday life? Fractions are used in many ways every day. They help when cooking, calculating discounts, and more. Knowing how to convert decimals to fractions is useful in these situations. It helps you in both personal and work life. What resources are available for learning decimal to fraction conversion? 
There are many resources to help you learn decimal to fraction conversion. You can find online tutorials, educational websites, apps, and practice books. These tools support your learning and help you get better at converting decimals. Leave a Comment
An exact coupled cluster theory for Moshinsky and Hookean two-electron model atoms with spin-compensated ground states - Institutional Repository University of Antwerp Title An exact coupled cluster theory for Moshinsky and Hookean two-electron model atoms with spin-compensated ground states Author Amovilli, C. March, N.H. Abstract The Moshinsky (M) and Hookean (H) models of two-electron atoms replace the electron-nuclear interaction by harmonic forces. The difference between them resides in the interparticle interaction, the H model retaining e²/r₁₂ as in helium, whereas the M atom is entirely harmonic. Using a 'coupled cluster' representation Ψ = exp(X̂)Φ, X̂ is shown to be the sum of a one-body operator X̂₁ and a two-body contribution X̂₂. For Φ taken as a product of Gaussian functions, the one-body operator X̂₁ is of length scaling form. In the M model, X̂₂ is proportional to r₁₂², whereas in the H model it is given explicitly as an infinite series in powers of r₁₂. Finally, some comments are added about the He-like ions in the limit of large atomic number. (C) 2003 Elsevier B.V. All rights reserved. Language English Source (journal) Chemical physics letters. - Amsterdam, 1967, currens Publication Amsterdam : 2003 ISSN 0009-2614 [print] 1873-4448 [online] DOI 10.1016/S0009-2614(03)01255-7 Volume/pages 378 :1-2 (2003), p. 167-171 ISI 000185124600027 Full text (Publisher's DOI) Full text (publisher's version - intranet only) https://repository.uantwerpen.be/docman/iruaauth/88da87/b9e5876.pdf
Unsolved Mysteries Extra Dimensions String theory and other new proposals require more than three space dimensions. These extra dimensions could be very small, which is why we don't see them. How can there be extra, smaller dimensions? Think about an acrobat and a flea on a tight rope. The acrobat can move forward and backward along the rope. But the flea can move forward and backward as well as side to side. If the flea keeps walking to one side, it goes around the rope and winds up where it started. So the acrobat has one dimension, and the flea has two dimensions, but one of these dimensions is a small closed loop. So the acrobat cannot detect any more than the one dimension of the rope, just as we can only see the world in three dimensions, even though it might well have many more. This is impossible to visualize, precisely because we can only visualize things in three dimensions!
Determining If a Number is Odd or Even Please Note: This article is written for users of the following Microsoft Excel versions: 2007, 2010, 2013, and 2016. If you are using an earlier version (Excel 2003 or earlier), this tip may not work for you. For a version of this tip written specifically for earlier versions of Excel, click here: Determining If a Number is Odd or Even. Determining If a Number is Odd or Even Written by Allen Wyatt (last updated June 17, 2020) This tip applies to Excel 2007, 2010, 2013, and 2016 A common programming task is validating user input. Often, your macro may need to determine if a number entered by a user is odd or even. For instance, suppose you wrote your own macro that asked the user what worksheet number they wanted to process. If your macro had to process odd and even worksheets differently, then you need to figure out if the number the user provided was odd or even. The technique for this is relatively simple, as shown here: Even = (UserNum Mod 2) - 1 After execution of this line, Even will be True (-1) if UserNum was even, or False (0) if UserNum was odd. If you would like to know how to use the macros described on this page (or on any other page on the sites), I've prepared a special page that includes helpful information. Click here to open that special page in a new browser tab ExcelTips is your source for cost-effective Microsoft Excel training. This tip (11398) applies to Microsoft Excel 2007, 2010, 2013, and 2016. You can find a version of this tip for the older menu interface of Excel here: Determining If a Number is Odd or Even.
2019-06-01 16:08:59 Rick Rothstein Another way to test whether a number is even or not... Num Like "*[02468]" 2015-05-26 14:11:53 OOPS ... made a typo ... It should be this: A / 2 = Int(A / 2) ... in 1.83203125 sec 2015-05-26 14:09:45 Here are some times I checked on my i7 desktop with Excel 2013 VBA ... A = Int(A / 2) ... in 1.615234375 sec (A Mod 2) = 0 ... in 1.0390625 sec (A Mod 2) - 1 ... in 1.2734375 sec Application.IsOdd(A) ... in 275.92578125 sec WorksheetFunction.IsEven(A) ... in 96.341796875 sec These times were generated on a 100,000,000 For...Next loop. Clearly (A Mod 2) = 0 is the fastest; over 20% faster than the method shown here. At the end of the day, most macro/VBA "normal" use wouldn't really notice any difference in performance using any even/odd check method above. But if you are a programmer/developer who wants to optimize your code, consider native commands as often as possible in VBA. 2015-05-25 05:46:36 Well, I stand corrected. 2015-05-24 23:32:44 Locke Garmin Ok, I couldn't help myself. :) I did the same tests only this time 100,000,000 times and got the results on the following methods: (i Mod 2) - 1 1.70 Seconds (i Mod 2) = 0 1.52 Seconds Hardly significant savings but interesting! :) 2015-05-24 23:22:17 I got interested in the idea of the worksheet functions being slower that I did a couple of tests. I ran a for loop 1,000,000 times on each method like the following: Dim b As Boolean Dim i As Long: i = 2 dim x as Long 'Start Stopwatch Macro For x = 1 To 1000000 b = (i Mod 2) - 1 Next x 'End Stopwatch Macro And came to the following results: (i Mod 2) - 1 0.02 Seconds 2.19 Seconds 2.27 Seconds 5.80 Seconds So I concluded that there is a small time penalty you would start running into if you start testing quite a few numbers for being even or odd. I hope there is another VBA nut who finds this as interesting as I did! 
:) 2015-05-23 05:49:36 I don't think you get a time penalty for using a WorksheetFunction, it only happens if you go get a value from the book. Which is once in both examples. Furthermore, native functions tend to be faster than custom made. 2015-05-22 19:03:10 Michael (Micky) Avidan To my opinion the average Excelist isn't familiar with the results -1 and 0. He wants to get a straight answer like TRUE or FALSE. Therefore my suggestion would be one of the following (Short) commands: Sub Is_Odd() MsgBox Application.IsOdd([A1]) MsgBox CBool([A1] Mod 2) End Sub Michael (Micky) Avidan “Microsoft® Answers" - Wiki author & Forums Moderator “Microsoft®” MVP – Excel (2009-2015) 2015-05-22 12:26:29 Yep, you can use the WorksheetFunction method, as Eric said. Compare these two lines, however: x = Application.WorksheetFunction.ISEVEN(Range("A1").Value) x = (Range("A1").Value Mod 2) - 1 As a matter of preference, the second is better. It is not only shorter, but it doesn't have the time penalty of using the WorksheetFunction method. The MOD operator doesn't make it it "so complicated" (as Eric stated); it actually makes it less complicated and faster. 2015-05-22 11:03:14 Eric Augusta Why are we making this so complicated by using MOD? You can use the standard functions ISEVEN or ISODD in both situations. In a workbook just use ISEVEN or ISODD directly to return either TRUE or FALSE. In VBA you can use the same functions by writing code like this: Dim x as Boolean x = Application.WorksheetFunction.ISEVEN(Range("A1").Value) 2015-05-22 10:22:26 icy322: Absolutely nothing, if you are working in a workbook. Those functions won't work directly in a macro, though. And, as is stated in the first two sentence of this tip, this is for use in a macro. 2015-05-22 10:20:26 What's wrong with using the iseven and isodd funtion?=iseven(a1) or = isodd(a1) 2015-05-22 10:16:42 Gary: This tip is for use in a macro, in VBA. Thus, no cell reference is required. 
In the example in the tip, the value being evaluated is in the UserNum variable. 2015-05-22 10:05:08 Gary Lundblad Forgive my ignorance, but I'm not following how this formula would work, as there is no cell reference in the formula. How does it know what cell you want evaluated? I'm probably missing something really obvious. Thank you! Gary Lundblad 2015-05-22 07:30:18 From personal experience and Euler Project tinkering, mod does not work with numbers longer than Long's limit (2e9). In that case the workaround would be to test the last digit of the number. Something like: Val(Right(X,1)) Mod 2 ...Supposing the number is an integer. And seeing how CInt and CLng don't work with big numbers, you are stuck. Two solutions: - Treat the number as text. Remember that Excel has a native limitation of 15 digits per number, so if your number is bigger than 1e15 it won't work. - Use an addon specialized in big numbers. My favourite is xNum: http://www.bowdoin.edu/~rdelevie/excellaneous/ 2012-02-25 14:38:43 Ken Kast This solution depends on knowing that -1 will be treated as True. An implementation-independent solution is Even = (UserNum Mod 2) = 0. The other advantage to this is that it is literally the definition of even. In a spreadsheet it would be =Mod(UserNum, 2) = 0.
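The two parity tests discussed in the comments translate directly to other languages. Here is a quick Python sketch (Python, not VBA, so note that Python integers are arbitrary precision and the plain modulo test never overflows; the last-digit trick is only needed where, as the commenter describes for VBA, Mod is limited to the Long range):

```python
def is_even_mod(n):
    # The direct test, equivalent to (A Mod 2) = 0 in VBA.
    return n % 2 == 0

def is_even_last_digit(n):
    # Mirror of the suggested Val(Right(X, 1)) Mod 2 workaround:
    # only the final digit decides parity.
    return int(str(abs(n))[-1]) % 2 == 0

big = 12345678901234567890        # far beyond VBA's Long range
assert is_even_mod(big) and is_even_last_digit(big)
assert not is_even_mod(7) and not is_even_last_digit(7)
```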
The Relation Between Total, Average and Marginal Products

The relation between total, average and marginal products can be illustrated with the help of the following hypothetical table, in which land is a fixed factor and labour is a variable factor of production. The table shows the relationship between total product, average product, and marginal product.

• Nature of total product. Total product increases at the beginning, reaches its maximum point when marginal product becomes zero, and starts to fall as marginal product becomes negative.
• Nature of average product. Average product increases, but more slowly than marginal product; it reaches its maximum point when marginal product equals average product, and then starts to fall.
• Nature of marginal product. Marginal product increases faster than average product, reaches its maximum point first, then starts to fall, becomes zero, and eventually turns negative.

Concept of the production function:

The production function expresses the technological or mathematical relationship between the quantity of physical inputs and output. In other words, it shows, with a given state of technological knowledge and during a particular period of time, how much can be produced with a given amount of inputs. According to Stigler, "production function is a relationship between inputs of productive services per unit of time and output of product per unit of time." Thus we can say that the production function is the technical relationship between the quantity of goods produced and the factors of production necessary to produce them. It may be expressed in the form of an equation:

X = F(LD, L, C, M, T)

where X is output and LD, L, C, M and T stand for the inputs (typically land, labour, capital, management, and technology).
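Since the hypothetical table itself did not survive extraction here, the schedule below is an invented stand-in generated in Python; the numbers are illustrative only, but they reproduce all three relationships described above (TP peaks where MP reaches zero, AP peaks where MP equals AP, and MP turns negative last):

```python
# Hypothetical production schedule: labour applied to a fixed plot of land.
labour = [1, 2, 3, 4, 5, 6]
tp     = [10, 24, 39, 52, 61, 60]   # total product (invented numbers)

# Marginal product: the change in TP from one extra unit of labour.
mp = [tp[0]] + [tp[i] - tp[i - 1] for i in range(1, len(tp))]
# Average product: TP divided by units of labour.
ap = [t / l for t, l in zip(tp, labour)]

# TP falls exactly where MP has turned negative.
assert mp[-1] < 0 and tp[-1] < tp[-2]
# AP is at its maximum (13) where MP equals AP (labour = 4).
assert ap[3] == 13.0 and mp[3] == 13

for l, t, a, m in zip(labour, tp, ap, mp):
    print(l, t, round(a, 2), m)
```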
Understanding Algebra: The Foundation of Modern Mathematical Thinking

Algebra is a cornerstone of mathematics that bridges the gap between arithmetic and more advanced mathematical concepts. It involves the study of symbols and the rules for manipulating those symbols to solve equations and understand relationships. This article covers the fundamentals of algebra, its key concepts, and its importance in both academic and real-world settings.

1. What is Algebra?

At its core, algebra is about finding unknown values using known quantities. Unlike arithmetic, which deals with specific numbers, algebra uses letters and symbols to represent numbers and their relationships. These symbols, known as variables, can stand for any number, allowing algebra to tackle a wide range of problems and express general relationships.

2. Central Concepts of Algebra

a. Variables and Expressions: In algebra, variables are symbols (usually letters) that represent numbers. For example, in the expression (2x + 3), (x) is a variable. Algebraic expressions combine variables, numbers, and operations (such as addition and multiplication) to form expressions that can be simplified or evaluated.

b. Equations and Inequalities: Equations are mathematical statements that assert the equality of two expressions. For example, (2x + 3 = 7) is an equation where the goal is to find the value of (x) that makes the two sides equal. Solving the equation involves isolating the variable on one side to determine its value. Inequalities involve expressions that are not necessarily equal but are related by inequality signs (e.g., ( < ), ( > ), ( \leq ), ( \geq )). For example, (2x + 3 > 7) requires finding values of (x) that make the inequality true.
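Solving an equation like (2x + 3 = 7) amounts to undoing the operations around (x): subtract 3 from both sides, then divide by 2. A short Python sketch (the helper name is ours, for illustration):

```python
# Solve a*x + b = c for x by isolating the variable, as in 2x + 3 = 7.
def solve_linear(a, b, c):
    # Subtract b from both sides, then divide both sides by a.
    return (c - b) / a

x = solve_linear(2, 3, 7)
assert x == 2.0
print(x)  # 2.0, since 2*2 + 3 = 7
```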
c. Functions and Graphs: A function is a special kind of relation in which each input (often represented as (x)) has exactly one output (often represented as (y)). Functions can be represented algebraically (e.g., (f(x) = 2x + 3)) and graphically. Graphing a function gives a visual representation of how the output changes with the input.

d. Systems of Equations: Systems of equations involve solving multiple equations simultaneously. For instance, solving a system like

2x + y = 5
x - y = 1

requires finding values for (x) and (y) that satisfy both equations at the same time. Methods for solving systems include substitution, elimination, and graphical approaches.

3. The Importance of Algebra

a. Foundation for Advanced Mathematics: Algebra serves as a foundation for more advanced fields of mathematics such as calculus, linear algebra, and abstract algebra. Understanding algebraic concepts is crucial for progressing to higher-level mathematics and for grasping the more complex relationships and structures within these fields.

b. Problem Solving and Critical Thinking: Algebra develops critical thinking and problem-solving skills. By manipulating symbols and solving equations, students learn to approach problems systematically and logically. These skills transfer to other academic subjects and to real-life situations.

c. Real-World Applications: Algebra has numerous practical applications. In finance, algebra is used to calculate interest rates, loan payments, and investment growth. In engineering and science, it helps model relationships among variables and solve complex problems. For example, algebraic equations are used to design structures, analyze data, and predict outcomes.
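The two-equation system above can be checked mechanically. This Python sketch uses Cramer's rule, one of several equivalent approaches alongside substitution and elimination (the function name is ours, for illustration):

```python
# Solve the 2x2 system  a1*x + b1*y = c1,  a2*x + b2*y = c2  by Cramer's rule.
def solve_2x2(a1, b1, c1, a2, b2, c2):
    det = a1 * b2 - a2 * b1          # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("system has no unique solution")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# The system from the text: 2x + y = 5 and x - y = 1.
x, y = solve_2x2(2, 1, 5, 1, -1, 1)
assert (x, y) == (2.0, 1.0)
print(x, y)  # 2.0 1.0
```

Adding the two equations by hand eliminates y (3x = 6, so x = 2, then y = x - 1 = 1), which agrees with the computed result.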
4. Teaching and Learning Algebra

Effective algebra education involves a combination of theoretical understanding and practical application. Teachers use a variety of methods, including interactive exercises, visual aids, and real-world problems, to help students grasp algebraic concepts. Practice is essential, as it reinforces understanding and builds proficiency.

Algebra is a fundamental area of mathematics that provides essential tools for solving equations, understanding functions, and analyzing relationships. Its concepts form the basis for advanced mathematical study and have far-reaching applications in daily life and many professions. By mastering algebra, students develop critical thinking and problem-solving skills that are valuable in both academic and real-world settings. Algebra is not just about manipulating symbols; it is about understanding the patterns and relationships that underpin our world.
Demystifying ANOVA: A Step-by-Step Guide to Interpretation and Significance Testing

To read an ANOVA table, examine the degrees of freedom (df) for the between-group and within-group comparisons. The sum of squares (SS) and mean square (MS) quantify the variation between and within groups. The F-statistic is the ratio of the between-group MS to the within-group MS, testing for significant group differences. The P-value indicates the probability of observing an F-statistic at least as large as the one obtained, given the null hypothesis. Eta squared and partial eta squared estimate the effect size, showing the proportion of variance attributable to the independent variable.

Understanding Degrees of Freedom (df): The Key to ANOVA

In the realm of statistics, when we delve into the world of Analysis of Variance (ANOVA), one crucial concept that holds the key to understanding our results is degrees of freedom (df). Think of it as the number of independent pieces of information we have, allowing us to draw meaningful conclusions from our data.

Degrees of Freedom: The Building Block of ANOVA

The df is intertwined with several fundamental statistical concepts:
• Relationship with F-distribution: The F-distribution, used in ANOVA, relies on the df to determine the shape of its probability curve.
• Connection with Mean Square: Mean square, which compares variation between groups to variation within groups, is directly influenced by the df.
• Influence on Statistical Power: Higher df generally lead to higher statistical power, meaning we're more likely to detect true differences when they exist.

Degrees of Freedom and Probability Distributions

In ANOVA, df plays a vital role in shaping the probability distributions we use to test our null hypotheses. By knowing the df between and within groups, we can calculate the expected distribution of our data if there were no real differences between groups.
This allows us to determine the probability of obtaining our observed results if the null hypothesis is true. For instance, in a one-way ANOVA with 2 groups and 10 observations in each group, the df between groups is 1, and the df within groups is 18. This information guides us in selecting the appropriate probability distribution for our F-statistic.

Measuring Variation: Sum of Squares (SS)

In our quest to understand statistical analyses, we delve into the realm of variation, an inherent characteristic of data that reflects differences among observations. To quantify this variation, statisticians employ a crucial measure known as the Sum of Squares (SS). Imagine a set of data points, each representing a measurement. The SS measures how far each data point deviates from the mean, or central tendency, of the entire dataset. The greater the deviation, the larger the SS. In essence, the SS captures the total amount of variation present in the data.

Role in Variance Estimates

The SS plays a vital role in estimating the variance, which measures the spread or dispersion of data around the mean. Variance is calculated by dividing the SS by the degrees of freedom (df) associated with the dataset. By quantifying variation, the SS helps us assess how closely our data points cluster around the mean.

Chi-Square Tests

Beyond variance estimates, the SS finds application in the widely used chi-square tests. These tests help determine whether observed data frequencies significantly deviate from expected frequencies. The SS serves as a key component in calculating the chi-square statistic, which is used to assess the likelihood of such deviations occurring by chance alone.

Measuring variation is essential for understanding the characteristics of data and making inferences from statistical analyses. The Sum of Squares, a versatile measure of variation, empowers researchers to quantify data spread, estimate variance, and conduct chi-square tests.
By grasping the concept of SS, we enhance our ability to interpret data and draw meaningful conclusions from our research endeavors.

Mean Square (MS): Variation per Degree of Freedom

In the realm of statistical analysis, understanding degrees of freedom (df) is crucial. They play a pivotal role in determining the probability distributions for Analysis of Variance (ANOVA) tests. Another fundamental concept in ANOVA is the Sum of Squares (SS), which measures variation within data sets. Mean Square (MS) is a statistical measure that captures variation per degree of freedom. It is calculated by dividing the sum of squares by its corresponding degrees of freedom. MS is particularly useful in ANOVA because it provides a more precise estimate of variation.

In ANOVA, the F-statistic is used to test for differences between group means. The F-statistic is calculated by dividing the between-groups mean square by the within-groups mean square.
• Between-groups mean square represents the variation between different groups.
• Within-groups mean square represents the variation within each group.

By comparing the between-groups mean square to the within-groups mean square, the F-statistic determines if there is a statistically significant difference between group means. If the F-statistic is large, it indicates that there is a greater amount of variation between groups compared to within groups, suggesting that the group means are likely different. Understanding mean square is essential for interpreting the results of ANOVA tests. It helps researchers make informed decisions about the presence of significant differences between groups, providing valuable insights into the data being analyzed.
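To make the arithmetic concrete, here is a small hand computation of the sums of squares, mean squares, and F-statistic; the two groups and their values are invented for illustration:

```python
# Two hypothetical groups of four observations each.
g1 = [4.0, 5.0, 6.0, 5.0]
g2 = [7.0, 8.0, 9.0, 8.0]
all_vals = g1 + g2
grand = sum(all_vals) / len(all_vals)          # grand mean

means = [sum(g) / len(g) for g in (g1, g2)]    # group means: 5.0 and 8.0
# SS between: weighted squared distance of each group mean from the grand mean.
ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip((g1, g2), means))
# SS within: squared distance of each observation from its own group mean.
ss_within = sum((x - m) ** 2 for g, m in zip((g1, g2), means) for x in g)

df_between = 2 - 1                 # k - 1 groups
df_within = len(all_vals) - 2      # N - k observations

ms_between = ss_between / df_between
ms_within = ss_within / df_within
f_stat = ms_between / ms_within
print(f_stat)  # 18 / (4/6) = 27.0 with these numbers
```

A large F like this one says the between-group variation dwarfs the within-group variation, which is exactly the pattern described above.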
The F-statistic: Unraveling the Significance of Group Differences

In the realm of statistics, the F-statistic holds a pivotal position, playing a crucial role in analysis of variance (ANOVA). This statistical test allows us to determine if there are statistically significant differences between two or more groups. At its core, the F-statistic compares the variability between groups to the variability within groups. By doing so, it helps us understand whether the differences we observe between groups are due to random chance or to a genuine effect of the independent variable (the factor being tested). The F-statistic is calculated as the ratio of two mean squares. The numerator, called the between-groups mean square, represents the variation between the group means. The denominator, the within-groups mean square, measures the variation within each group.

Null Hypothesis and ANOVA Assumptions

In ANOVA, we start with a null hypothesis that states that there is no difference between the groups. The F-statistic helps us to either support or reject this hypothesis. For the F-test to be valid, we must meet certain assumptions:
• The data must be normally distributed.
• The variances of the groups must be equal (homogeneity of variances).
• The samples must be independent.

Interpreting the F-statistic

A large F-statistic indicates that there is a significant difference between the group means. This suggests that the independent variable is having an effect on the dependent variable. Conversely, a small F-statistic suggests that there is no significant difference between the groups.

Next Steps: Understanding P-values and Statistical Significance

Once we have calculated the F-statistic, we need to determine its statistical significance.
This is done by comparing the F-statistic to a critical value obtained from the F-distribution. If the F-statistic is greater than the critical value, we conclude that the group differences are statistically significant at a predetermined significance level (usually α = 0.05).

P-value: The Gateway to Statistical Significance

What is a P-value?

In the realm of statistics, a P-value is a crucial number that represents the probability of obtaining a result as extreme or more extreme than the one observed, assuming the null hypothesis is true. The null hypothesis is a statement that there is no statistically significant difference between the groups being compared.

The Significance of Statistical Significance

A low P-value (typically below 0.05) suggests that the observed result is unlikely to have occurred by chance alone, assuming the null hypothesis is true. This leads to the rejection of the null hypothesis and the conclusion that a statistically significant difference exists between the groups.

Relationship with Alpha Level

The alpha level is a predetermined threshold that defines the level of statistical significance. It is often set at 0.05, meaning that a P-value below 0.05 is considered statistically significant. By setting the alpha level, researchers effectively decide how strict they want their criteria for rejecting the null hypothesis to be.

Understanding Statistical Significance

Statistical significance is not the same as practical significance. A statistically significant result does not necessarily mean that the observed difference is large or meaningful in real-world terms. Researchers must also consider the magnitude of the effect and the context in which it occurs. The P-value is a fundamental concept in statistical analysis. It helps researchers determine whether their results are statistically significant and supports the rejection or retention of the null hypothesis.
By understanding the concept of P-value and its relationship with statistical significance and alpha level, researchers can interpret their findings more accurately and make informed decisions.

Eta Squared (η²): Quantifying the Impact of the Independent Variable

In the realm of statistical analysis, understanding the effect size of a relationship between variables is crucial. Effect size measures the magnitude of that relationship, indicating the proportion of variance in the dependent variable that is explained by the independent variable. One commonly used measure of effect size in analysis of variance (ANOVA) is eta squared (η²).

Defining Eta Squared

Eta squared is a statistic that represents the proportion of variance in the dependent variable that is explained by the independent variable, after accounting for the effects of error variance. It is calculated as the ratio of the variance between groups to the total variance:

η² = SSbetween / (SSbetween + SSwithin)

where SSbetween is the sum of squares between groups and SSwithin is the sum of squares within groups.

Interpretation of Eta Squared

The value of η² ranges from 0 to 1. A value of 0 indicates no relationship between the independent and dependent variables, while a value of 1 indicates a perfect relationship. In general, an η² value of:
• Less than 0.06 is considered small
• Between 0.06 and 0.14 is considered medium
• Greater than 0.14 is considered large

Eta Squared in Context

Eta squared is a useful measure of effect size in ANOVA, as it provides insight into the strength of the independent variable's effect on the dependent variable. It is important to note that η² is not affected by sample size, unlike other effect size measures such as Cohen's d.
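As a small numeric sketch of the η² formula (the sums of squares below are hypothetical):

```python
# Eta squared from between- and within-group sums of squares,
# following eta^2 = SS_between / (SS_between + SS_within).
def eta_squared(ss_between, ss_within):
    return ss_between / (ss_between + ss_within)

# Hypothetical sums of squares from a small two-group design.
eta2 = eta_squared(18.0, 4.0)
print(round(eta2, 3))  # 0.818, a large effect by the rough benchmarks above
```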
By understanding the concept of eta squared, researchers can gain a deeper understanding of the practical significance of their findings. A high η² value indicates that the independent variable has a substantial impact on the dependent variable, while a low η² value suggests that other factors may be more influential.

Partial Eta Squared (η²p): The Refined Effect Size

Just like eta squared, partial eta squared (η²p) is a measure of effect size that indicates the proportion of variance in the dependent variable explained by an independent variable. However, unlike eta squared, partial eta squared excludes from its denominator the variance explained by any other independent variables in the model. This makes it a more focused estimate of a single effect's size, especially in designs with multiple independent variables. The formula for partial eta squared is:

η²p = SS_effect / (SS_effect + SS_error)

where:
• SS_effect is the sum of squares due to the effect of the independent variable
• SS_error is the sum of squares due to error

To calculate partial eta squared, you first need the sum of squares due to the effect of the independent variable and the sum of squares due to error. The sum of squares for the effect reflects the variation between the means of the different groups; the sum of squares due to error reflects the variation within the groups. Once you have calculated both values, plug them into the formula above to obtain the effect size.

Why Can Partial Eta Squared Differ from Eta Squared?

In a model with several independent variables, plain eta squared divides an effect's sum of squares by the total sum of squares, which includes the variance explained by every other factor in the model. As a result, each effect's eta squared shrinks as more factors are added, even if the effect itself is unchanged. Partial eta squared avoids this by dividing the effect's sum of squares by only that effect's sum of squares plus the error sum of squares, giving a more stable estimate of each effect's contribution. (In a one-way ANOVA with a single independent variable, partial eta squared and eta squared are identical.)

When Should You Use Partial Eta Squared?

Use partial eta squared whenever you want to estimate the effect size of an individual independent variable, particularly in designs with more than one factor. It estimates the proportion of variance in the dependent variable attributable to that variable after excluding the variance explained by the other effects in the model.
How to insert Beta (β) symbol in LaTeX?

In LaTeX, you can insert the beta (β) symbol using the command \beta. This command produces the Greek letter beta in the output. Because \beta is a math-mode command, you wrap it in math delimiters such as $...$ to include the beta symbol within text or in mathematical equations.

Beta (β) symbol in LaTeX

Here's an example of how to use it in a LaTeX document:

The beta symbol is: $\beta$

When you compile this document, you will see the beta symbol in the output. Let's have a look at another example of using the beta symbol:

The beta symbol ($\beta$) often plays a pivotal role in statistical modeling. When estimating parameters in linear regression, for instance, each predictor variable is associated with its own beta coefficient. These coefficients, denoted as $\beta_{1}$, $\beta_{2}$, and so forth, quantify the impact of each predictor on the response variable. Researchers interpret these beta values to understand the strength and direction of the relationships between variables.

Use of Beta (β) symbol in Equations

Here's an example of using the beta symbol in a LaTeX equation. Consider a linear regression model with two predictor variables \(x_1\) and \(x_2\), and a response variable \(y\). The model can be represented as:

\[
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \epsilon
\]

Here, \(\beta_0\) is the intercept, \(\beta_1\) and \(\beta_2\) are the coefficients associated with \(x_1\) and \(x_2\) respectively, and \(\epsilon\) represents the error term. The values of \(\beta_0\), \(\beta_1\), and \(\beta_2\) are estimated during the model fitting process.
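Putting the pieces together, a minimal complete document might look like the following; the preamble is our own scaffolding, and any standard LaTeX engine should compile it:

```latex
\documentclass{article}

\begin{document}

% Inline math: \beta wrapped in $...$
The estimated coefficient is $\beta_1$.

% Display math: the regression model from above
\[
  y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \epsilon
\]

\end{document}
```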
Now that we’re starting to get somewhere with Vectors and moving around in 3D, let’s turn our attention to the next topic, rotations. (Get it? “Turn our attention.”) Rotations in 3D are pretty awesome; it’s surprising to me how many clever ways there are to represent them. Unfortunately they all pretty much suck for writing a Physics Engine, but let’s take a look at them and discuss the pros and cons. If you want to jump to the answer, it’s Quaternions, but let’s see why. Caveat Emptor: I am not a Math major, I am self taught on this stuff. As I’ve learned new things, I’ve often found that my previous understandings were incorrect. The web is awash in articles about 3D rotation that are simply wrong, this may be just another example.

Euler Angles

Apparently this is pronounced “Oiler,” who knew? These are appealing because they’re the simplest to understand. If you’ve played any flight simulators or spaceship fighting games then you already understand them, they’re pitch, yaw, and roll. If you haven’t played those kinds of games then I feel sorry for you. Here’s a refresher, imagine you’re flying an airplane, these angles are:
• Pitch – is the nose pointing up or down
• Yaw – this isn’t used much in airplanes, but it’s turning left and right while keeping the wings level. It’s how a car turns
• Roll – when planes turn one wing goes up and the other goes down, this is roll. Hopefully it’s never used in a car.

This is probably as good a time as any to try to start thinking in terms of the X, Y, and Z axes. For our discussions the X axis moves left to right. The Y axis moves up and down. The Z axis moves forward to back. Pretty simple stuff, right? For Euler Angles to make sense we also have to think about the planes that the axes define. In this case I mean plane like a flat surface like a table top or a wall, not the kind you fly in. Take a moment to convince yourself that any 2 axes define a plane. The X and Z axes together define planes like the floor or table top.
The XY plane would be a TV you’re watching. The YZ plane defines the walls to your right or left side. So anyway, if you say your pitch is 45 degrees up from the XZ plane (a table top), 45 degrees to the right of the YZ plane (a wall to your left or right) and then 180 degrees of roll, that would mean you’re facing up and right, and you’re flipped upside down. That’s Euler Angles. Like I said, it’s not very hard to understand. However it suffers from some pretty serious limitations. The first of which is that the order of application is important. If you do the roll first or last you will end up in a completely different place. In our previous example if you started by rolling 180 degrees then you’d be upside down and your concept of “up” would be different. So from there if you went “up” and “right” you’d end up in a different place than if you did the roll last. So if you’re tempted to write a rotation function like

    void rotate( float pitch, float yaw, float roll )

Just stop! Nothing about that interface says in which order the operations will be applied, so you’re almost guaranteed to forget and change your assumption and end up with a horrible bug. The second major problem with Euler Angles is “Gimbal Lock.” Go to Wikipedia for more information on this one, it’s fascinating but a bit beyond the scope of this. The short version is that while Euler Angles can define every possible rotation, they can’t always be combined. So while you’re perfectly safe representing your current rotation with Euler Angles, if you then want to turn the ship a bit, and maybe spin when a missile hits, and you try to combine all of these rotations it might work… or you might end up in a state where the next rotation will be jarring and random. The third major problem with Euler Angles is Interpolation. This means if you have two different rotations and you want to move some percentage of the way between them, it’s difficult to figure out where that should be. How could this be useful?
Well if you know you’re rotating 50 degrees per second and half a second has passed, how far should you rotate? 25 degrees sounds like an obvious answer. But try doing this when you’re rotating around all 3 axes at the same time? That’s Interpolation and it’s very hard to do with Euler Angles.

Axis and Angle

This one here is my own personal favorite because it’s just so darned handy. It takes a little bit longer to get used to, but it’s worth the study time. For this system you start with a Vector, and then you rotate around it by the Angle specified. Makes sense, right? Think about a door. They rotate around the axis defined by the hinges, a Vector that points straight up and down. How far they rotate is specified by the Angle. What if you’re standing straight up and you want to bow? The Axis would be a straight line from your left to right, forming a hinge through your hips. How far you bow around that Axis is the Angle. These start to get a little confusing when you want to rotate around an Axis that isn’t so easy to picture. Like what if the Axis is (1,1,1)? That’s sort of a diagonal axis that’s up, right, and back, so rotating around that is a little harder to imagine. I often end up holding my hands in front of my face with my fingers pointing in different directions and trying to rotate my arms around them. My wife calls this “Invisible Knitting.” Another great advantage of Axis and Angle is that it works really well with Cross Products. Imagine you’re pointing directly forward and you want to turn to your right. If you take your starting Vector (forward) and get the Cross Product with your destination Vector (right) you will end up with a new Vector that points straight up. It turns out that this new Vector is exactly the Axis you need to rotate around. And its length is related to how far you have to rotate: for unit vectors, it equals the sine of the angle between them. So yeah, Cross Product and Axis and Angle are made for each other.
So to compare Axis and Angle with Euler Angles I would say that Euler Angles are a little simpler to understand. But that’s about their only win. Axis and Angle may be a bit harder to grasp, but once you become comfortable with them they’re easy. Furthermore they don’t suffer from bugs due to order of execution, and they work really well with Cross Products. Big wins. As far as gimbal lock goes, I think they don’t have this problem. I have to confess I don’t understand all the math on this one, so we’ll chalk that one up as a half point in their favor. Alas, Interpolation is still very difficult with Axis and Angle.

Rotation Matrices

I’m afraid if you’ve come here hoping for a deeper understanding of Rotation Matrices then you’ve come to the wrong place. In all my Physics Engines over all my years I have made it a point to never use these. A 3×3 rotation matrix simply has too many numbers in it to make any sense to me. I have no intuitive feel for how Matrices work, so debugging them is basically impossible. The Dubious Engine uses exactly one matrix, and it’s required to pass rotations to OpenGL. I don’t understand it, I just know how to dump my rotation into it. Here’s the important thing I do know about Rotation Matrices: they also suffer from Gimbal Lock and are hard to Interpolate. So as far as comparisons go, they’re impossible to understand, and suffer from the same exact failures as Euler Angles and Axis and Angle. So why bother learning them?

Enter the Quaternion

For most of my Game Engine life I considered Quaternions to be unknowable. I couldn’t figure out how they worked or how to imagine them, so I just copied some equations I learned online, tested them enough to convince myself they worked, and moved on. However I am pleased to report that after some more study they’ve started to make some sense to me. I’ve recently gone back through my engine and smoothed out some misunderstandings with Quaternions.
None of the math changed, so I don’t think I fixed any bugs, but it does make a bit more sense now, which is worthwhile. It’s also a little slower, which is a shame, but worth it for the clarity.

I’m not going to try to write another article explaining Quaternions. I’ve found two that did the trick for me, so I’m just going to send you there and let you learn the same way I did. However, I will fill in the starting point that I was missing. I was familiar with “Imaginary Numbers,” which I hope you are too. If not, they’re usually represented as i and are defined as:

    i = sqrt(-1)
    i * i = -1

What I did not know was that there is another kind of number called “Complex Numbers,” defined as a real number plus an imaginary one:

    a + bi

I had no idea these existed, and most Quaternion articles start out assuming you know them. Luckily not the ones I recently found. So, at this point, push the pause button on this article, and spend some time learning from these two sources. I am humbled by how well they explain the subject; they do it better than I ever will. When you’re done, feel free to return and we can look at how we can use Quaternions in a game engine:

• https://www.youtube.com/watch?v=mHVwd8gYLnI – as taught by an actual University Professor. Unfortunately, while attempting to speak and write on the blackboard he occasionally messes up an equation, so don’t copy the math verbatim. Still an amazing lecture that will get you up to speed very quickly.

• https://www.3dgep.com/understanding-quaternions/ – the big one. Nothing about it is easy, you will have to re-read it a bunch of times, but it is complete and awesome. In fact, his whole site is awesome; I will be aspiring to be half as good.

Okay, you now know what I do about Quaternions.
Here’s how you can represent one in C++:

    struct Quaternion {
        float w;
        Vector v;
    };

From the 3D Game Engine Programming article, we know that in a lot of Quaternion math, the imaginary component acts as a Vector (with dot products, cross products, scalar multiplication, etc). Luckily we already have a Vector class defined that does these things, so it’s easy to reuse. In fact, all of the math for a standard Quaternion is pretty straightforward, so my Quaternion class is relatively simple; here’s a link, there shouldn’t be any surprises.

How do we use them? Well, really, for 3D rotations it’s a Unit Quaternion we want to use. These are the kind that represent rotations. They can easily be created from an Axis and Angle like this:

    Unit_quaternion::Unit_quaternion( const Unit_vector& axis, float angle )
    {
        w = cosf( angle / 2.0f );
        v = Vector(axis) * sinf( angle / 2.0f );
    }

So now we can convert from something we’re comfortable with, Axis and Angle, into a Unit Quaternion. From there the math for combining multiple Quaternions to produce an output Unit Quaternion is fairly easy to code up. Lastly, any Unit Quaternion can be pretty easily converted back to three axes: X, Y, and Z.

I’ll tell you one major drawback of working with these things though. There is just no way to get an intuitive sense of them. Here’s a pop quiz for you to see how your 3D intuition is developing. Point (with your finger) to Vector (1, 1, 0) and rotate 45 degrees forward around it. If you pointed diagonally up and right and then kind of rolled forward and left a bit, congrats! It might not be perfect, but at least you have a sense of it. Okay, now imagine Quaternion (1, (1, 1, 0)) and rotate around that. That real bit has something to do with the cosine of half the angle, and the imaginary bit is related to the sine of half the angle… I have absolutely no idea where to go. So imagine trying to debug this code.
Like you notice in your simulation that some cubes are rotating “oddly.” So you set your breakpoint, look at your rotation, and see it’s (0.43255, (0.97666432, 0.43234, 0.124535)). What do you do with that?

So in summary, let’s compare Quaternions to the other Rotation representations we’ve discussed. On the plus side, they do not suffer from Gimbal Lock or bugs due to order of execution. We didn’t specifically discuss Interpolation, but they do it very well; it’s called “Spherical Linear Interpolation,” or SLERP, and it’s easy to code up. On the negative side, they’re very difficult to understand and debug. However, this is somewhat smoothed over by the simplicity of converting between Unit Quaternions and Axis and Angle. All in all, they’re a big win, which is probably why most Physics Engines use them.
How to include P-Delta effect in analysis

The two primary means for including the P-Delta effect in nonlinear analysis are:
1. For each load combination, create a nonlinear analysis case which includes the P-Delta geometric-nonlinearity parameter.
2. For the anticipated gravity loads, create an initial P-Delta analysis case which includes the P-Delta geometric-nonlinearity parameter, then treat all other analyses as linear, using the stiffness matrix developed for this one set of P-Delta loads.
Canonical extensions and profinite completions of semilattices and lattices
Gouveia, M. J.; Priestley, H. A.
Order, 31(2) (2014), 189-216

Canonical extensions of (bounded) lattices have been extensively studied, and the basic existence and uniqueness theorems for these have been extended to general posets. This paper focuses on the intermediate class S∧ of (unital) meet semilattices. Any S ∈ S∧ embeds into the algebraic closure system Filt(Filt(S)). This iterated filter completion, denoted Filt²(S), is a compact and join-meet-dense extension of S. The complete meet-subsemilattice of Filt²(S) consisting of those elements which satisfy the condition of meet-join-density is shown to provide a realisation of the canonical extension of S. The easy validation of the construction is independent of the theory of Galois connections. Canonical extensions of bounded lattices are brought within this framework by considering semilattice reducts. Any S in S∧ has a profinite completion, Pro_S∧(S). Via the duality theory available for semilattices, Pro_S∧(S) can be identified with Filt²(S), or, if an abstract approach is adopted, with F∨(F∧(S)), the free join completion of the free meet completion of S. Lifting of semilattice morphisms can be considered in any of these settings. This leads, inter alia, to a very transparent proof that a homomorphism between bounded lattices lifts to a complete lattice homomorphism between the canonical extensions. Finally, we demonstrate, with examples, that the profinite completion of S, for S ∈ S∧, need not be a canonical extension. This contrasts with the situation for the variety of bounded distributive lattices, within which profinite completion and canonical extension coincide.
An analysis on strip vibration coupled with torsional vibration of main drive system of rolling mill

According to the movement mechanism of strip and rollers during the continuous rolling process, the main drive system of each stand was simplified to a single-degree-of-freedom discrete model, and the strip was simplified to an axially moving Euler beam. Then, a nonlinear continuous-discrete coupled vibration model between the transverse and longitudinal vibrations of the strip and the torsional vibration of the main drive system was established. According to Hamilton’s principle, the nonlinear differential equations were established. Moreover, the modified iteration method and the Kantorovich averaging method were used to solve the differential equations. Based on numerical calculation, the amplitude-frequency responses of strip vibration coupled with torsional vibration of the main drive system were obtained. Finally, the influences of the axial velocity, the strip tension, the torsional stiffness, and the rotational inertia on the vibration characteristics were discussed. The results provide a theoretical reference for the control and analysis of rolling mill vibration in engineering practice.

1. Introduction

Rolling mill vibration has brought serious troubles to the iron and steel industry for many years and is a significant technical problem. The vibration forms are various, and their characteristics and causes are different. At present, research on the different forms of rolling mill vibration mainly covers vertical and horizontal vibration of rollers, transverse and longitudinal vibration of strip [1, 2], torsional vibration of the main drive system [3], axial vibration, etc. Research on the various vibrations of rolling mills began as early as 1967. Moller and Hoggart
[4] found torsional vibration of the rolling mill while employing a two-roll test machine, and considered this behavior to be self-excited vibration. Lawrence and Thomas [5] analyzed chatter marks on the strip surface caused by torsional vibration arising from a transmission gear defect. Swiatoniowski and Bar [6] analyzed the chattering phenomenon during the rolling process, and a mathematical model of self-excited vibration was established. Wang et al. [7] analyzed the influence of rolling force on the horizontal stiffness of the rolling mill, and a functional relationship among the vertical vibration of the rolling mill, the horizontal vibration of the work roll and the torsional vibration of the main drive system was established. Sun et al. [8] investigated torsional vibration with a dynamic model of the six-roller system of an 1100 rolling mill, and the influence of torsional vibration on strip shape was acquired. Xu et al. [9, 10] established a hybrid system model of strip coupled with rollers, and the coupled vibration mechanism of roller and strip with tension fluctuation was obtained. Zou et al. [11] analyzed the different influences of cold rolling and hot rolling on axial force, and a mathematical model was established. Du et al. [12] established a nonlinear dynamic model coupling rollers and strip, and an inertial boundary condition was proposed. However, there are few studies on the coupled vibration of multiple systems. In the earlier stage, research on strip vibration coupled with torsional vibration of the main drive system was carried out by our group, such as vibration analysis of the main drive system of each stand by finite element software. Researchers found that energy was transferred from one main drive system to another through the strip when resonance occurred, which resulted in great fluctuation of the rollers, and the vibration amplitude could even exceed that of self-resonance [13].
Therefore, it is significant to further analyze the coupled vibration between the strip and the main drive system of the rolling mill. In this paper, the main drive system of the rolling mill is simplified to a single-degree-of-freedom discrete model, and the strip is simplified to an axially moving Euler beam [14, 15]. Then, a continuous-discrete coupled vibration model between the transverse and longitudinal vibrations of the strip and the torsional vibration of the rolling mill main drive system is established. The research results can provide an important theoretical reference for the control and analysis of strip vibration coupled with torsional vibration during the continuous rolling process.

2. Mechanical and mathematical models

The main drive system of each stand can be regarded as a mass-spring system in the tandem mill, mainly composed of inertial components (electromotor, reduction gears, rollers, etc.) and elastic components (connecting shafts); a dimensional geometric model is shown in Fig. 1. Due to the traction behavior of the motor drive, the motor of the main drive system can be regarded as a fixed end [16], and since the rotational inertia of the roller is much larger than that of the other parts, the main drive system can be simplified to a single-degree-of-freedom discrete spring-mass model. The rollers can be treated as rigid inertial components, where $j$ is the rotational inertia and $\theta$ is the rotation angle; the connecting shafts can be treated as the elastic element, where $k$ is the torsional stiffness, as shown in Fig. 2. When the transverse and longitudinal vibrations of the strip are considered, its mechanical geometry is similar to an axially moving beam [17, 18]. Also, the strip is an elastic continuum whose thickness is much less than its length, so the strip is treated as an isotropic Euler beam based on the theory of axially moving beams. The equivalent mechanical model is shown in Fig.
3, where it is assumed that the strip runs at the uniform axial velocity $v_0$. It is hypothesized that there is no relative motion between rollers and strip, and that the upper and lower rollers are symmetric discs rotating about fixed axes along the width of the beam. The transverse and longitudinal displacements of the Euler beam are $w(x_0,y_0,t)$ and $u(x_0,y_0,t)$ respectively; $l$ is the length of the Euler beam between the two stands; the left and right tensions are $P_1$ and $P_2$ respectively, with $P_1=P_2=P_0$.

Fig. 1. A dimensional geometric model of main drive system of one stand: 1 – electromotor, 2 – intermediate shaft, 3 – reduction gears, 4 – profile spindle, 5 – working roll, 6 – back-up roll, 7 – screwdown device, 8 – balancing device, 9 – unjamming gear, 10 – mill housing, 11 – platform

Based on Hamilton’s principle, the mathematical model of strip vibration coupled with torsional vibration of the main drive system is established. The kinetic energy $T_1$ of the axially moving Euler beam can be written as:

$$T_1=\frac{1}{2}\int_0^l \rho A\left[\left(v_0+\frac{du}{dt}\right)^2+\left(\frac{dw}{dt}\right)^2\right]dx_0=\frac{1}{2}\int_0^l \rho A\left[\left(v_0+u_{,t}+v_0 u_{,x_0}\right)^2+\left(w_{,t}+v_0 w_{,x_0}\right)^2\right]dx_0,$$

where $\rho$ is the density of the strip, $A$ is the cross-sectional area of the Euler beam, and $u_{,t}$, $w_{,t}$ and $u_{,x_0}$, $w_{,x_0}$ are the first partial derivatives of $u(x_0,y_0,t)$ and $w(x_0,y_0,t)$ with respect to $t$ and $x_0$, respectively.

Fig. 2. A simplified model of main drive system of rolling mill

Fig. 3. A mechanical model of strip between main drive systems of two stands

The kinetic energy $T_2$ of the roller can be written as:

$$T_2=\frac{1}{2}j\left(\frac{d\theta}{dt}\right)^2,$$

where $r$ is the radius of the roller, which relates the roller rotation to the strip motion through the no-slip condition $r\,d\theta/dt=v_0+du/dt$.
The potential energy $U_1$ of the Euler beam can be written as:

$$U_1=\frac{1}{2}\iiint E\varepsilon_{x_0}^2\,dx_0\,dy_0\,dz_0=\frac{1}{2}\int_0^l\left[EA\left(u_{,x_0}+\frac{1}{2}w_{,x_0}^2\right)^2+EI\,w_{,x_0x_0}^2\right]dx_0,$$

where $\varepsilon_{x_0}$ is the strain of the Euler beam, $E$ is the Young’s modulus and $I$ is the moment of inertia. The potential energy $U_2$ of the tension can be written as:

$$U_2=\int_0^l P_0\left(u_{,x_0}+\frac{1}{2}w_{,x_0}^2\right)dx_0.$$

Then, according to Hamilton’s principle, the following equation can be obtained:

$$\delta\int_{t_1}^{t_2}\left(T_1+T_2-U_1-U_2\right)dt=0.$$

Substitution of Eqs. (1-4) into Eq. (5) yields the motion equations:

$$\rho A u_{,tt}+\left(\rho A v_0^2-EA\right)u_{,x_0x_0}+2\rho A v_0 u_{,x_0t}-EA\,w_{,x_0}w_{,x_0x_0}=0,$$

$$\rho A w_{,tt}+EI\,w_{,x_0x_0x_0x_0}+\left(\rho A v_0^2-P_0\right)w_{,x_0x_0}+2\rho A v_0 w_{,x_0t}-EA\left(w_{,x_0x_0}u_{,x_0}+u_{,x_0x_0}w_{,x_0}+\frac{3}{2}w_{,x_0}^2w_{,x_0x_0}\right)=0.$$

In general, the axial kinetic energy caused by transverse vibration is relatively small. Letting $u_{,t}=u_{,tt}=0$, Eq. (6) is simplified as:

$$\left(\rho A v_0^2-EA\right)u_{,x_0x_0}-EA\,w_{,x_0}w_{,x_0x_0}=0.$$

The relationship between torque and torsional stiffness is expressed as $M_i=k_i\Delta\theta_i$, where $\Delta\theta_i$ is the relative torsional angle between two shaft sections and $k_i$ is the torsional stiffness of the shaft segment. The corresponding boundary conditions are imposed at $x_0=0$ and $x_0=l$, together with the amplitude condition:

$$w_{\max}=\phi_m.$$

The time variable of the equation is separated from the space variable by substitution of $w=\phi_0(x_0)\cos\omega_0 t$ into Eq.
(8), and the following equation can be obtained: $u=\varphi_0(x_0)\cos^2\omega_0 t$, where $\omega_0$ is the vibration frequency. Based on the Kantorovich averaging method on the interval $\left[0,2\pi/\omega_0\right]$, the time variable can be eliminated. The motion equations can be simplified as:

$$\left(\rho A v_0^2-EA\right)\varphi_{0,x_0x_0}-EA\,\phi_{0,x_0}\phi_{0,x_0x_0}=0,$$

$$EI\,\phi_{0,x_0x_0x_0x_0}+\left(\rho A v_0^2-P_0\right)\phi_{0,x_0x_0}-\omega_0^2\rho A\phi_0-\frac{3}{4}EA\left(\phi_{0,x_0x_0}\varphi_{0,x_0}+\varphi_{0,x_0x_0}\phi_{0,x_0}+\frac{3}{2}\phi_{0,x_0}^2\phi_{0,x_0x_0}\right)=0.$$

The boundary conditions can be written as follows. When $x_0=0$:

$$EAr^2\varphi_{0,x_0}+jv_0^2\varphi_{0,x_0x_0}-\frac{4}{3}j\omega_0^2\varphi_0-\frac{4}{3}P_0r+k\frac{\varphi_0}{r}=0.$$

When $x_0=l$:

$$EAr^2\varphi_{0,x_0}+jv_0^2\varphi_{0,x_0x_0}-\frac{4}{3}j\omega_0^2\varphi_0+\frac{4}{3}P_0r+k\frac{\varphi_0}{r}=0,$$

$$\phi_0(0)=\phi_0(l)=\phi_{0,x_0x_0}(0)=\phi_{0,x_0x_0}(l)=0,$$

$$\phi_0\left(\frac{l}{2}\right)=\phi_{0m}.$$

The dimensionless quantities are given by:

$$x=\frac{x_0}{l},\quad\phi=\frac{\phi_0}{l},\quad\varphi=\frac{\varphi_0}{l},\quad v=v_0\sqrt{\frac{\rho}{E}},\quad\omega=\omega_0\sqrt{\frac{\rho Al^4}{EI}},$$

$$S=\frac{Al^2}{I},\quad J=\frac{j}{\rho Ar^2l},\quad P=\frac{P_0l^2}{EI},\quad K=\frac{kl}{EAr^3}.$$

Then, the dimensionless form of Eqs.
(13-18) can be obtained, respectively:

$$\left(v^2-1\right)\varphi_{,xx}-\phi_{,x}\phi_{,xx}=0,$$

$$\phi_{,xxxx}+\left(Sv^2-P\right)\phi_{,xx}-\omega^2\phi-\frac{3}{4}S\left(\phi_{,xx}\varphi_{,x}+\varphi_{,xx}\phi_{,x}+\frac{3}{2}\phi_{,x}^2\phi_{,xx}\right)=0.$$

When $x=0$:

$$\varphi_{,x}+Jv^2\varphi_{,xx}-\frac{4J}{3S}\omega^2\varphi-\frac{4P}{3S}+K\varphi=0.$$

When $x=1$:

$$\varphi_{,x}+Jv^2\varphi_{,xx}-\frac{4J}{3S}\omega^2\varphi+\frac{4P}{3S}+K\varphi=0,$$

$$\phi(0)=\phi(1)=\phi_{,xx}(0)=\phi_{,xx}(1)=0,$$

$$\phi\left(\frac{1}{2}\right)=\phi_m.$$

3. Analytical solution of vibration equations

Due to the complicated solving process of Eqs. (19-24), the modified iteration method is used to solve the equations in this section.

3.1. The first-order approximate solution

Firstly, all the nonlinear terms of Eq. (20) are omitted; the simplified motion equation is:

$$\phi_{1,xxxx}-\omega^2\phi_1=0.$$

The series solution of Eq. (25) is:

$$\phi_1(x)=a_0M_0(x)+a_1N_0(x)+a_2I_0(x)+a_3K_0(x),$$

$$M_0=\sum_{n=0}^{\infty}\frac{\omega^{2n}x^{4n}}{(4n)!},\quad N_0=\sum_{n=0}^{\infty}\frac{\omega^{2n}x^{4n+1}}{(4n+1)!},\quad I_0=\sum_{n=0}^{\infty}\frac{\omega^{2n}x^{4n+2}}{(4n+2)!},\quad K_0=\sum_{n=0}^{\infty}\frac{\omega^{2n}x^{4n+3}}{(4n+3)!}.$$

By substitution of Eq. (26) into Eqs. (23-24), one obtains $\omega_1=18.71$. Then, the coefficients of Eq.
(26) become:

$$a_0=0,\quad a_1=\mu_1\phi_m,\quad a_2=0,\quad a_3=\mu_2\phi_m,$$

$$\phi_1(x)=\phi_m\left[\mu_1N_0(x)+\mu_2K_0(x)\right],$$

$$\mu_1=\frac{-K_0(1)}{N_0(1)K_0\left(\frac{1}{2}\right)-N_0\left(\frac{1}{2}\right)K_0(1)},\quad\mu_2=\frac{N_0(1)}{N_0(1)K_0\left(\frac{1}{2}\right)-N_0\left(\frac{1}{2}\right)K_0(1)}.$$

Substituting Eq. (27) into Eq. (19) gives:

$$\varphi_1(x)=\frac{1}{2\left(v^2-1\right)}\int\phi_{1,x}^2\,dx+c_1x+c_2.$$

The coefficients $c_1$ and $c_2$ can be determined by Eqs. (21-22), with the results:

$$c_1=\frac{3S}{4J\omega^2-3SK}\left\{\frac{1}{2\left(v^2-1\right)}\left[2Jv^2\phi_{1,x}\phi_{1,xx}-\left(\frac{4J}{3S}\omega^2-K\right)\int\phi_{1,x}^2\,dx+\phi_{1,x}^2\right]_{x=0}^{x=1}+\frac{8P}{3S}\right\},$$

$$c_2=\frac{3S}{4J\omega^2-3SK}\left\{\frac{1}{2\left(v^2-1\right)}\left[2Jv^2\phi_{1,x}\phi_{1,xx}-\left(\frac{4J}{3S}\omega^2-K\right)\int\phi_{1,x}^2\,dx+\phi_{1,x}^2\right]_{x=1}-\frac{4P}{3S}+\frac{8P}{4J\omega^2-3SK}\right\}.$$

Then, by substitution of $c_1$ and $c_2$ into $\varphi_1(x)$, the first-order approximate solution can be obtained.

3.2. The second-order modified-iterative solution

In this subsection, the second-order modified-iterative solution is given. By substitution of Eqs. (27-28) into Eq.
(20), one has:

$$\phi_{2,xxxx}+\left(Sv^2-P\right)\phi_{1,xx}-\omega^2\phi_2-\frac{3}{4}S\left(\phi_{1,xx}\varphi_{1,x}+\phi_{1,x}\varphi_{1,xx}+\frac{3}{2}\phi_{1,x}^2\phi_{1,xx}\right)=0.$$

That is:

$$\phi_{2,xxxx}-\omega^2\phi_2=\alpha\phi_{1,xx}+\beta\phi_{1,x}^2\phi_{1,xx},$$

$$\alpha=P+\frac{3}{4}Sc_1-Sv^2,\quad\beta=\frac{9Sv^2}{8\left(v^2-1\right)},$$

$$\phi_{1,x}^2(x)=\phi_m^2\left(\sum_{n=0}^{\infty}A_n^{(1)}x^{4n}+\sum_{n=0}^{\infty}B_n^{(1)}x^{4n+2}\right),\quad\phi_{1,xx}(x)=\phi_m\left(\sum_{n=1}^{\infty}C_n^{(1)}x^{4n-1}+\sum_{n=0}^{\infty}D_n^{(1)}x^{4n+1}\right),$$

$$A_0^{(1)}=\mu_1^2,\quad A_n^{(1)}=\sum_{n_1=0}^{n}\frac{\mu_1^2\omega_1^{2n}}{(4n_1)!(4n-4n_1)!}+\sum_{n_1=0}^{n-1}\frac{\mu_2^2\omega_1^{2n-2}}{(4n_1+2)!(4n-4n_1-2)!},\quad(n=1,2,\dots),$$

$$B_n^{(1)}=\sum_{n_1=0}^{n}\frac{2\mu_1\mu_2\omega_1^{2n}}{(4n_1)!(4n-4n_1+2)!},\quad(n=0,1,\dots),$$

$$C_n^{(1)}=\frac{\mu_1\omega_1^{2n}}{(4n-1)!},\quad(n=1,2,\dots),\qquad D_n^{(1)}=\frac{\mu_2\omega_1^{2n}}{(4n+1)!},\quad(n=0,1,\dots).$$

With the above equations, the following is obtained:

$$\phi_{1,x}^2\phi_{1,xx}=\phi_m^3\left(\sum_{n=1}^{\infty}A_n^{(2)}x^{4n-1}+\sum_{n=0}^{\infty}B_n^{(2)}x^{4n+1}+\sum_{n=0}^{\infty}C_n^{(2)}x^{4n+3}\right),$$

$$A_n^{(2)}=\sum_{m=0}^{n}A_m^{(1)}C_{n-m}^{(1)},\quad(n=1,2,\dots),\qquad B_0^{(2)}=\mu_1^2\mu_2,$$

$$B_n^{(2)}=\sum_{m=0}^{n}\left(A_m^{(1)}D_{n-m}^{(1)}+B_m^{(1)}C_{n-m}^{(1)}\right),\quad(n=1,2,\dots),\qquad C_n^{(2)}=\sum_{m=0}^{n}B_m^{(1)}D_{n-m}^{(1)},\quad(n=0,1,\dots).$$

Due to the property of the series solution, the solution of Eq. (32) can be written as:

$$\phi_2(x)=\phi_m\left[\zeta_1\sum_{n=0}^{\infty}\frac{\omega^{2n}x^{4n+1}}{(4n+1)!}+\zeta_2\sum_{n=0}^{\infty}\frac{\omega^{2n}x^{4n+3}}{(4n+3)!}\right]+\phi_m^3\left(\sum_{n=1}^{\infty}C_nx^{4n-1}+\sum_{n=0}^{\infty}D_nx^{4n+1}+\sum_{n=0}^{\infty}E_nx^{4n+3}\right)+\phi_m\left(\sum_{n=1}^{\infty}A_nx^{4n-1}+\sum_{n=0}^{\infty}B_nx^{4n+1}\right),$$

where $\zeta_1$ and $\zeta_2$ denote the undetermined coefficients, and:

$$A_1=B_0=C_1=D_0=E_0=0,\quad A_{n+1}=\frac{\omega^2A_n+\alpha C_n^{(1)}}{(4n+3)(4n+2)(4n+1)4n},$$

$$B_{n+1}=\frac{\omega^2B_n+\alpha D_n^{(1)}}{(4n+5)(4n+4)(4n+3)(4n+2)},\quad C_{n+1}=\frac{\omega^2C_n+\beta A_n^{(2)}}{(4n+3)(4n+2)(4n+1)4n},$$

$$D_{n+1}=\frac{\omega^2D_n+\beta B_n^{(2)}}{(4n+5)(4n+4)(4n+3)(4n+2)},\quad E_{n+1}=\frac{\omega^2E_n+\beta C_n^{(2)}}{(4n+7)(4n+6)(4n+5)(4n+4)}.$$

By substitution of Eq. (34) into Eqs.
(23-24), one obtains the homogeneous system $D\zeta=0$, where:

$$\zeta=\left[\begin{array}{ccc}\zeta_1&\zeta_2&1\end{array}\right]^T,\quad D=\left[\begin{array}{ccc}d_{11}&d_{12}&d_{13}\\ d_{21}&d_{22}&d_{23}\\ d_{31}&d_{32}&d_{33}\end{array}\right],$$

$$d_{11}=\sum_{n=0}^{\infty}\frac{\omega^{2n}}{(4n+1)!},\quad d_{12}=\sum_{n=0}^{\infty}\frac{\omega^{2n}}{(4n+3)!},$$

$$d_{13}=\left(\sum_{n=1}^{\infty}A_n+\sum_{n=0}^{\infty}B_n\right)+\phi_m^2\left(\sum_{n=1}^{\infty}C_n+\sum_{n=0}^{\infty}D_n+\sum_{n=0}^{\infty}E_n\right),$$

$$d_{21}=\sum_{n=0}^{\infty}\frac{\omega^{2n}}{(4n+1)!}\left(\frac{1}{2}\right)^{4n+1},\quad d_{22}=\sum_{n=0}^{\infty}\frac{\omega^{2n}}{(4n+3)!}\left(\frac{1}{2}\right)^{4n+3},$$

$$d_{23}=\left(\sum_{n=1}^{\infty}A_n\left(\frac{1}{2}\right)^{4n-1}+\sum_{n=0}^{\infty}B_n\left(\frac{1}{2}\right)^{4n+1}\right)+\phi_m^2\left[\sum_{n=1}^{\infty}C_n\left(\frac{1}{2}\right)^{4n-1}+\sum_{n=0}^{\infty}D_n\left(\frac{1}{2}\right)^{4n+1}+\sum_{n=0}^{\infty}E_n\left(\frac{1}{2}\right)^{4n+3}\right]-1,$$

$$d_{31}=\sum_{n=1}^{\infty}\frac{\omega^{2n}}{(4n-1)!},\quad d_{32}=\sum_{n=0}^{\infty}\frac{\omega^{2n}}{(4n+1)!},$$

$$d_{33}=\left(\sum_{n=1}^{\infty}(4n-1)(4n-2)A_n+\sum_{n=0}^{\infty}(4n+1)4nB_n\right)+\phi_m^2\left[\sum_{n=1}^{\infty}(4n-1)(4n-2)C_n+\sum_{n=0}^{\infty}(4n+1)4nD_n+\sum_{n=0}^{\infty}(4n+3)(4n+2)E_n\right].$$

The frequency $\omega_2$ can be obtained from $\det D=0$. The coefficients $\zeta_1$ and $\zeta_2$ can then be determined by Eq. (35), and the second-order modified-iterative solution is obtained.

4.
Numerical simulation and analysis

To verify the proposed method, numerical simulations are carried out for GL-E36 strip. The main parameters of the tandem mill and strip are listed as follows: the distance between stand F2 and stand F3 $l=$ 2.5 m, the thickness of strip $h=$ 0.018 m, the torsional stiffness $k=$ 9×10^3 N·m/rad, and the strip tension ${P}_{0}=$ 8×10^3 N.

When the rotational inertia of the roller $j$ is 200 kg·m^2, 400 kg·m^2, 600 kg·m^2 and 800 kg·m^2, respectively, the relationship curves between the axial velocity and the frequency of the strip are shown in Fig. 4. It can be seen that the vibration frequency decreases gradually with increasing axial velocity, and the greater the rotational inertia of the roller, the faster the frequency decreases. In particular, the effect becomes more obvious when ${v}_{0}<$ 0.01 m/s: the greater the rotational inertia, the larger the vibration frequency, and vice versa. That is to say, the axial velocity has a strong influence on the vibration characteristics of the strip when the rotational inertia of the roller is large.

Fig. 4. Relationship between axial velocity and frequency

Fig. 5. Relationship between tension and frequency

Fig. 5 shows the relationship curves between the strip tension and the frequency of the strip for different rotational inertias of 200 kg·m^2, 400 kg·m^2 and 600 kg·m^2, with torsional stiffness $k=$ 9×10^3 N·m/rad and axial velocity ${v}_{0}=$ 0.008 m/s. From Fig. 5, the vibration frequency decreases with increasing strip tension. When ${P}_{0}\le$ 5×10^3 N, the larger the rotational inertia, the higher the frequency of the strip. Whereas, when ${P}_{0}>$ 5×10^3 N, the ${P}_{0}-{\omega }_{0}$ curves fluctuate and then gradually stabilize. It follows that a smaller strip tension results in a greater effect on the frequency.
Additionally, it is shown that the bigger the rotational inertia, the faster the ${P}_{0}-{\omega }_{0}$ curve declines. In other words, this is further proof that a greater rotational inertia causes a bigger variation of frequency.

Fig. 6 depicts the relationship curves between the torsional stiffness of the drive system and the frequency of the strip for different rotational inertias of 200 kg·m^2, 400 kg·m^2, 600 kg·m^2, and 800 kg·m^2, with strip tension ${P}_{0}=$ 8×10^3 N and axial velocity ${v}_{0}=$ 0.008 m/s. As can be seen from Fig. 6, with the increase of the torsional stiffness, the $k-{\omega }_{0}$ curves show a uniform change, and the vibration frequency decreases gradually. A larger rotational inertia leads to a larger downtrend of the $k-{\omega }_{0}$ curve. The result shows that the boundary of the strip approaches the clamped condition as the rotational inertia increases toward infinity. Compared with a model of a moving strip without the main drive system, the overall frequency curves tend to be more stable. Moreover, the greater the torsional stiffness, the smaller the rolling force exerted, which reduces the vibration frequency of the strip to some extent.

From the frequency-response curves in Figs. 4-6, a common feature can be found: a larger rotational inertia of the roller gives rise to a greater variation tendency of the frequency, which causes a more obvious vibration. In other words, a larger rotational inertia has a greater effect on the relationships between the influencing factors (the axial velocity, strip tension and torsional stiffness) and the frequency. Therefore, it is very important to select an appropriate rotational inertia of the roller in research and practice.

Fig. 7 illustrates the amplitude-frequency responses under different axial velocities: 2×10^–3 m/s, 6×10^–3 m/s, 10×10^–3 m/s, 14×10^–3 m/s and 18×10^–3 m/s.
The torsional stiffness $k=$ 4×10^4 N·m/rad, the strip tension ${P}_{0}=$ 8×10^3 N, and the rotational inertia $j=$ 800 kg·m^2. From Fig. 7, it is seen that with the increase of the axial velocity, the vibration behavior is transformed gradually from the hardening type into the softening type. When ${v}_{0}\le$ 10×10^–3 m/s, the vibration frequencies increase with increasing amplitude, and the degree of hardening increases with decreasing axial velocity. When ${v}_{0}\ge$ 14×10^–3 m/s, the curves show a decreasing trend; namely, the vibration behavior is of the softening type. That is, an excessively large or small axial velocity will cause the amplitude to greatly influence the frequency.

Fig. 6. Relationship between torsional stiffness and frequency

Fig. 7. Amplitude-frequency response with different axial velocities

Fig. 8 demonstrates the amplitude-frequency responses under different strip tensions: 4×10^3 N, 6×10^3 N, 8×10^3 N and 10×10^3 N. The torsional stiffness $k=$ 4×10^4 N·m/rad, the rotational inertia $j=$ 800 kg·m^2, and the axial velocity ${v}_{0}=$ 0.01 m/s. In Fig. 8, the vibration frequency increases with the increase of the amplitude, and the vibration behavior stays consistent. It can also be seen that the change of vibration frequency is small and stable at lower amplitudes. However, at the amplitude ${\phi }_{0}=$ 0.006 there is an intersection; after that, the vibration frequency increases rapidly with the amplitude. That is to say, a smaller strip tension results in a larger variation range of the amplitude-frequency response. Based on the mechanics of vibration, the smaller the tensile force, the more unstable the system, which makes the strip vibration quite strong.

Fig. 8. Amplitude-frequency response with different strip tensions

Fig. 9. Amplitude-frequency response with different torsional stiffness

Fig.
9 displays the amplitude-frequency responses for different torsional stiffnesses: 2×10^4 N·m/rad, 4×10^4 N·m/rad, 6×10^4 N·m/rad and 8×10^4 N·m/rad. The rotational inertia is $j=$ 800 kg·m^2, the strip tension ${P}_{0}=$ 8×10^3 N, and the axial velocity ${v}_{0}=$ 0.01 m/s. Fig. 9 shows that as the torsional stiffness increases, the vibration behavior gradually changes from a positive to a negative correlation. When $k\le$ 4×10^4 N·m/rad, the frequency increases with the amplitude, i.e., the correlation is positive. As the torsional stiffness increases further, the amplitude decreases and the correlation becomes negative. According to the above analysis, an unsuitable torsional stiffness leads to inconsistent motion between the rollers and the strip, and hence to torsional vibration of the main drive system. Consequently, the torsional stiffness should be selected appropriately in practical design to reduce rolling instability.

5. Conclusions

In this paper, the strip vibration coupled with the torsional vibration of the main drive system of a rolling mill is investigated. The influences of the axial velocity, the strip tension and the torsional stiffness of the drive system on the vibration frequency of the strip are analyzed, and all of these influencing factors are related to the rotational inertia of the roller. A larger rotational inertia of the roller gives rise to a greater changing trend of the frequency and leads to a more pronounced vibration of the strip. Moreover, the axial velocity, the strip tension and the torsional stiffness of the drive system also strongly influence the amplitude-frequency characteristic of the strip vibration. The amplitude-frequency response curves increase consistently for the different strip tensions, and a larger tension results in a smaller increasing gradient.
In contrast, the amplitude-frequency characteristic of the strip vibration transforms from the hardening type into the softening type as the axial velocity or the torsional stiffness increases. Therefore, the analysis in this paper shows the importance of choosing appropriate values of the axial velocity, strip tension, torsional stiffness and rotational inertia for the control and optimization of strip vibration in rolling mills.
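The hardening versus softening amplitude-frequency behavior described above can be illustrated with a generic backbone-curve model ω(a) = ω₀(1 + κa²), where κ > 0 gives a hardening response and κ < 0 a softening one. This is only an illustrative sketch, not the paper's derived model; `omega0` and `kappa` are hypothetical parameters.

```python
def backbone_frequency(a, omega0, kappa):
    # Generic nonlinear backbone curve omega(a) = omega0 * (1 + kappa * a**2):
    # kappa > 0 models a hardening response (frequency grows with amplitude),
    # kappa < 0 a softening one (frequency falls with amplitude).
    return omega0 * (1.0 + kappa * a ** 2)
```

For a fixed ω₀, sweeping the amplitude `a` with positive and negative κ reproduces the two qualitative trends the paper reports for small and large axial velocities.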
About this article

Keywords: mechanical vibrations and applications, strip vibration, main drive system, torsional vibration, continuous-discrete coupled vibration model, modified iteration method.

This research was supported in part by the Natural Science Foundation of Hebei Province (E2017203115), in part by the Key Project of Science and Technology of Hebei Higher School (No. ZD2015077), and in part by the Doctor Foundation of Yanshan University (No. B992).

Copyright © 2017 JVE International Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Merkle Patricia Trie

The state of Ethereum (the totality of all accounts, balances, and smart contracts) is encoded into a special version of the data structure known generally in computer science as a Merkle Tree. This structure is useful for many applications in cryptography because it creates a verifiable relationship between all the individual pieces of data entangled in the tree, resulting in a single root value that can be used to prove things about the data. Ethereum's data structure is a 'modified Merkle-Patricia Trie', named so because it borrows some features of PATRICIA (the Practical Algorithm To Retrieve Information Coded in Alphanumeric), and because it is designed for efficient data retrieval of items that comprise the Ethereum state. A Merkle-Patricia trie is deterministic and cryptographically verifiable: the only way to generate a state root is by computing it from each individual piece of the state, and two states that are identical can be easily proven so by comparing the root hash and the hashes that led to it (a Merkle proof). Conversely, there is no way to create two different states with the same root hash, and any attempt to modify state with different values will result in a different state root hash. Theoretically, this structure provides the 'holy grail' of O(log(n)) efficiency for inserts, lookups and deletes. In the near future, Ethereum plans to migrate to a Verkle Tree structure, which will open up many new possibilities for future protocol improvements. To better understand this page, it would be helpful to have basic knowledge of hashes, Merkle trees, tries and serialization. This article begins with a description of a basic radix tree, then gradually introduces the modifications necessary for Ethereum's more optimized data structure.
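The "single root value" idea can be seen in a plain binary Merkle tree before adding any trie machinery. The sketch below uses sha256; Ethereum's trie uses keccak256 and the node structure described below, so this is only a warm-up illustration.

```python
import hashlib

def h(data: bytes) -> bytes:
    # One hash step; sha256 stands in for keccak256 here.
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Plain binary Merkle tree over raw leaf values: hash each leaf,
    # then repeatedly hash adjacent pairs until one digest remains.
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node when odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Changing any leaf changes every digest on the path to the root, which is exactly the property the state root relies on.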
Basic radix tries

In a basic radix trie, every node looks as follows:

    [i_0, i_1 ... i_n, value]

Where i_0 ... i_n represent the symbols of the alphabet (often binary or hex), value is the terminal value at the node, and the values in the i_0, i_1 ... i_n slots are either NULL or pointers to (in our case, hashes of) other nodes. This forms a basic (key, value) store. Say you wanted to use a radix tree data structure for persisting an order over a set of key value pairs. To find the value currently mapped to the key dog in the trie, you would first convert dog into letters of the alphabet (giving 64 6f 67), and then descend the trie following that path until you find the value. That is, you start by looking up the root hash in a flat key/value DB to find the root node of the trie. It is represented as an array of keys pointing to other nodes. You would use the value at index 6 as a key and look it up in the flat key/value DB to get the node one level down. Then pick index 4 to look up the next value, then pick index 6, and so on, until, once you have followed the path root -> 6 -> 4 -> 6 -> 15 -> 6 -> 7, you look up the value of the node and return the result.

There is a difference between looking something up in the 'trie' and in the underlying flat key/value 'DB'. They both define key/value arrangements, but the underlying DB can do a traditional one-step lookup of a key. Looking up a key in the trie requires multiple underlying DB lookups to get to the final value described above. Let's refer to the latter as a path to eliminate ambiguity.
The update and delete operations for radix tries can be defined as follows:

    def update(node, path, value):
        curnode = db.get(node) if node else [NULL] * 17
        newnode = curnode.copy()
        if path == '':
            newnode[-1] = value
        else:
            newindex = update(curnode[path[0]], path[1:], value)
            newnode[path[0]] = newindex
        db.put(hash(newnode), newnode)
        return hash(newnode)

    def delete(node, path):
        if node is NULL:
            return NULL
        else:
            curnode = db.get(node)
            newnode = curnode.copy()
            if path == '':
                newnode[-1] = NULL
            else:
                newindex = delete(curnode[path[0]], path[1:])
                newnode[path[0]] = newindex
            if all(x is NULL for x in newnode):
                return NULL
            else:
                db.put(hash(newnode), newnode)
                return hash(newnode)

A "Merkle" Radix tree is built by linking nodes using deterministically-generated cryptographic hash digests. This content-addressing (in the key/value DB, key == keccak256(rlp(value))) provides a cryptographic integrity guarantee of the stored data. If the root hash of a given trie is publicly known, then anyone with access to the underlying leaf data can construct a proof that the trie includes a given value at a specific path by providing the hashes of each node joining a specific value to the tree root. It is impossible for an attacker to provide a proof of a (path, value) pair that does not exist, since the root hash is ultimately based on all hashes below it. Any underlying modification would change the root hash. You can think of the hash as a compressed representation of structural information about the data, secured by the pre-image protection of the hashing function.

We'll refer to an atomic unit of a radix tree (e.g. a single hex character, or 4 bit binary number) as a "nibble". While traversing a path one nibble at a time, as described above, nodes can maximally refer to 16 children but include a value element. We, hence, represent them as an array of length 17. We call these 17-element arrays "branch nodes".
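The content-addressed storage just described can be sketched end to end in a few lines. This toy version substitutes sha256 over a JSON encoding for keccak256(rlp(...)) and keeps the flat key/value DB as a Python dict; the names `db`, `H` and `put` are illustrative, not Ethereum APIs.

```python
import hashlib
import json

NULL = None   # empty node marker
db = {}       # flat key/value store: digest -> node

def H(node):
    # Stand-in for keccak256(rlp(node)): content-address a node by
    # hashing a deterministic JSON encoding of it.
    return hashlib.sha256(json.dumps(node).encode()).hexdigest()

def put(node):
    key = H(node)
    db[key] = node
    return key

def update(node, path, value):
    # path is a list of nibbles (0..15); returns the new root digest.
    curnode = list(db[node]) if node else [NULL] * 17
    if not path:
        curnode[-1] = value
    else:
        curnode[path[0]] = update(curnode[path[0]], path[1:], value)
    return put(curnode)

def get(node, path):
    if node is NULL:
        return NULL
    curnode = db[node]
    if not path:
        return curnode[-1]
    return get(curnode[path[0]], path[1:])

root = update(NULL, [6, 4, 6, 15, 6, 7], 'puppy')  # path = nibbles of "dog"
```

Every `update` writes fresh nodes and returns a new root digest, so old roots keep resolving to the old state, which is the persistence property the real trie exploits.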
Merkle Patricia Trie

Radix tries have one major limitation: they are inefficient. If you want to store one (path, value) binding where the path, as in Ethereum, is 64 characters long (the number of nibbles in bytes32), we will need over a kilobyte of extra space to store one level per character, and each lookup or delete will take the full 64 steps. The Patricia trie introduced in the following solves this issue.

A node in a Merkle Patricia trie is one of the following:

1. NULL (represented as the empty string)
2. branch: a 17-item node [ v0 ... v15, vt ]
3. leaf: a 2-item node [ encodedPath, value ]
4. extension: a 2-item node [ encodedPath, key ]

With 64 character paths it is inevitable that after traversing the first few layers of the trie, you will reach a node where no divergent path exists for at least part of the way down. To avoid having to create up to 15 sparse NULL nodes along the path, we shortcut the descent by setting up an extension node of the form [ encodedPath, key ], where encodedPath contains the "partial path" to skip ahead (using a compact encoding described below), and the key is for the next DB lookup. For a leaf node, which can be marked by a flag in the first nibble of the encodedPath, the path encodes all of the prior nodes' path fragments and we can look up the value directly.

This optimization, however, introduces ambiguity. When traversing paths in nibbles, we may end up with an odd number of nibbles to traverse, but because all data is stored in bytes format, it is not possible to differentiate between, for instance, the nibble 1 and the nibbles 01 (both must be stored as <01>). To specify odd length, the partial path is prefixed with a flag.

Specification: Compact encoding of hex sequence with optional terminator

The flagging of both odd vs. even remaining partial path length and leaf vs. extension node as described above resides in the first nibble of the partial path of any 2-item node.
They result in the following:

    hex char    bits    |    node type partial    path length
       0        0000    |    extension            even
       1        0001    |    extension            odd
       2        0010    |    terminating (leaf)   even
       3        0011    |    terminating (leaf)   odd

For even remaining path length (0 or 2), another 0 "padding" nibble will always follow.

    def compact_encode(hexarray):
        term = 1 if hexarray[-1] == 16 else 0
        if term:
            hexarray = hexarray[:-1]
        oddlen = len(hexarray) % 2
        flags = 2 * term + oddlen
        if oddlen:
            hexarray = [flags] + hexarray
        else:
            hexarray = [flags] + [0] + hexarray
        # hexarray now has an even length whose first nibble is the flags.
        o = ''
        for i in range(0, len(hexarray), 2):
            o += chr(16 * hexarray[i] + hexarray[i + 1])
        return o

Examples:

    > [ 1, 2, 3, 4, 5, ...]
    '11 23 45'
    > [ 0, 1, 2, 3, 4, 5, ...]
    '00 01 23 45'
    > [ 0, f, 1, c, b, 8, 10]
    '20 0f 1c b8'
    > [ f, 1, c, b, 8, 10]
    '3f 1c b8'

Here is the extended code for getting a node in the Merkle Patricia trie:

    def get_helper(node, path):
        if path == []:
            return node
        if node == '':
            return ''
        curnode = rlp.decode(node if len(node) < 32 else db.get(node))
        if len(curnode) == 2:
            (k2, v2) = curnode
            k2 = compact_decode(k2)
            if k2 == path[:len(k2)]:
                return get_helper(v2, path[len(k2):])
            else:
                return ''
        elif len(curnode) == 17:
            return get_helper(curnode[path[0]], path[1:])

    def get(node, path):
        path2 = []
        for i in range(len(path)):
            path2.append(ord(path[i]) // 16)
            path2.append(ord(path[i]) % 16)
        path2.append(16)
        return get_helper(node, path2)

Example Trie

Suppose we want a trie containing four path/value pairs ('do', 'verb'), ('dog', 'puppy'), ('doge', 'coins'), ('horse', 'stallion'). First, we convert both paths and values to bytes.
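The getter above calls compact_decode, which this page does not list. A minimal sketch of the inverse of compact_encode, under the same flag conventions and operating on the same one-character-per-byte strings, might look like:

```python
def compact_decode(s):
    # Unpack each character into two nibbles, then strip the flag
    # nibble (and, for even lengths, the padding nibble after it).
    nibbles = []
    for ch in s:
        nibbles.append(ord(ch) // 16)
        nibbles.append(ord(ch) % 16)
    flags = nibbles[0]
    out = nibbles[1:] if flags % 2 == 1 else nibbles[2:]
    if flags >= 2:
        out.append(16)  # re-append the terminator that marks a leaf path
    return out
```

Running it against the encoded examples above recovers the original nibble arrays.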
Below, actual byte representations for paths are denoted by <>, although values are still shown as strings, denoted by '', for easier comprehension (they, too, would actually be bytes):

    <64 6f>          : 'verb'
    <64 6f 67>       : 'puppy'
    <64 6f 67 65>    : 'coins'
    <68 6f 72 73 65> : 'stallion'

Now, we build such a trie with the following key/value pairs in the underlying DB:

    rootHash: [ <16>, hashA ]
    hashA:    [ <>, <>, <>, <>, hashB, <>, <>, <>, [ <20 6f 72 73 65>, 'stallion' ], <>, <>, <>, <>, <>, <>, <>, <> ]
    hashB:    [ <00 6f>, hashC ]
    hashC:    [ <>, <>, <>, <>, <>, <>, hashD, <>, <>, <>, <>, <>, <>, <>, <>, <>, 'verb' ]
    hashD:    [ <17>, [ <>, <>, <>, <>, <>, <>, [ <35>, 'coins' ], <>, <>, <>, <>, <>, <>, <>, <>, <>, 'puppy' ] ]

When one node is referenced inside another node, what is included is H(rlp.encode(node)), where H(x) = keccak256(x) if len(x) >= 32 else x, and rlp.encode is the RLP encoding function. Note that when updating a trie, one needs to store the key/value pair (keccak256(x), x) in a persistent lookup table if the newly-created node has length >= 32. However, if the node is shorter than that, one does not need to store anything, since the function f(x) = x is reversible.

Tries in Ethereum

All of the merkle tries in Ethereum's execution layer use a Merkle Patricia Trie. A block header contains the roots of three of these tries:

1. stateRoot
2. transactionsRoot
3. receiptsRoot

State Trie

There is one global state trie, and it is updated every time a client processes a block. In it, a path is always keccak256(ethereumAddress) and a value is always rlp(ethereumAccount). More specifically, an ethereum account is a 4-item array of [nonce, balance, storageRoot, codeHash]. At this point, it's worth noting that this storageRoot is the root of another patricia trie:

Storage Trie

The storage trie is where all contract data lives. There is a separate storage trie for each account.
To retrieve values at specific storage positions at a given address, the storage address, the integer position of the stored data in the storage, and the block ID are required. These can then be passed as arguments to eth_getStorageAt, defined in the JSON-RPC API, e.g. to retrieve the data in storage slot 0 for address 0x295a70b2de5e3953354a6a8344e616ed314d7251:

    curl -X POST --data '{"jsonrpc":"2.0", "method": "eth_getStorageAt", "params": ["0x295a70b2de5e3953354a6a8344e616ed314d7251", "0x0", "latest"], "id": 1}' localhost:8545

Retrieving other elements in storage is slightly more involved because the position in the storage trie must first be calculated. The position is calculated as the keccak256 hash of the address and the storage position, both left-padded with zeros to a length of 32 bytes. For example, the position for the data in storage slot 1 for address 0x391694e7e0b0cce554cb130d723a9d27458f9298 is:

    keccak256(decodeHex("000000000000000000000000391694e7e0b0cce554cb130d723a9d27458f9298" + "0000000000000000000000000000000000000000000000000000000000000001"))

In a Geth console, this can be calculated as follows:

    > var key = "000000000000000000000000391694e7e0b0cce554cb130d723a9d27458f9298" + "0000000000000000000000000000000000000000000000000000000000000001"
    > web3.sha3(key, {"encoding": "hex"})

The path is therefore keccak256(<6661e9d6d8b923d5bbaab1b96e1dd51ff6ea2a93520fdc9eb75d059238b8c5e9>). This can now be used to retrieve the data from the storage trie as before:

    curl -X POST --data '{"jsonrpc":"2.0", "method": "eth_getStorageAt", "params": ["0x295a70b2de5e3953354a6a8344e616ed314d7251", "0x6661e9d6d8b923d5bbaab1b96e1dd51ff6ea2a93520fdc9eb75d059238b8c5e9", "latest"], "id": 1}' localhost:8545

Note: The storageRoot for an Ethereum account is empty by default if it's not a contract account.

Transactions Trie

There is a separate transactions trie for every block, again storing (key, value) pairs.
A path here is rlp(transactionIndex), which represents the key that corresponds to a value determined by:

    if legacyTx:
        value = rlp(tx)
    else:
        value = TxType | encode(tx)

More information on this can be found in the EIP 2718 documentation.

Receipts Trie

Every block has its own Receipts trie. A path here is rlp(transactionIndex). transactionIndex is its index within the block it was included in. The receipts trie is never updated. Similar to the Transactions trie, there are current and legacy receipts. To query a specific receipt in the Receipts trie, the index of the transaction in its block, the receipt payload and the transaction type are required. The returned receipt can be of type Receipt, which is defined as the concatenation of TransactionType and ReceiptPayload, or it can be of type LegacyReceipt, which is defined as rlp([status, cumulativeGasUsed, logsBloom, logs]). More information on this can be found in the EIP 2718 documentation.
Kinetic Energy MDCAT MCQs with Answers - Youth For Pakistan

Welcome to the Kinetic Energy MDCAT MCQs with Answers. In this post, we have shared Kinetic Energy multiple-choice questions and answers for PMC MDCAT 2024. Each question in MDCAT Physics offers a chance to enhance your knowledge of kinetic energy in this MDCAT online test.

Kinetic Energy MDCAT MCQs Test Preparations

1. The kinetic energy of an object depends on: a) Its mass and the square of its velocity b) Its weight and height c) The force applied and the distance traveled d) The time taken and the displacement. Answer: a) Its mass and the square of its velocity
2. If the velocity of an object is doubled, its kinetic energy becomes: a) Four times greater b) Twice as much c) Half as much d) Unchanged. Answer: a) Four times greater
3. The kinetic energy of a 2 kg object moving at 3 m/s is: a) 9 Joules b) 18 Joules c) 6 Joules d) 12 Joules. Answer: a) 9 Joules
4. The kinetic energy of an object at rest is: a) Zero b) Equal to its potential energy c) Equal to the work done on it d) Equal to its mass. Answer: a) Zero
5. Kinetic energy is a type of: a) Mechanical energy b) Thermal energy c) Chemical energy d) Electrical energy. Answer: a) Mechanical energy
6. The unit of kinetic energy is: a) Joule b) Watt c) Newton d) Meter. Answer: a) Joule
7. The kinetic energy of an object with a mass of 4 kg and a velocity of 5 m/s is: a) 50 Joules b) 25 Joules c) 100 Joules d) 20 Joules. Answer: a) 50 Joules
8. If the mass of an object is halved while its velocity remains constant, its kinetic energy will be: a) Halved b) Doubled c) Unchanged d) Quartered. Answer: a) Halved
9. The kinetic energy of an object is increased by a factor of 9 when its velocity is: a) Tripled b) Doubled c) Quadrupled d) Increased by 50%. Answer: a) Tripled
10. The work done on an object is equal to its change in: a) Kinetic energy b) Gravitational potential energy c) Thermal energy d) Elastic potential energy. Answer: a) Kinetic energy
11. The kinetic energy of a 1 kg object moving at 10 m/s is: a) 50 Joules b) 100 Joules c) 10 Joules d) 5 Joules. Answer: a) 50 Joules
12. Which of the following is true about kinetic energy? a) It is always positive b) It can be negative c) It depends on the object's temperature d) It depends on the object's age. Answer: a) It is always positive
13. Kinetic energy is directly proportional to the: a) Square of the velocity b) Cube of the velocity c) Mass of the object d) Time of motion. Answer: a) Square of the velocity
14. The change in kinetic energy of an object is equal to: a) The work done on the object b) The change in its potential energy c) The force applied times the distance d) The power output of the object. Answer: a) The work done on the object
15. An object moving with a constant velocity has: a) Constant kinetic energy b) Increasing kinetic energy c) Decreasing kinetic energy d) No kinetic energy. Answer: a) Constant kinetic energy
16. The kinetic energy of a car moving at 20 m/s is: a) Four times the kinetic energy of the car moving at 10 m/s b) Half the kinetic energy of the car moving at 40 m/s c) Twice the kinetic energy of the car moving at 10 m/s d) Equal to the kinetic energy of the car moving at 30 m/s. Answer: a) Four times the kinetic energy of the car moving at 10 m/s
17. If the velocity of an object is tripled, its kinetic energy will increase by: a) Nine times b) Six times c) Three times d) Twelve times. Answer: a) Nine times
18. The work done to accelerate an object from rest to a velocity v is equal to its: a) Kinetic energy b) Potential energy c) Thermal energy d) Elastic energy. Answer: a) Kinetic energy
19. If two objects have the same mass, the one with higher velocity will have: a) More kinetic energy b) Less kinetic energy c) The same kinetic energy d) Zero kinetic energy. Answer: a) More kinetic energy
20. The kinetic energy of a moving object is zero when its velocity is: a) Zero b) Half the maximum velocity c) Double the maximum velocity d) The maximum velocity. Answer: a) Zero
21. The kinetic energy of an object is calculated using: a) Its mass and velocity b) Its mass and height c) The force applied and the distance d) The temperature and volume. Answer: a) Its mass and velocity
22. A 5 kg object moving at 2 m/s has a kinetic energy of: a) 10 Joules b) 20 Joules c) 5 Joules d) 50 Joules. Answer: a) 10 Joules
23. The kinetic energy of an object increases if: a) Its velocity increases b) Its mass decreases c) It moves at a constant speed d) The temperature decreases. Answer: a) Its velocity increases
24. The kinetic energy of a 3 kg object moving at 4 m/s is: a) 24 Joules b) 12 Joules c) 6 Joules d) 48 Joules. Answer: a) 24 Joules
25. The work-energy theorem states that: a) The work done on an object is equal to its change in kinetic energy b) Work is equal to potential energy c) The energy of an object is conserved d) Kinetic energy is independent of mass. Answer: a) The work done on an object is equal to its change in kinetic energy
26. An object with higher kinetic energy will: a) Have higher momentum b) Have lower momentum c) Have zero momentum d) Have equal momentum as another object with lower kinetic energy. Answer: a) Have higher momentum
27. The kinetic energy of a body is directly proportional to: a) The square of its velocity b) The mass of the body c) The height of the body d) The density of the body. Answer: a) The square of its velocity
28. The kinetic energy of an object can be increased by: a) Increasing its velocity b) Decreasing its mass c) Increasing its height d) Reducing the work done on it. Answer: a) Increasing its velocity
29. The kinetic energy of an object is calculated using the mass and: a) The square of its velocity b) The square root of its velocity c) The product of its velocity and acceleration d) Its gravitational force. Answer: a) The square of its velocity
30. In a frictionless environment, the total mechanical energy of an object is: a) The sum of its kinetic and potential energy b) Equal to its kinetic energy only c) Equal to its potential energy only d) Zero. Answer: a) The sum of its kinetic and potential energy
31. The kinetic energy of a body of mass 2 kg moving at 3 m/s is: a) 9 Joules b) 18 Joules c) 6 Joules d) 12 Joules. Answer: a) 9 Joules
32. An object's kinetic energy is directly proportional to: a) The square of its speed b) Its density c) The time it has been in motion d) Its volume. Answer: a) The square of its speed
33. If an object's velocity is halved, its kinetic energy will be: a) Quartered b) Doubled c) Halved d) Unchanged. Answer: a) Quartered
34. The energy associated with an object's motion is called: a) Kinetic energy b) Gravitational potential energy c) Elastic potential energy d) Thermal energy. Answer: a) Kinetic energy
35. A car traveling at 30 m/s has a kinetic energy of: a) Four times that of a car traveling at 15 m/s b) Twice that of a car traveling at 15 m/s c) Equal to a car traveling at 15 m/s d) Half that of a car traveling at 15 m/s. Answer: a) Four times that of a car traveling at 15 m/s
36. The work done to stop a moving object is equal to: a) Its kinetic energy b) Its potential energy c) The work done on it to accelerate d) The energy converted to heat. Answer: a) Its kinetic energy
37. The kinetic energy of an object of mass 10 kg moving at 4 m/s is: a) 80 Joules b) 40 Joules c) 20 Joules d) 10 Joules. Answer: a) 80 Joules
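Every numerical question above follows KE = ½mv²; a few lines of Python reproduce all of the quoted values.

```python
def kinetic_energy(mass_kg, velocity_ms):
    # KE = 1/2 * m * v**2, result in joules.
    return 0.5 * mass_kg * velocity_ms ** 2

# Values quoted in the quiz:
print(kinetic_energy(2, 3))   # the 2 kg object at 3 m/s
print(kinetic_energy(4, 5))   # the 4 kg object at 5 m/s
print(kinetic_energy(1, 10))  # the 1 kg object at 10 m/s
```

The quadratic dependence on velocity also explains the scaling questions: doubling v multiplies KE by 4, tripling it by 9.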
Homotopic curve shortening and the affine curve-shortening flow

We define and study a discrete process that generalizes the convex-layer decomposition of a planar point set. Our process, which we call homotopic curve shortening (HCS), starts with a closed curve (which might self-intersect) in the presence of a set P ⊂ R^2 of point obstacles, and evolves in discrete steps, where each step consists of (1) taking shortcuts around the obstacles, and (2) reducing the curve to its shortest homotopic equivalent. We find experimentally that, if the initial curve is held fixed and P is chosen to be either a very fine regular grid or a uniformly random point set, then HCS behaves at the limit like the affine curve-shortening flow (ACSF). This connection between HCS and ACSF generalizes the link between "grid peeling" and the ACSF observed by Eppstein et al. (2017), which applied only to convex curves, and which was studied only for regular grids. We prove that HCS satisfies some properties analogous to those of ACSF: HCS is invariant under affine transformations, preserves convexity, and does not increase the total absolute curvature. Furthermore, the number of self-intersections of a curve, or intersections between two curves (appropriately defined), does not increase. Finally, if the initial curve is simple, then the number of inflection points (appropriately defined) does not increase.

Publication series: Leibniz International Proceedings in Informatics (LIPIcs), Volume 164. ISSN (Print): 1868-8969.
Conference: 36th International Symposium on Computational Geometry, SoCG 2020, Zurich, Switzerland, 23/06/20 to 26/06/20.

• Affine curve-shortening flow
• Convex-layer decomposition
• Integer grid
• Shortest homotopic path
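HCS generalizes the convex-layer ("onion peeling") decomposition mentioned in the abstract. For a planar point set, that decomposition can be computed directly by repeatedly peeling off the convex hull; the sketch below uses Andrew's monotone chain for each hull and is only an illustration of the classical decomposition, not of the paper's HCS procedure.

```python
def convex_hull(points):
    # Andrew's monotone chain; returns strict hull vertices in CCW order.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def convex_layers(points):
    # Repeatedly peel off the convex hull until no points remain.
    layers = []
    remaining = list(set(points))
    while remaining:
        hull = convex_hull(remaining)
        layers.append(hull)
        remaining = [p for p in remaining if p not in hull]
    return layers
```

On a 3×3 integer grid this produces three layers: the four corners, the four edge midpoints, and the center point.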
Polygon-Constrained Motion Planning Problems

We consider the following class of polygon-constrained motion planning problems: given a set of k centrally controlled mobile agents (say pebbles) initially sitting on the vertices of an n-vertex simple polygon p, we study how to plan their vertex-to-vertex motion in order to reach with a minimum (either maximum or total) movement (either in terms of number of hops or Euclidean distance) a final placement enjoying a given requirement. In particular, we focus on final configurations aiming at establishing some sort of visual connectivity among the pebbles, which in turn allows for wireless and optical intercommunication. Therefore, after analyzing the notable (and computationally tractable) case of gathering the pebbles at a single vertex (i.e., the so-called rendez-vous), we face the problems induced by the requirement that pebbles have eventually to be placed at: (i) a set of vertices that form a connected subgraph of the visibility graph induced by p, say g(p) (connectivity), and (ii) a set of vertices that form a clique of g(p) (clique-connectivity). We will show that these two problems are actually hard to approximate, even for the seemingly simpler case in which the hop distance is considered.

Original language: English
Title of host publication: Proceedings of the 9th International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics (ALGOSENSORS)
Publisher: Springer Verlag
Pages: 67-82
Number of pages: 16
Publication status: Published - 2013
Externally published: Yes
Publication series: Lecture Notes in Computer Science, Volume 8243
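The tractable rendez-vous variant under hop distance can be brute-forced on small instances: run a BFS from each pebble over the visibility graph and pick the vertex minimizing the total number of hops. This is an illustrative sketch, not the paper's algorithm; `adj` is a hypothetical adjacency-list encoding of the visibility graph g(p).

```python
from collections import deque

def bfs_dist(adj, src):
    # Hop distance from src to every vertex reachable in the graph.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def best_rendezvous(adj, pebbles):
    # Try every vertex as the gathering point and minimize total hops.
    dists = {p: bfs_dist(adj, p) for p in set(pebbles)}
    return min(adj, key=lambda v: sum(dists[p][v] for p in pebbles))
```

This runs in O(k·(n + m) + n·k) time, which is why the rendez-vous case is easy; the hardness results in the paper concern the connectivity and clique-connectivity targets instead.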
ctbcon.f

subroutine ctbcon (NORM, UPLO, DIAG, N, KD, AB, LDAB, RCOND, WORK, RWORK, INFO)

CTBCON estimates the reciprocal of the condition number of a triangular band matrix A, in either the 1-norm or the infinity-norm. The norm of A is computed and an estimate is obtained for norm(inv(A)), then the reciprocal of the condition number is computed as
RCOND = 1 / ( norm(A) * norm(inv(A)) ).

Arguments:
[in] NORM is CHARACTER*1. Specifies whether the 1-norm condition number or the infinity-norm condition number is required: = '1' or 'O': 1-norm; = 'I': infinity-norm.
[in] UPLO is CHARACTER*1. = 'U': A is upper triangular; = 'L': A is lower triangular.
[in] DIAG is CHARACTER*1. = 'N': A is non-unit triangular; = 'U': A is unit triangular.
[in] N is INTEGER. The order of the matrix A. N >= 0.
[in] KD is INTEGER. The number of superdiagonals or subdiagonals of the triangular band matrix A. KD >= 0.
[in] AB is COMPLEX array, dimension (LDAB,N). The upper or lower triangular band matrix A, stored in the first kd+1 rows of the array. The j-th column of A is stored in the j-th column of the array AB as follows: if UPLO = 'U', AB(kd+1+i-j,j) = A(i,j) for max(1,j-kd)<=i<=j; if UPLO = 'L', AB(1+i-j,j) = A(i,j) for j<=i<=min(n,j+kd). If DIAG = 'U', the diagonal elements of A are not referenced and are assumed to be 1.
[in] LDAB is INTEGER. The leading dimension of the array AB. LDAB >= KD+1.
[out] RCOND is REAL. The reciprocal of the condition number of the matrix A, computed as RCOND = 1/(norm(A) * norm(inv(A))).
[out] WORK is COMPLEX array, dimension (2*N).
[out] RWORK is REAL array, dimension (N).
[out] INFO is INTEGER. = 0: successful exit; < 0: if INFO = -i, the i-th argument had an illegal value.

Univ. of Tennessee, Univ. of California Berkeley, Univ. of Colorado Denver, NAG Ltd.
November 2011
Definition at line 143 of file ctbcon.f.
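To make the reported quantity concrete, here is a naive Python sketch of our own, not LAPACK's algorithm: CTBCON estimates norm(inv(A)) without ever forming the inverse, whereas this toy computes the inverse directly for a tiny dense upper-triangular complex matrix and then forms RCOND from the two 1-norms.

```python
def norm1(M):
    # 1-norm of a square matrix: maximum absolute column sum.
    n = len(M)
    return max(sum(abs(M[i][j]) for i in range(n)) for j in range(n))

def inv_upper_triangular(A):
    # Invert an upper-triangular matrix column by column via back substitution.
    n = len(A)
    X = [[0j] * n for _ in range(n)]
    for col in range(n):
        e = [1 + 0j if i == col else 0j for i in range(n)]
        for i in range(n - 1, -1, -1):
            s = e[i] - sum(A[i][k] * X[k][col] for k in range(i + 1, n))
            X[i][col] = s / A[i][i]
    return X

# A small complex upper-triangular example (Python complex stands in for COMPLEX).
A = [[2 + 0j, 1 + 1j, 0j],
     [0j,     3 + 0j, 2 - 1j],
     [0j,     0j,     4 + 0j]]

# The quantity CTBCON reports: RCOND = 1 / (norm(A) * norm(inv(A))).
rcond = 1.0 / (norm1(A) * norm1(inv_upper_triangular(A)))
```

A well-conditioned matrix gives RCOND near 1; values near 0 signal near-singularity. The real routine works on the band storage AB and only estimates norm(inv(A)), which is what keeps it cheap for large banded systems.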
The SQL AVG() Function Explained With Examples

The SQL AVG() function calculates the average value of a numeric column in a table. It is commonly used to find the average of a set of values, such as prices, scores, or quantities. Here's an overview of the SQL AVG() function:

SELECT AVG(column_name) AS average_value
FROM table_name;

column_name: The name of the numeric column for which you want to calculate the average.
table_name: The name of the table containing the column.

Example: Suppose you have a table named sales with a column named amount, and you want to find the average amount of sales:

SELECT AVG(amount) AS average_sales
FROM sales;

Result: The AVG() function returns a single value, which is the average of the values in the specified column. If the table has no rows, or if every value in the column is NULL, the function returns NULL; individual NULL values are simply ignored.

Aggregate Function: AVG() is an aggregate function in SQL, which means it operates on a set of rows and returns a single result. It calculates the average value across all rows that meet the conditions specified in the WHERE clause (if present).

Data Type: The data type of the result returned by AVG() is typically the same as the data type of the column being averaged. For example, if the column is of type INT, the result will also be an INT. However, in some cases, the result may be automatically cast to a larger data type to avoid loss of precision.

AVG() is commonly used in statistical analysis, reporting, and data exploration to calculate the mean value of a dataset. It can be combined with other SQL clauses, such as GROUP BY, WHERE, and HAVING, to perform more complex calculations or filter the data before averaging. Overall, the SQL AVG() function is a powerful tool for calculating the average value of numeric data in a table, making it easier to analyze and interpret numeric values in large datasets.

SQL Server AVG() function: ALL vs. DISTINCT

In SQL Server, the AVG() function calculates the average value of a numeric column. The difference between using ALL and DISTINCT with AVG() lies in how duplicates are handled and, consequently, in what the calculated average represents:

When ALL is used with AVG(), it includes all values, including duplicates, in the calculation of the average. It is the default behavior of the AVG() function if neither ALL nor DISTINCT is specified. If there are duplicate values in the column, each occurrence is counted separately in the average calculation.

SELECT AVG(ALL column_name) AS average_value
FROM table_name;

When DISTINCT is used with AVG(), it only considers distinct values in the column for the average calculation. It eliminates duplicate values from the calculation, ensuring that each distinct value contributes only once to the average.

SELECT AVG(DISTINCT column_name) AS average_value
FROM table_name;

Use ALL when you want to include all values in the average calculation, including duplicates. This is useful when each occurrence of a value should contribute to the average independently. Use DISTINCT when you want to calculate the average based on unique values only, excluding duplicates. This is useful when you're interested in the average value across distinct entities or when you want to eliminate redundancy in the calculation. In summary, choose between ALL and DISTINCT based on whether you want to include or exclude duplicates from the average calculation, respectively.

SQL Server AVG() with GROUP BY example

Let's create a scenario with two tables: products and sales. The products table contains information about different products, including their IDs and names. The sales table records sales transactions, including the product ID, quantity sold, and the sale amount.
We'll then use the AVG() function with GROUP BY to calculate the average sale amount for each product. Here's the T-SQL code to create the tables and insert sample data:

-- Create the products table
CREATE TABLE products (
    product_id INT PRIMARY KEY,
    product_name VARCHAR(100)
);

-- Insert sample data into the products table
INSERT INTO products (product_id, product_name) VALUES
(1, 'Product A'),
(2, 'Product B'),
(3, 'Product C');

-- Create the sales table
CREATE TABLE sales (
    sale_id INT PRIMARY KEY,
    product_id INT,
    quantity_sold INT,
    sale_amount DECIMAL(10, 2)
);

-- Insert sample data into the sales table
INSERT INTO sales (sale_id, product_id, quantity_sold, sale_amount) VALUES
(1, 1, 10, 100.00),
(2, 1, 5, 50.00),
(3, 2, 8, 120.00),
(4, 2, 12, 180.00),
(5, 3, 15, 200.00);

Now, let's use the AVG() function with GROUP BY to calculate the average sale amount for each product:

SELECT p.product_id, p.product_name, AVG(s.sale_amount) AS avg_sale_amount
FROM products p
JOIN sales s ON p.product_id = s.product_id
GROUP BY p.product_id, p.product_name;

product_id | product_name | avg_sale_amount
1          | Product A    | 75.0000
2          | Product B    | 150.0000
3          | Product C    | 200.0000

Each row represents a product, and avg_sale_amount shows the average sale amount for that product. The result is calculated by averaging the sale amounts for each product using the AVG() function along with GROUP BY to group the sales data by product.

AVG() With a DISTINCT Clause

Let's create a scenario with a table named students that contains information about students and their scores in different subjects. We'll then use the AVG() function with a DISTINCT clause to calculate the average over the distinct score values.
Here's the T-SQL code to create the table and insert sample data:

-- Create the students table
CREATE TABLE students (
    student_id INT PRIMARY KEY,
    student_name VARCHAR(100),
    subject VARCHAR(50),
    score INT
);

-- Insert sample data into the students table
INSERT INTO students (student_id, student_name, subject, score) VALUES
(1, 'Alice', 'Math', 90),
(2, 'Bob', 'Science', 85),
(3, 'Charlie', 'Math', 95),
(4, 'David', 'English', 80),
(5, 'Eve', 'Science', 90),
(6, 'Frank', 'Math', 85),
(7, 'Grace', 'English', 75),
(8, 'Hannah', 'Science', 88),
(9, 'Ian', 'Math', 92),
(10, 'Jack', 'English', 78);

Now, let's use the AVG() function with a DISTINCT clause to calculate the average over the distinct score values:

SELECT AVG(DISTINCT score) AS average_score
FROM students;

The AVG() function calculates the average of the score column, and the DISTINCT clause ensures that each distinct value of score is counted only once, so the duplicated scores 90 and 85 each contribute a single time. The exact mean of the eight distinct scores is 85.375, not 85.8 (85.8 is the average over all ten rows, i.e., what AVG(ALL score) returns). Also note that because score is declared as INT, SQL Server performs integer arithmetic and returns 85; cast the column to a decimal type to see the fractional result.

We can use the AVG() function with a CASE expression to calculate the average score for each subject. Here's how you can do it:

SELECT subject,
    AVG(CASE WHEN subject = 'Math' THEN score ELSE NULL END) AS avg_math_score,
    AVG(CASE WHEN subject = 'Science' THEN score ELSE NULL END) AS avg_science_score,
    AVG(CASE WHEN subject = 'English' THEN score ELSE NULL END) AS avg_english_score
FROM students
GROUP BY subject;

subject | avg_math_score | avg_science_score | avg_english_score
Math    | 90.5000        | NULL              | NULL
Science | NULL           | 87.6667           | NULL
English | NULL           | NULL              | 77.6667

(The fractional values shown assume score has been cast to a decimal type.) We use a CASE expression within the AVG() function to conditionally calculate the average score for each subject. The CASE expression checks the subject column.
If the row's subject matches the subject named in the CASE branch ('Math', 'Science', or 'English'), its score is included in that average; otherwise the expression yields NULL, which AVG() ignores. The GROUP BY clause groups the results by the subject column, allowing us to calculate the average score for each subject separately. The output displays the average score for each subject; if there are no scores for a particular subject in a given column, the average is shown as NULL.
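The ALL/DISTINCT and GROUP BY behavior described above can be checked end-to-end with Python's built-in sqlite3 module (a convenience assumption: unlike SQL Server, SQLite's AVG() always returns a floating-point value, so the fractional results appear without any casting):

```python
import sqlite3

# In-memory database with the same students data as the T-SQL example.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE students (student_id INTEGER PRIMARY KEY, "
            "student_name TEXT, subject TEXT, score INTEGER)")
cur.executemany(
    "INSERT INTO students VALUES (?, ?, ?, ?)",
    [(1, 'Alice', 'Math', 90), (2, 'Bob', 'Science', 85),
     (3, 'Charlie', 'Math', 95), (4, 'David', 'English', 80),
     (5, 'Eve', 'Science', 90), (6, 'Frank', 'Math', 85),
     (7, 'Grace', 'English', 75), (8, 'Hannah', 'Science', 88),
     (9, 'Ian', 'Math', 92), (10, 'Jack', 'English', 78)])

# ALL (the default) averages every row; DISTINCT collapses duplicate scores.
all_avg = cur.execute("SELECT AVG(score) FROM students").fetchone()[0]
distinct_avg = cur.execute(
    "SELECT AVG(DISTINCT score) FROM students").fetchone()[0]

# Per-subject averages via GROUP BY.
per_subject = dict(cur.execute(
    "SELECT subject, AVG(score) FROM students GROUP BY subject").fetchall())
```

Here all_avg is 85.8 (ten rows, duplicates counted), distinct_avg is 85.375 (eight distinct scores), and per_subject maps 'Math' to 90.5, 'Science' to about 87.67, and 'English' to about 77.67, matching the corrected figures in the article.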
Online Math Classes for Kids | Learn Math Online @ BYJU'S FutureSchool

By learning math online through our online math classes, students start at their independent learning level, which allows them to build math confidence right from the start. And most importantly, our students receive instruction, encouragement, and a strong mathematical foundation with the support of a knowledgeable teacher. What makes an online math course effective is very similar to what makes an offline/in-person class effective.
• For both an online and an offline math class to be effective as well as enjoyable for students, the classes first and foremost have to be led by qualified and engaged teachers. There is evidence that students who participate in online classes with human instruction and support are more likely to learn and reach achievement goals than those in self-paced online classes.
• One of the benefits that online math courses often have over in-person instruction is the teacher-to-student ratio. Highly effective and high-ranking online math courses often provide a very low student-to-teacher ratio, such as 1:1 or up to 1:4, where students can collaborate as well as learn from others, whereas many in-person classes have more than 25 students to one teacher.
• Different students have different learning needs. Traditional classroom teaching typically involves a teacher addressing a class of 25 to as many as 40 students. In this model, individual attention to students does not happen.
• Technology is a big advantage many online math courses have over traditional classroom teaching. Online math classes use technology, such as fun apps and video games, to motivate students to learn math concepts. In addition, online math students also learn using paper worksheets, which give them the skill-and-drill and metacognitive practice they need.
Many in-person classes must rely mostly on worksheets because of the large number of students in each class, along with a lack of technology available at in-person schools.
• With online math courses, students receive immediate feedback on their homework, which allows them to gain confidence moving forward. The quick feedback also provides positive opportunities for students to learn and grow from their mistakes.
Every learning scenario, for example, online versus in-person, has many benefits. As a parent, it's important to evaluate your child individually to make the best decision. One reason to try online learning is that providing a child a variety of learning experiences (like ours, which include real-life applications, interactive activities, and challenges) helps parents to fully understand what a child's preferred learning style is and how to best support them. Also, by exposing them to a plethora of opportunities, they will utilize different areas of their brain while developing skills like perseverance and grit. Here is why you should choose BYJU'S FutureSchool's online math courses:
• Learning math online is convenient and safe. Students can take these courses from any location and schedule them at times that are convenient for their families.
• With online math classes, students can start from where they are rather than at a level that is prescribed by a grade-level curriculum dictated by one-size-fits-all standards.
• Online math courses allow children to learn in an environment that is safe and away from harmful peer relationships, like bullying.
• Often in-person teaching and learning are interrupted because of behavioral issues in a classroom. Students who learn math online won't be interrupted by other students.
• Students who struggle with focusing in an in-person setting can benefit from learning math online. With a 1:1 or a 1:4 ratio, teachers can help redirect each student if they become disengaged or distracted.
This means more learning is happening each day in a shorter amount of time.
• Often children hesitate to ask questions in class. Learning mathematics online gives children the freedom to ask questions, which helps deepen their understanding of mathematical principles and strengthens their self-confidence.
• One of the benefits of online math courses is that they often use cutting-edge technology. This allows students to learn and practice on software that has been created with the best educational practices in mind. And, because many children love technology, it is often more engaging and motivating than traditional instruction and learning tools.
• When children take math courses online, parents are frequently notified of their child's progress, giving them the ability to support and encourage their child so that they don't fall behind or become frustrated.
• Students receive instant feedback on their work, which allows them to build confidence, to celebrate often, and to learn from their mistakes. They can focus more on learning the "how" and "why" of math rather than solely on whether they got the right answer.
• Online math classes allow for more frequent assessment of children's progress. This will help them to learn and move forward more quickly. It also provides teachers with the feedback they need to match their instruction with the individual child.
EquMath: Math Lessons

EquMath is a resource for math lessons from Algebra to Differential Equations! It is a study site for high school students, college students, and other learners. Do your homework, refresh your memory, prepare for a test, and more. Browse more than 3,700 math pages filled with short and easy-to-understand explanations. Choose one of the following subject areas: Algebra, Trigonometry, Calculus, Differential Equations, Complex Variables, Matrix Algebra, or Mathematical Tables. You can also easily find topics ranging from simplifying fractions to the cubic formula, from the quadratic equation to Fourier series, and from the sine function to systems of differential equations.
Our Geometry Tutors in Ohio

Struggling with Geometry? As one of the oldest sciences, geometry has sent many students seeking a little extra help. Our geometry tutors will come directly to you and explain all you need to know about lengths, areas, and volumes. Astronomy, Euclidean geometry, manifolds, Riemannian geometry, and general relativity: we have a tutor for that! Improve your geometry test grades and overall performance in class with the help of our experienced geometry tutors. Whether you need differential geometry, fractal geometry, or algebraic geometry, we have tutors ready to help you now.

What is it?

Euclidean/Plane Geometry is the study of flat space. Between every pair of points there is a unique line segment which is the shortest curve between those two points. These line segments can be extended to lines. Lines are infinitely long in both directions, and for every pair of points on the line, the segment of the line between them is the shortest curve that can be drawn between them. All of these ideas can be described by drawing on a flat piece of paper. From the laws of Euclidean geometry, we get the famous Pythagorean Theorem.

Non-Euclidean Geometry is any geometry that is different from Euclidean geometry. It is a consistent system of definitions, assumptions, and proofs that describe such objects as points, lines, and planes. The two most common non-Euclidean geometries are spherical geometry and hyperbolic geometry. The essential difference between Euclidean geometry and these two non-Euclidean geometries is the nature of parallel lines: in Euclidean geometry, given a point and a line, there is exactly one line through the point that is in the same plane as the given line and never intersects it. In spherical geometry there are no such lines. In hyperbolic geometry there are at least two distinct lines that pass through the point and are parallel to (in the same plane as and do not intersect) the given line.
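The spherical case lends itself to a quick computation. The sketch below is our own illustration (not material from the tutoring service): on a sphere, the shortest curve between two points is an arc of a great circle, and its length is the sphere's radius times the central angle between the points, here obtained from the spherical law of cosines.

```python
import math

def great_circle_distance(lat1, lon1, lat2, lon2, radius=1.0):
    # Minimal geodesic on a sphere: radius * central angle between the points.
    # Angles are in radians; the spherical law of cosines gives the angle.
    cos_angle = (math.sin(lat1) * math.sin(lat2)
                 + math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))
    # Clamp to [-1, 1] to guard against floating-point round-off.
    return radius * math.acos(max(-1.0, min(1.0, cos_angle)))

# A quarter of the way around the equator of a unit sphere is pi/2.
quarter = great_circle_distance(0.0, 0.0, 0.0, math.pi / 2)
```

For points that are very close together, the law-of-cosines form loses precision; the haversine formula is the usual numerically robust alternative.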
Riemannian Geometry is the study of curved surfaces and higher-dimensional spaces. For example, you might have a cylinder or a sphere, and your goal is to find the shortest curve between any pair of points on such a curved surface, also known as a minimal geodesic. Or you may look at the universe as a three-dimensional space and attempt to find the distance between or around several planets.

Students can succeed in any geometry class. From elementary school to college, math can be a difficult subject for many students. We make it easier and more understandable for them by providing expert tutors in every mathematics class, including geometry. We will be happy to provide you with all the information you need to choose the tutor that is best suited for the geometry class you or your student is taking. You will review their educational background and experience to know that the geometry tutors we offer are experts in their field.

Our Tutoring Service

We believe that the most effective tutoring occurs when you have the undivided attention of a highly qualified and experienced tutor by your side. Our exceptional tutors are not only experienced and educated, but are experts in their field and passionate about teaching others. We will always provide you with a tutor specifically qualified and experienced in the subject matter you need. And for your peace of mind, we conduct a nationwide criminal background check, sexual predator check, and social security verification on every single tutor we offer you. Before you invest money and time into tutoring sessions, be sure you have selected the right tutor for you.

Here Are Some Of Our Geometry Tutor Profiles

Timothy T
Teaching Style
When I tutor a student, I seek first to understand the student and how he/she thinks. I find it is very important to have good rapport and communication with the student so I understand how he/she views the subject and the difficulties of it.
Next I try to make "conceptual bridges" from what they know to what they are having difficulty understanding. This process usually teaches me to see the subject from a new point of view. I try to achieve a fine balance between guiding and directing the student's thoughts on the topic and following the student's own line of thinking about the subject. The student needs to learn to have confidence in his own thoughts on the subject and in his own ability to master it.

Experience Summary
Over the past four years, I have tutored high school and middle school students in math, algebra, calculus, chemistry, SAT Math, and general study skills. My preference is to tutor math, algebra, calculus, physics, physical science, chemistry, and programming. Math is the subject for which I have the greatest passion. I also participate in the homeschooling of four of my children (13, 11, 8, 6). I have mentored my 13-year-old son in Algebra I & II, Chemistry, Elementary Math, and Middle-school Physical Science, and taught elementary math to my 11-, 8-, and 6-year-olds. Additionally, I read and review history lessons with my kids. I completed my MS in Electrical Engineering in 2006 from The University of Texas at Arlington and my BS in Electrical Engineering and a BA in Philosophy from Rice University. I have recent experience as a student, having completed Cellular Biology II at St. Petersburg College in Fall 2011.

Type | Subject | Issued-By | Level | Year
Degree | Electrical Engineering | Univ. of Texas - Arlington | Masters | 2006
Certification | Design for Six Sigma | Honeywell International | DFSS - Green Belt | 2003
Degree | Electrical Engineering | Rice University | BSEE | 1989
Degree | Philosophy | Rice University | BA | 1989

Robert R
Teaching Style
I've always been interested in the application of math and science to the solution of real-world problems. This led me to a very satisfying career in engineering. Therefore my approach to teaching is very application oriented.
I like to relate the subject to problems that the students will encounter in real-life situations. I've generally only worked with older students: high school or college age, or mature adults who have returned to school to get advanced training or learn a new trade.

Experience Summary
I've always been interested in math and science, especially in their application to solving real-world problems. This led me to a very satisfying career in engineering. I have a BS in electrical engineering from General Motors Institute (now Kettering University) and an MS in electrical engineering from Marquette University. I am a registered professional engineer in Illinois. I have over 30 years of experience in the application, development, and sales/field support of electrical/electronic controls for industrial, aerospace, and automotive applications. I'm currently doing consulting work at Hamilton-Sundstrand, Delta Power Company, and MTE Hydraulics in Rockford. I also have college teaching and industrial training experience. I have taught several courses at Rock Valley College in Electronic Technology, mathematics, and the Continuing Education area. I've done industrial technical training for Sundstrand, Barber Colman, and others. I've also taught math courses at Rasmussen College and Ellis College (online course), and I've been certified as an adjunct instructor for Embry-Riddle Aeronautical University for math and physics courses. I've tutored my own sons in home study programs. I'm currently tutoring a home-schooled student in math using Saxon Math. I hope to do more teaching/tutoring in the future as I transition into retirement.

Type | Subject | Issued-By | Level | Year
Degree | Electrical Engineering | Marquette University | MS | 1971
Degree | Electrical Engineering | GMI (Kettering University) | BS | 1971

Boris B
Teaching Style
I believe that excellence in teaching comes from the teacher's adaptation to a student's specific needs.
In particular, for some students, visualization may be the essential component for understanding a certain concept or idea in mathematics, and furnishing examples and proofs with pictures enables the student to learn the concept, whereas for other students it may be the algebraic equation that allows them to see a certain idea. In the first couple of sessions, I probe for the specific needs of the student and am then able to connect with that student so that he/she feels comfortable with the subject. I am a patient teacher and believe that all students are able to grasp the subject. I teach in a disciplined manner, so that the topic presented is coherent and follows a logical flow. I make sure that the theoretical concepts are internalized in a concrete example for the student. Above all, I carry a positive disposition wherever I go and encourage students to enjoy math.

Experience Summary
I am a graduate of GaTech, with BA degrees in Math and Psychology. Aside from my two majors I have minors in Philosophy and Cognitive Science. During my years of high school and college, I have tutored students in mathematics, be it in calculus or statistics, or the math portion of the SAT. At the moment I work part-time at a Korean after-school program, Daekyo America Inc., as a math instructor for both high school and middle school. I have participated in various mathematical competitions and have won numerous awards, including the Grand Prize in USAMTS (United States Mathematical Talent Search).

Type | Subject | Issued-By | Level | Year
Degree | Applied Mathematics | GaTech | Bachelors | 2004
Degree | Psychology | GaTech | BA | 2004

Maria M
Teaching Style
I love tutoring and consider myself to be effective at it. I approach it with enthusiasm. I apply theory to practical applications in my career and life. My approach is one of ease.
I consider mathematics to be easy, as long as you accept what I call "the rules of the game"; i.e., there are certain principles that have to be accepted, and not questioned, and then everything else falls into place. I believe that all people have the capability of learning, and I love the opportunity to provide a positive experience to students.

Experience Summary
I have been tutoring for more than 5 years. My main focus is mathematics, but I also tutor students in Spanish I through IV. I have tutored Spanish, Algebra I, Algebra II, Geometry, Trigonometry, and Pre-Calculus. I have recently helped several students prepare for the SAT math test. I enjoy making a difference in the student's life.

Type | Subject | Issued-By | Level | Year
Other | Spanish | Native speaker | Fluent | Current
Certification | Project Management Studies | Project Management Institute | PMP Certification | 2006
Degree | Civil Engineering | California State University at Long Beach | MSCE | 1987
Other | Mathematics | El Camino College | AA | 1981

Vinod V
Teaching Style
The cornerstone of my teaching philosophy and personal teaching goals is to help students develop their own thinking skills. I believe all students should leave school armed with the ability to think for themselves, to think critically, and to think creatively. Understanding how people learn is one of the significant aspects of teaching. This is linked to their "knowledge" background and maturity. The key to teaching is to relate to the audience by starting from what they know and building upon it. As a teacher I am totally involved with the class, dedicated to my students, and 100% prepared to devote time and energy to their intellectual growth. Love for teaching evokes passion and dedication within me. I believe that the enthusiasm of a motivated teacher rubs off on his/her students, who derive the inspiration and encouragement that actuates their desire to learn. A good teacher should have sound fundamentals and command over the concepts.
Fundamentals are the foundation for mastering the subject; only teachers who are strong in fundamentals will be able to pass them on to their students. I believe that my strong command over the fundamentals will rub off on my students. I believe that the role of a teacher is that of a leader: you have to show the path, motivate, encourage, and lead by example. In short, my success lies in seeing my students succeed.

Experience Summary
My enthusiasm and love for education can be gauged from the fact that I pursued three Masters degrees in three distinct but related fields. One cannot pursue engineering as a profession without having an affinity for math and analysis. Math was a passion for me from my young days and still very much remains so. I have a thorough knowledge and understanding of math. Right from my school days I was involved in and loved to teach math. I invariably obtained A+ scores in whatever math test I took in my lifetime. For instance, my GRE math score was above 95% of test takers' scores. I have taught middle school, high school, and undergraduate students in Algebra, Geometry, Trigonometry, Quadratic Equations, Applied Probability, and Calculus.

Type | Subject | Issued-By | Level | Year
Degree | City Planning | Kansas State University | MRCP | 2002
Degree | Engineering | Anna University | ME | 2000
Degree | Business Administration | Loyola Institute of Business | PGDBA | 1998
Degree | Civil Engineering | Institution of Engineers | BE | 1994

Herbert H
Teaching Style
The teaching style I use is completely dependent on the particular student being taught. I'm a big fan of the adage 'Give a man a fish, and he will eat for a day. Teach a man to fish, and he will eat for the rest of his life.' My objective is to teach a child not only the material, but how they best learn, so that they can teach themselves. I enjoy using applicable references that are relevant to the child so that they will more easily grasp the concepts. I believe a personalized approach works best.
Experience Summary
I have earned a bachelor's degree in Computer Engineering with a software focus, along with a Sales Engineering minor. I was 3 credit hours away from both my Math and Business minors. In the last 5 years, I have tutored students in Basic Math, Algebra 1 and 2, Geometry, Calculus, Visual Basic, Physics, and Differential Equations. I have taught math for the past 4 years at the Professional Academies Magnet at Loften High School in the subjects of Algebra I, Algebra I Honors, Geometry, Geometry Honors, Algebra II, Algebra II Honors, Statistics, Mathematics for College Readiness, and Liberal Arts Math.

Type | Subject | Issued-By | Level | Year
Other | Mathematics | Professional Academies Magnet | 9-12 | 2007-current
Other | Tutor | Starke Church of God by Faith | 9-12 | 2005-2007
Degree | Computer Engineering | University of Florida | BA | 2004

Jennifer J
Teaching Style
I enjoy working with a student one-on-one. We know that every child develops on a different learning curve, which is why the tutoring process is so crucial. Working individually with a child can help them overcome some of the frustration and confusion that come along with learning in a crowded classroom. I have a great amount of patience and understanding to help the child develop his own learning style, and I guide him to work through the problems at his own pace to ensure that he truly understands the material. Learning is a step-by-step process, and I know that I can guide each child through the steps they need to take to complete any and all challenges they may face, not just in school, but in life.

Experience Summary
I began tutoring algebra when I was in high school, for my boss' kids in the back of the restaurant where I worked. I then went on to get my BA in mathematics at La Salle University, with a minor in Education. While in college, I continued tutoring my fellow students in math and chemistry for a peer tutoring program set up by the school.
In addition, I observed and student-taught at various schools in the Philadelphia area. I have knowledge of a wide range of mathematics, my specialties being algebra and calculus. Owing to my experience, I am also proficient in English and can assist with research paper writing.

Type | Subject | Issued By | Level | Year
Degree | Mathematics | La Salle University | Bachelor's | 2007

Justin J

Teaching Style

I am a very enthusiastic tutor and, as I stated in the previous section, believe in teaching in such a way that the student gains a true mastery of the given subject. In a sense, I believe in each student understanding a particular concept in their own unique way, consistent with their unique thinking processes. I strive to do this by relating a given concept analogously to something the student already fundamentally understands. I also create new problems to solve that probe the student's progress and comprehension, and I try to make these problems as realistic as possible so that they are interesting to the student.

Experience Summary

Being a Ph.D. candidate and appointed research assistant, I very much appreciate the importance of true comprehension of a subject. Not only do I hold two B.S. degrees, in Applied Mathematics and Chemical Engineering, but I also carry credits equivalent to a non-thesis Master's in Applied Mathematics and am one course shy of a minor in Chemistry. I have tutored groups of students for Sylvan Learning Centers, conducted one-on-one sessions with Educational Enrichment and independently, and prepared and delivered lectures for advanced undergraduate chemical engineering courses. I am capable of and experienced in tutoring all levels of mathematics, chemistry, chemical engineering, physics, and materials science.

Type | Subject | Issued By | Level | Year
Degree | Chemical Engineering | University of Florida | Ph.D. (in progress) | 2010
Degree | Chemical Engineering | NC State University | B.S. with Honors | 2005
Degree | Applied Mathematics | NC State University | B.S. with Honors | 2005

A Word From Previous Students and Parents

Julie G., Windermere, FL: I would highly recommend Will to other parents. He is efficient, knowledgeable, and caring, and has proved to be a real inspiration to Alexander. Thank you.

Judy L., Clermont, FL: Thought I would share Jean's grades with you. The last numbers are her grades at midterm this final nine weeks!! She has come up on five grades. I know you are a HUGE part. I want to thank you so much.

Jackie J., Chuluota, FL: We are very happy with Ronda and we would highly recommend her to anyone. Ronda has had prior teaching experience where Clay attends, and this has been helpful in that she understands how some of his teachers approach classroom objectives.
Putting Mathematics & Science Symbols on Web Pages in Node.js

There are many TeX packages for rendering mathematical and other scientific expressions on web pages, but MathJax is recommended: it is an open-source, JavaScript-based library that can easily be adopted in Node.js. We can link MathJax directly from a CDN, but I always prefer to keep a separate copy of MathJax on our own server.

To adopt MathJax on our Node.js server, we first install it with npm (the Node package manager). We use MathJax 3, as it has many advantages over MathJax v2. Rendering MathJax from a Node.js system is not a simple client-side task; we have to follow a few steps in our web app (server). After getting npm, we use it to install MathJax:

```shell
npm install mathjax@3
```

Note: alternatively, in a cPanel Node.js setup, we have to update the package.json file and add the mathjax entry to the "dependencies" section, like this:

package.json:

```json
{
  "name": "render-math",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "dependencies": {
    "express": "^4.17.1",
    "mathjax": "^3.1.2"
  },
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}
```

Then run npm install. This installs MathJax in the node_modules folder of the project. Now that MathJax is installed, follow these steps to render it on web pages served from our Node.js server.

1. Configure MathJax in a script tag within the head section of the HTML page:

```html
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta charset="utf-8">
<script>
window.MathJax = {
  tex: {
    inlineMath: [['$', '$'], ['\\(', '\\)']]
  },
  svg: {
    fontCache: 'global'
  }
};
</script>
```

2. Now load MathJax with:

```html
<script type="text/javascript" src="path/tex-chtml.js" id="MathJax-script" async></script>
```

This also goes inside the head section of the HTML file.
Here 'path' is the path of the tex-chtml.js file, which should be served by the Node.js server.

NOTE: when MathJax is installed through npm, its working components are stored inside the mathjax/es5 directory. So if we copy all the components of the mathjax/es5 directory into a folder under the base directory named mathjax, the tex-chtml.js file will be accessible from the "/mathjax/tex-chtml.js" location. In that case, the 'path' in the second script tag should be replaced by "/mathjax", so the src path becomes "/mathjax/tex-chtml.js".

3. Up to this point, the code required for rendering math and related symbols in our web page looks like this:

```html
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta charset="utf-8">
<script>
window.MathJax = {
  tex: {
    inlineMath: [['$', '$'], ['\\(', '\\)']]
  },
  svg: {
    fontCache: 'global'
  }
};
</script>
<script type="text/javascript" src="/mathjax/tex-chtml.js" id="MathJax-script" async></script>
```

The code above renders MathJax from our own copy on the server. It is now the task of the server/web app to serve the MathJax files to the web pages, so we need code like the following in the Node.js web app:

```javascript
app.get('/mathjax*', (req, resp) => {
  var path = req.url;
  fs.readFile(__dirname + path, function (err, data) {
    if (!err) {
      resp.writeHead(200, { 'Content-Type': 'text/javascript' });
      resp.end(data);
    } else {
      resp.end('Not found');
    }
  });
});
```

NOTE: it is worth mentioning that all the relevant MathJax components are kept inside the /mathjax directory under the base directory. We also used a wildcard character (*) after '/mathjax', so tex-chtml.js and any other component file can be accessed from this URL path.

Now restart Node.js and put a mathematical expression like \$x^3 + y^3\$ anywhere in the web content. It appears as $x^3 + y^3$. It is important that every expression containing mathematics or such symbols be written between two '$' characters, or be preceded by "\(" and followed by "\)".
All static content containing mathematical expressions will now be displayed. To render mathematical expressions in dynamic web pages, we have to use MathJax's typesetting calls. There are two kinds: synchronous and asynchronous. For the synchronous way, use MathJax.typeset(); it tells MathJax to look for any unprocessed mathematical expressions in the page and typeset them. For the asynchronous way, use MathJax.typesetPromise(). The following snippet shows the idea:

```javascript
function doMathJax(div, s) {
  // Inject the new (math-containing) markup, then ask MathJax to
  // typeset whatever is still unprocessed.
  div.innerHTML = s;
  setTimeout(async () => {
    await MathJax.typesetPromise();
  }, 0);
}
```

The doMathJax function renders the mathematical expressions inside the string 's' into the HTML element 'div'. We can also do it like this:

```javascript
MathJax.typesetPromise().then(() => {
  // modify the DOM here
}).catch((err) => console.log(err.message));
```
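Since dynamic content must carry the same delimiters as static content, it can help to wrap TeX snippets programmatically before injecting them. A tiny hypothetical helper (the name `texInline` is mine, not part of the MathJax API) using the \( ... \) delimiters configured earlier:

```javascript
// Wrap a TeX snippet in MathJax's inline delimiters so a later
// MathJax.typesetPromise() call will find and render it.
function texInline(tex) {
  return '\\(' + tex + '\\)';
}

// e.g. div.innerHTML = 'The sum ' + texInline('x^3 + y^3') + ' renders inline.';
```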
Multiplication Worksheets: 3 Digit by 2 Digit

Mathematics, and multiplication in particular, forms the foundation of numerous academic disciplines and real-world applications. Yet for many students, mastering multiplication can present a challenge. To address this hurdle, educators and parents have embraced a powerful tool: 3-digit by 2-digit multiplication worksheets.

Introduction to 3 Digit by 2 Digit Multiplication Worksheets

Platforms such as Liveworksheets transform traditional printable worksheets into self-correcting interactive exercises that students can complete online and send to the teacher. Grade 4 multiplication worksheets of this kind present column-form multiplication problems with 2-digit (10-99) and 3-digit (100-999) numbers.

The Value of Multiplication Practice

Understanding multiplication is essential, as it lays a solid foundation for more advanced mathematical concepts. 3-digit by 2-digit multiplication worksheets offer structured and targeted practice, promoting a deeper comprehension of this fundamental math operation.
Development of 3 Digit by 2 Digit Multiplication Worksheets

In a typical practice worksheet, students work on their three-digit by two-digit multiplication technique, solving a total of 15 problems using the standard algorithm and regrouping as needed. Designed for fourth graders, such worksheets reinforce foundational multiplication skills. Worksheet generators can be configured for 2-, 3-, or 4-digit multiplicands multiplied by 1-, 2-, or 3-digit multipliers, with 12 to 25 problems per sheet, making them appropriate from kindergarten through 5th grade. From conventional pen-and-paper exercises to interactive digital formats, these worksheets have evolved to suit diverse learning styles and preferences.

Types of 3 Digit by 2 Digit Multiplication Worksheets

Standard Multiplication Sheets: straightforward exercises focusing on multiplication tables, helping learners build a strong math base.

Word Problem Worksheets: real-life scenarios integrated into problems, developing critical thinking and application skills.

Timed Multiplication Drills: tests designed to improve speed and accuracy, aiding quick mental math.
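The standard algorithm these drills practice splits the two-digit multiplier into ones and tens, computes two partial products, and sums them. A quick JavaScript sketch of that arithmetic, purely for illustration:

```javascript
// Split the 2-digit multiplier into ones and tens, compute the two
// partial products of the standard written algorithm, and their sum.
function partialProducts(threeDigit, twoDigit) {
  const ones = twoDigit % 10;
  const tens = twoDigit - ones;
  const p1 = threeDigit * ones; // first row of the written algorithm
  const p2 = threeDigit * tens; // second row (shifted one place left)
  return { p1, p2, product: p1 + p2 };
}

// e.g. 452 × 36: p1 = 452 × 6 = 2712, p2 = 452 × 30 = 13560, product = 16272
```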
Advantages of Using 3 Digit by 2 Digit Multiplication Worksheets

There are several variants of each class of worksheet to allow for plenty of practice. Two-digit and three-digit multiplication worksheets gradually introduce long multiplication problems to third and fourth graders, and the printable PDFs are output in high resolution and include answer keys. In one common format, students solve 9 problems that involve multiplying a 3-digit number by a 2-digit number; each problem is set up vertically and provides ample space to complete the calculation, with an answer key included to make grading fast and easy.

Improved Mathematical Abilities: consistent practice hones multiplication proficiency, boosting overall math skills.

Enhanced Problem-Solving Abilities: word problems in worksheets develop analytical thinking and the application of strategies.

Self-Paced Learning Benefits: worksheets accommodate individual learning paces, promoting a comfortable and flexible learning environment.

How to Create Engaging 3 Digit by 2 Digit Multiplication Worksheets

Incorporating Visuals and Colors: lively visuals and colors capture attention, making worksheets visually appealing and engaging.

Including Real-Life Situations: connecting multiplication to everyday situations adds relevance and practicality to exercises.

Tailoring Worksheets to Different Skill Levels: customizing worksheets for varying proficiency levels ensures inclusive learning.

Interactive and Online Multiplication Resources

Digital Multiplication Tools and Games: technology-based resources offer interactive learning experiences, making multiplication engaging and enjoyable.

Interactive Websites and Apps: online platforms provide varied and accessible multiplication practice, supplementing conventional worksheets.
Personalizing Worksheets for Various Learning Styles

Visual Learners: visual aids and diagrams support comprehension for students inclined toward visual learning.

Auditory Learners: verbal multiplication problems or mnemonics cater to students who grasp concepts through auditory methods.

Kinesthetic Learners: hands-on tasks and manipulatives support kinesthetic learners in comprehending multiplication.

Tips for Effective Implementation in Learning

Consistency in Practice: regular practice reinforces multiplication skills, promoting retention and fluency.

Balancing Repetition and Variety: a mix of repetitive exercises and varied problem formats maintains interest and understanding.

Giving Constructive Feedback: feedback helps identify areas for improvement, encouraging ongoing progress.

Challenges in Multiplication Practice and Solutions

Motivation and Engagement Difficulties: monotonous drills can lead to disinterest; innovative approaches can reignite motivation.

Overcoming Fear of Mathematics: negative perceptions of math can impede progress; creating a positive learning environment is essential.

Impact of 3 Digit by 2 Digit Multiplication Worksheets on Academic Performance

Studies and Research Findings: research shows a positive correlation between consistent worksheet use and improved math performance.

Conclusion

3-digit by 2-digit multiplication worksheets are versatile tools that build mathematical proficiency in learners while accommodating diverse learning styles. From standard drills to interactive online resources, these worksheets not only improve multiplication skills but also foster critical thinking and problem-solving abilities.
Check out more 3-digit by 2-digit multiplication resources below.

Multiply in columns, 2-digit by 3-digit numbers (K5 Learning): these Grade 4 multiplication worksheets have column-form multiplication problems with 2-digit (10-99) and 3-digit (100-999) numbers, offered as a series of six worksheets plus more.

Multiplication Worksheets, 3 Digits Times 2 Digits: this 3-digit by 2-digit multiplication worksheet has ten vertical problems and one word problem for students to solve (example: 452 x 36), aimed at 4th through 6th grades. A companion activity has students solve 3-digit by 2-digit multiplication problems and then glue puzzle pieces in the correct places on a grid to reveal a pirate picture, also for 4th through 6th grades.
Frequently Asked Questions (FAQs)

Are 3-digit by 2-digit multiplication worksheets suitable for all age groups? Yes, worksheets can be tailored to different ages and skill levels, making them adaptable for a wide range of students.

How often should students practice with these worksheets? Consistent practice is key; regular sessions, ideally a few times a week, can produce significant improvement.

Can worksheets alone improve math skills? Worksheets are a valuable tool but should be supplemented with diverse learning methods for comprehensive skill development.

Are there online platforms offering free 3-digit by 2-digit multiplication worksheets? Yes, many educational websites offer free access to a wide variety of them.

How can parents support their children's multiplication practice at home? Encouraging regular practice, providing assistance, and creating a positive learning environment are all valuable steps.
Kinetic theory and irreversible thermodynamics of dense fluids subject to an external field

A formal theory of dense fluids subject to an external electric field is developed as an extension of the kinetic theory of dense fluids reported previously. The present formulation also refines and modifies the previous theory wherever deemed necessary, especially in the part related to renormalization. The extended Gibbs relation and a theory of irreversible thermodynamics are developed. A generalized Kirkwood integral equation for correlation functions and a generalized equation for local nonequilibrium chemical potentials are also derived for nonequilibrium mixtures.

Journal of Chemical Physics. Pub Date: July 1987.

Keywords: Conducting Fluids; Dense Plasmas; Electric Fields; Irreversible Processes; Kinetic Theory; Boltzmann Transport Equation; Gibbs Free Energy; Nonlinear Systems; Similarity Theorem; Thermodynamics and Statistical Physics
The DCOUNTA Function

Welcome back to our regular blog of Excel functions from A to Z. Today we look at the DCOUNTA function.

This function counts the cells that are not blank in a field (column) of records in a list or database that match the conditions you specify. The field argument is optional; if it is omitted, DCOUNTA counts all the records in the database that match the criteria.

The DCOUNTA function uses the following syntax:

DCOUNTA(database, [field], criteria)

The DCOUNTA function has the following arguments:

• database: the range of cells that makes up the list or database. A database is a list of related data in which rows of related information are records and columns of data are fields. The first row of the list contains labels for each column.

• field: indicates which column is used in the function. This argument is optional. Enter the column label enclosed in double quotation marks, e.g. "Age" or "Performance", or a number (without quotation marks) that represents the position of the column within the list, namely 1 for the first column, 2 for the second column, and so on.

• criteria: the range of cells that contains the conditions you specify.
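Outside Excel, the counting logic can be made concrete with a short sketch. This is an analogy only (the function name `dcounta` and the predicate-based criteria representation are my own, not an Excel API): criteria rows are OR-ed together, the conditions within one row are AND-ed, and blank values in the counted field are skipped, mirroring how a criteria range behaves in the examples further below.

```javascript
// Rough sketch of DCOUNTA: `database` is an array of record objects,
// `criteriaRows` is an array of rows, each mapping a column name to a
// predicate. A record is counted when it matches at least one criteria
// row and its `field` value is not blank (if `field` is omitted,
// every matching record is counted).
function dcounta(database, field, criteriaRows) {
  const notBlank = v => v !== undefined && v !== null && v !== '';
  return database.filter(record =>
    criteriaRows.some(row =>
      Object.entries(row).every(([col, test]) => test(record[col]))
    ) && (field === undefined || notBlank(record[field]))
  ).length;
}
```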
It should also be noted that:

• you can use any range for the criteria argument, as long as it includes at least one column label and at least one cell below the column label to specify the condition. For example, if the range G1:G2 contains the column label Revenue in G1 and the amount 10,000 in G2, you could define the range as Income and use that name as the criteria argument in database functions

• although the criteria range can be located anywhere on the worksheet, do not put it below the list. If you add more information to the list, the new information is added to the first row below the list; if that row is not blank, Excel cannot add the new information

• make sure the criteria range does not overlap the list

• to perform an operation on an entire column in a database, enter a blank line below the column labels in the criteria range. Please see the example below.

Criteria Examples

Typing an equals sign in a cell indicates that you want to enter a formula. To display text that includes an equals sign, surround the text and the equals sign with double quotation marks. You can also do this if you are entering an expression (a combination of formulas, operators, and text) and you want to display the equals sign instead of having Excel use it in a calculation. For an exact match, the criterion takes the form ="=entry", where entry is the text or value you want to search for.

Note that when filtering text data, Excel is not case sensitive; however, you can use a formula to perform a case-sensitive search (see below).

The following sections provide examples of complex criteria.

Multiple criteria in one column

Boolean logic: (Salesperson = "Tim" OR Salesperson = "Kathryn")

To find rows that meet multiple criteria for one column, type the criteria directly below each other in separate rows of the criteria range.
For example, a criteria range (B1:B3) with "Tim" and "Kathryn" in separate rows below the Salesperson label displays the rows of the data range that contain either "Tim" or "Kathryn" in the Salesperson column.

Multiple criteria in multiple columns where all criteria must be true

Boolean logic: (Service = "Audit" AND Sales > 1500)

To find rows that meet criteria in multiple columns where all the criteria must be true, type all the criteria in the same row of the criteria range. Here the criteria range (A1:C2) displays the rows that contain "Audit" in the Service column and a value greater than $1,500 in the Sales column.
Multiple criteria in multiple columns where any criterion can be true

Boolean logic: (Service = "Audit" OR Salesperson = "Kathryn")

To find rows that meet criteria in multiple columns where any criterion can be true, type the criteria in different rows of the criteria range. Here the criteria range (A1:B3) displays the rows that contain "Audit" in the Service column or "Kathryn" in the Salesperson column.

Multiple sets of criteria where each set includes criteria for multiple columns

Boolean logic: ((Salesperson = "Kathryn" AND Sales > 2000) OR (Salesperson = "Tim" AND Sales > 1500))

To find rows that meet multiple sets of criteria, where each set includes criteria for multiple columns, type each set of criteria in a separate row.
Here the criteria range (B1:C3) displays the rows that contain "Kathryn" in the Salesperson column and a value greater than $2,000 in the Sales column, as well as the rows that contain "Tim" in the Salesperson column and a value greater than $1,500 in the Sales column.

Multiple sets of criteria where each set includes criteria for one column

Boolean logic: ((Sales > 2000 AND Sales <= 3000) OR (Sales < 1500))

To find rows that meet multiple sets of criteria where each set includes criteria for one column, include multiple columns with the same column heading.
Here the criteria range (C1:D3), with two columns both labelled Sales, displays the rows that contain values between 2,000 and 3,000, as well as values less than 1,500, in the Sales column.

Criteria to find text values that share some characters but not others

To find text values that share some characters but not others, do one or more of the following:
• Type one or more characters without an equals sign (=) to find rows with a text value in a column that begins with those characters. For example, if you type the text Liam as a criterion, Excel finds "Liam" as well as any other value beginning with those characters.

• Use a wildcard character: ? matches any single character, * matches any sequence of characters, and ~ followed by ?, *, or ~ matches a literal question mark, asterisk, or tilde.

For example, a criteria range (A1:B3) can display the rows that begin with "Co" in the Service column, or the rows whose Salesperson value matches a wildcard pattern containing "i".
Using a calculated value as a criterion

You can use a calculated value that is the result of a formula as a criterion. Remember the following important points:

• the formula must evaluate to TRUE or FALSE

• because you are using a formula, enter the formula as you normally would; do not type an expression of the form ="=entry"

• do not use a column label as the criteria label; either leave the criteria label blank or use a label that is not a column label in the data range

• if you use a column label in the formula instead of a relative cell reference or a range name, Excel displays an error value such as #NAME? or #VALUE! in the cell that contains the criterion. You can ignore this error because it does not affect how the range is filtered

• the formula you use for the criteria must use a relative reference to refer to the corresponding cell in the first row of data (in the examples below, C6 and A6)

• all other references in the formula must be absolute references.

The following subsections provide specific examples of criteria created as the result of a formula.
Filtering for values greater than the average of all values in the data range write the criteria directly below each other in separate rows of the criteria range (write the criteria directly below each other in separate rows of the criteria range: write the criteria directly below each other in separate rows of the criteria range), write the criteria directly below each other in separate rows of the criteria range (D1: D2) displays rows that have a value in the Sales column greater than the average of all values (C6: write the criteria directly below each other in separate rows of the criteria range). in the formula, “C6“refers to the filtered column (C) from the first row of the data range (6). Filtering text using a case-sensitive search in the data range (write the criteria directly below each other in separate rows of the criteria range: write the criteria directly below each other in separate rows of the criteria range), write the criteria directly below each other in separate rows of the criteria range (D1: D2) shows rows containing “Audit” in the Service column using the EXACTLY function to perform a case-sensitive search ( write the criteria directly below each other in separate rows of the criteria range: A9). in the formula, “A6“refers to the filtered column (A) from the first row of the data range (6). Soon we will continue with our functions from A to Z of Excel. Keep checking: there is a new blog post every business day. You can find a full page of feature articles here.
{"url":"https://datapeaker.com/en/excel/la-funcion-dcounta/","timestamp":"2024-11-10T19:15:02Z","content_type":"text/html","content_length":"109993","record_id":"<urn:uuid:176129aa-abb5-4e8c-a0e6-550969c5cf4b>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00202.warc.gz"}
Technical papers: Optimization of Refrigeration Plant Operation. Engineering Approach, presented at the 2010 IIAR conference in San Diego; Operating Ammonia Refrigeration Systems at Peak Efficiency, presented at the 2013 IIR conference in Ohrid, Macedonia. What is the optimized suction pressure? It is common knowledge in the industry that raising suction pressure improves compressor efficiency. A typical improvement might be 1% to 2% per degree increase in saturated suction temperature. Some designers suggest gradually increasing the suction pressure when all space temperatures are satisfied and the evaporator coils are in low-load mode. However, this is right only for coils with variable-frequency drives (VFDs). The majority of modern refrigeration plants have single-speed evaporator fans. To operate the plant efficiently, the total power, not just the compressor power, must be minimized. The fans require an amount of power that is not negligible: the power required by the evaporator fans is perhaps 10-15% of the total full-load power. However, when those fans continue to draw a constant power while the compressor unloads, they may account for 30% or more of the total power. The fans not only use electricity; all of the energy used by their motors must also be removed from the space as refrigeration load. As a rule of thumb, for every 4 HP we add in the refrigerated space, we add 2.5 HP in the engine room in additional compressor and condenser motor requirements.
Our research shows that refrigeration plants with single-speed evaporator fans operate more efficiently at an optimized suction pressure regardless of the refrigeration load. Every plant has its own optimum T.D. (temperature difference). The major factor determining T.D. (the temperature difference between the coil temperature and the room air temperature) is the ratio between the fan motor power and the basic evaporator rating for the coil. From our research, the optimum T.D. can vary from 5°F to 20°F. In real life, T.D. can be increased by the following factors: 1. Suction pressure losses. 2. Frost on the coil. 3. Actual fan power usage. Real power usage can increase by up to 25% due to the increased density of cold air. 4. A static pressure penalty. During low refrigeration load, we suggest switching off several evaporators or letting them cycle. A properly chosen T.D. can significantly improve the efficiency of refrigeration plants, especially plants with a high ratio between fan power and basic evaporator rating.
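The tradeoff described above, that fan power both draws electricity and adds heat load the compressors must remove, can be illustrated with a small sketch. The plant sizes below (a 100 HP compressor at full load, 15 HP of fans) are assumed for illustration only; the 4 HP : 2.5 HP rule of thumb is the article's.

```python
# Illustrative sketch of total plant power, using the rule of thumb
# that every 4 HP of fan power added in the refrigerated space costs
# about 2.5 HP of extra compressor and condenser power.
def total_power(compressor_hp, fan_hp):
    # Fan motors both draw power and add heat load that the
    # compressors must remove (2.5 HP per 4 HP of fan power).
    heat_removal_hp = fan_hp * 2.5 / 4.0
    return compressor_hp + fan_hp + heat_removal_hp

# Hypothetical plant: 100 HP compressor at full load, 15 HP of fans.
full_load = total_power(100.0, 15.0)

# At part load the compressor unloads, but single-speed fans keep
# drawing the same 15 HP, so the fans dominate the total.
part_load_all_fans = total_power(30.0, 15.0)

# Cycling off two-thirds of the evaporators cuts fan power to 5 HP.
part_load_fans_off = total_power(30.0, 5.0)

print(full_load, part_load_all_fans, part_load_fans_off)
```

The point of the sketch is only that minimizing compressor power alone ignores the fan terms, which stay constant as the compressor unloads.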
{"url":"https://skenergy.ca/?6","timestamp":"2024-11-06T11:38:02Z","content_type":"application/xhtml+xml","content_length":"30218","record_id":"<urn:uuid:ac4a0fa1-2c52-4e3e-816e-7b45f77a86f9>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00422.warc.gz"}
T4T Cluster 2 Next Steps This resource is part of Tools4NCTeachers. The intended purpose of this document is to provide teachers with a tool to determine student understanding and suggest instructional moves that may help guide a student forward in their learning. It is not an exhaustive list of strategies. Download: KinderC2-NextSteps.docx
{"url":"https://goopennc.oercommons.org/authoring/588-t4t-cluster-2-next-steps/view","timestamp":"2024-11-12T03:50:55Z","content_type":"text/html","content_length":"63962","record_id":"<urn:uuid:82014486-f878-4f68-a4a5-2ef6e4c529ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00174.warc.gz"}
Performs advections of an arbitrary type of volume in a static velocity field. The advections are performed by means of various derivatives of Semi-Lagrangian integration, i.e. backwards tracking along the hyperbolic characteristics followed by interpolation. More... #include <openvdb/tools/VolumeAdvect.h> VolumeAdvection (const VelocityGridT &velGrid, InterrupterType *interrupter=nullptr) Constructor. More... virtual ~VolumeAdvection () int spatialOrder () const Return the spatial order of accuracy of the advection scheme. More... int temporalOrder () const Return the temporal order of accuracy of the advection scheme. More... void setIntegrator (Scheme::SemiLagrangian integrator) Set the integrator (see details in the table above) More... Scheme::SemiLagrangian getIntegrator () const Return the integrator (see details in the table above) More... void setLimiter (Scheme::Limiter limiter) Set the limiter (see details above) More... Scheme::Limiter getLimiter () const Return the limiter (see details above) More... bool isLimiterOn () const Return true if a limiter will be applied based on the current settings. More... size_t getGrainSize () const void setGrainSize (size_t grainsize) Set the grain-size used for multi-threading. More... int getSubSteps () const void setSubSteps (int substeps) Set the number of sub-steps per integration. More... double getMaxVelocity () const Return the maximum magnitude of the velocity in the advection velocity field defined during construction. More...
template<typename VolumeGridT > int getMaxDistance (const VolumeGridT &inGrid, double dt) const template<typename VolumeGridT , typename VolumeSamplerT > VolumeGridT::Ptr advect (const VolumeGridT &inGrid, double timeStep) template<typename VolumeGridT , typename MaskGridT , typename VolumeSamplerT > VolumeGridT::Ptr advect (const VolumeGridT &inGrid, const MaskGridT &mask, double timeStep) template<typename VelocityGridT = Vec3fGrid, bool StaggeredVelocity = false, typename InterrupterType = util::NullInterrupter> class openvdb::v12_0::tools::VolumeAdvection< VelocityGridT, StaggeredVelocity, InterrupterType > Performs advections of an arbitrary type of volume in a static velocity field. The advections are performed by means of various derivatives of Semi-Lagrangian integration, i.e. backwards tracking along the hyperbolic characteristics followed by interpolation. Optionally a limiter can be combined with the higher-order integration schemes MacCormack and BFECC. There are two types of limiters (CLAMP and REVERT) that suppress non-physical oscillations by means of either clamping or reverting to a first-order scheme when the function is not bounded by the cell values used for tri-linear interpolation. The supported integration schemes:
/// ================================================================
/// | Label | Accuracy | Integration Scheme    | Interpolations  |
/// |       |Time/Space|                       | velocity/volume |
/// ================================================================
/// | SEMI  |   1/1    | Semi-Lagrangian       |      1/1        |
/// | MID   |   2/1    | Mid-Point             |      2/1        |
/// | RK3   |   3/1    | 3rd Order Runge-Kutta |      3/1        |
/// | RK4   |   4/1    | 4th Order Runge-Kutta |      4/1        |
/// | MAC   |   2/2    | MacCormack            |      2/2        |
/// | BFECC |   2/2    | BFECC                 |      3/2        |
/// ================================================================
VolumeAdvection (const VelocityGridT &velGrid, InterrupterType *interrupter = nullptr) inline velGrid Velocity grid responsible for the (passive) advection.
interrupter Optional interrupter used to prematurely end computations. The velocity field is assumed to be constant for the duration of the advection. virtual ~VolumeAdvection () inline virtual VolumeGridT::Ptr advect (const VolumeGridT &inGrid, double timeStep) inline Returns a new grid that is the result of passive advection of all the active values in the input grid by timeStep. inGrid The input grid to be advected (unmodified). timeStep Time-step of the Runge-Kutta integrator. This method will advect all of the active values in the input inGrid. To achieve this a deep-copy is dilated to account for the material transport. This dilation step can be slow for large time steps dt or a velocity field with large magnitudes. If the VolumeSamplerT is of higher order than one (i.e. tri-linear interpolation) instabilities are known to occur. To suppress those, monotonicity constraints or flux-limiters need to be applied. RuntimeError if inGrid does not have uniform voxels. VolumeGridT::Ptr advect (const VolumeGridT &inGrid, const MaskGridT &mask, double timeStep) inline Returns a new grid that is the result of passive advection of the active values in inGrid that intersect the active values in mask. The time of the output grid is incremented by timeStep. inGrid The input grid to be advected (unmodified). mask The mask of active values defining the active voxels in inGrid on which to perform advection. Only if a value is active in both grids will it be modified. timeStep Time-step for a single Runge-Kutta integration step. This method will advect all of the active values in the input inGrid that intersect with the active values in mask. To achieve this a deep-copy is dilated to account for the material transport and finally cropped to the intersection with mask. The dilation step can be slow for large time steps dt or fast-moving velocity fields. If the VolumeSamplerT is of higher order than one (i.e. tri-linear interpolation) instabilities are known to occur.
To suppress those, monotonicity constraints or flux-limiters need to be applied. RuntimeError if inGrid is not aligned with mask or if its voxels are not uniform. size_t getGrainSize () const inline Return the grain-size used for multi-threading. A grainsize of 0 implies serial execution. Scheme::SemiLagrangian getIntegrator () const inline Return the integrator (see details in the table above). Scheme::Limiter getLimiter () const inline Return the limiter (see details above). int getMaxDistance (const VolumeGridT &inGrid, double dt) const inline Returns the maximum distance in voxel units of inGrid that a particle can travel in the time-step dt when advected in the velocity field defined during construction. This method is useful when dilating sparse volume grids to pad boundary regions. Excessive dilation can be computationally expensive, so use this method to prevent or warn against run-away dilation. RuntimeError if inGrid does not have uniform voxels. double getMaxVelocity () const inline Return the maximum magnitude of the velocity in the advection velocity field defined during construction. int getSubSteps () const inline Return the number of sub-steps per integration (always larger than or equal to 1). bool isLimiterOn () const inline Return true if a limiter will be applied based on the current settings. void setGrainSize (size_t grainsize) inline Set the grain-size used for multi-threading. A grainsize of 0 disables multi-threading. A small grainsize can degrade performance, both in terms of time and memory footprint! void setIntegrator (Scheme::SemiLagrangian integrator) inline Set the integrator (see details in the table above). void setLimiter (Scheme::Limiter limiter) inline Set the limiter (see details above). void setSubSteps (int substeps) inline Set the number of sub-steps per integration. The only reason to increase the sub-step count above its default value of one is to reduce the memory footprint due to significant dilation.
Values smaller than 1 will be clamped to 1! int spatialOrder ( ) const inline Return the spatial order of accuracy of the advection scheme. This is the optimal order in smooth regions. In non-smooth regions the flux-limiter will drop the order of accuracy to add numerical dissipation. int temporalOrder ( ) const inline Return the temporal order of accuracy of the advection scheme. This is the optimal order in smooth regions. In non-smooth regions the flux-limiter will drop the order of accuracy to add numerical dissipation.
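The backwards-tracking-plus-interpolation idea behind all of these schemes fits in a few lines. The following is a minimal 1-D NumPy sketch of the first-order SEMI scheme; it is illustrative only, is not OpenVDB code, and assumes a constant velocity on a periodic domain for simplicity.

```python
import numpy as np

def semi_lagrangian_step(q, u, dt, dx):
    """One first-order semi-Lagrangian step on a periodic 1-D grid."""
    n = len(q)
    x = np.arange(n) * dx
    # Backwards tracking along the (here constant) velocity field:
    # find where each grid point's material came from.
    x_back = x - u * dt
    # Interpolate the old field at the departure points.
    return np.interp(x_back, x, q, period=n * dx)

q = np.zeros(64)
q[10:20] = 1.0  # advect a square pulse
q1 = semi_lagrangian_step(q, u=1.0, dt=0.5, dx=1.0)
```

The first-order scheme is unconditionally stable but diffusive, which is exactly what the higher-order MacCormack and BFECC variants listed above try to correct, with a limiter guarding against the resulting oscillations.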
{"url":"https://www.openvdb.org/documentation/doxygen/classopenvdb_1_1v12__0_1_1tools_1_1VolumeAdvection.html","timestamp":"2024-11-13T08:36:39Z","content_type":"application/xhtml+xml","content_length":"37392","record_id":"<urn:uuid:571695eb-98c3-46a3-93a6-9d72d92e1498>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00680.warc.gz"}
Grid-Based Navigation Functions for Motion Planning 8.2.3 Grid-Based Navigation Functions for Motion Planning To consider feedback plans for continuous spaces, vector fields and other basic definitions from differential geometry will be needed. These will be covered in Section 8.3; however, before handling such complications, we first will describe how to use the ideas presented so far in Section 8.2 as a discrete approximation to feedback motion planning. Examples 8.1 and 8.2 have already defined feedback plans and navigation functions for 2D grids that contain obstacles. Imagine that this model is used to approximate a motion planning problem for which . Section 5.4.2 showed how to make a topological graph that approximates the motion planning problem with a grid of samples. The motions used in Example 8.1 correspond to the 1-neighborhood definition, (5.37). This idea was further refined in Section 7.7.1 to model approximate optimal motion planning by moving on a grid; see Formulation 7.4. By choosing the Manhattan motion model, as defined in Example 7.4, a grid with the same motions considered in Example 8.1 is produced. To construct a navigation function that may be useful in mobile robotics, a high-resolution (e.g., to points per axis) grid is usually required. In Section 5.4.2, only a few points per axis were needed because feedback was not assumed. It was possible in some instances to find a collision-free path by investigating only a few points per axis. During the execution of a feedback plan, it is assumed that the future states of the robot are not necessarily predictable. Wherever the robot may end up, the navigation function in combination with the local operator must produce the appropriate action. If the current state (or configuration) is approximated by a grid, then it is important to reduce the approximation error as much as possible. This is accomplished by setting the grid resolution high. 
In the feedback case, the grid can be viewed as ``covering'' the whole configuration space, whereas in Section 5.4.2 the grid only represented a topological graph of paths that cut across the space.^8.3 Steven M LaValle 2020-08-14
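The discrete navigation functions described above can be computed by a simple wavefront (breadth-first) pass over the grid. The sketch below is illustrative, not from the text; it uses the 1-neighborhood (Manhattan) motions of Example 8.1, assigns each free cell its step count to the goal, and the feedback plan at any cell is then to move to a neighbor with a smaller value.

```python
from collections import deque

def navigation_function(grid, goal):
    """Wavefront navigation function: 0 = free cell, 1 = obstacle."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    frontier = deque([goal])
    while frontier:
        r, c = frontier.popleft()
        # 1-neighborhood (Manhattan) motions.
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                frontier.append((nr, nc))
    return dist

# A 3x3 grid with one obstacle in the center; goal at the top-left.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
nav = navigation_function(grid, goal=(0, 0))
```

Wherever the robot ends up, descending the navigation function with the local operator reaches the goal, which is the feedback property the section emphasizes.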
{"url":"https://lavalle.pl/planning/node371.html","timestamp":"2024-11-03T07:46:26Z","content_type":"text/html","content_length":"7483","record_id":"<urn:uuid:b53f4ca3-4b64-4458-9dd6-7da78430c8b2>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00354.warc.gz"}
Application of Integrals Class 12 Notes Mathematics Chapter 8 Chapter at a Glance 1. Area Under Simple Curves (i) Let us find the area bounded by the curve y = f (x), the x-axis and the ordinates x = a and x = b. Consider the area under the curve as composed of a large number of thin vertical strips. Let us consider an arbitrary strip of height y and width dx. Area of this strip dA = y dx, where y = f (x). The total area A of the region between the x-axis, the ordinates x = a, x = b and the curve y = f (x) is the sum of the areas of the thin strips across the region PQMLP. (ii) To find the area of bounded regions, we may use the following algorithm. Step 1. Draw a rough sketch showing the region whose area is to be found. Step 2. Slice the area into horizontal or vertical strips as the case may be. Step 3. Consider an arbitrary strip approximately as a rectangle. Step 4. Find the area of the rectangle. If the rectangular strip is parallel to the y-axis then its width is taken as dx, and if it is parallel to the x-axis, then its width is taken as dy. In fig. 1, rectangle RLMQ has area = y dx, and in fig. 2 the area of the rectangle RLMQ = x dy. Step 5. Find the limits within which the rectangle can move. In fig. 1, the rectangle of area y dx can move between x = a and x = b; therefore the area of the region bounded by y = f(x), y = 0, x = a and x = b is given by \(A = \int_a^b y\,dx\). In fig. 2, the rectangle of area x dy can move between y = c and y = d; therefore the area of the region bounded by x = g(y), x = 0, y = c and y = d is given by \(A = \int_c^d x\,dy\). 2. Area between Two Curves (iii) Ellipse: \(\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1\) represents an ellipse with center at the origin, whose major axis is the x-axis and minor axis is the y-axis. The length of the major axis is 2a and the length of the minor axis is 2b. The ellipse intersects the x-axis at a and –a and intersects the y-axis at b and –b. (v) Graph of y = sin x (vi) Graph of y = cos x
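The strip construction in steps 4 and 5 translates directly into a numerical approximation: sum y·dx over many thin strips. A short Python sketch, using f(x) = x² on [0, 1] as an assumed example (exact area 1/3):

```python
# Approximate the area under y = f(x) between x = a and x = b as the
# sum of thin vertical strips of width dx and height y = f(x).
def strip_area(f, a, b, n):
    dx = (b - a) / n
    # dA = y*dx for each strip, evaluated at the strip's midpoint;
    # the total area is the sum over all n strips.
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

# Example: area under y = x**2 from 0 to 1 (exact value 1/3).
approx = strip_area(lambda x: x * x, 0.0, 1.0, 1000)
```

As n grows and dx shrinks, the strip sum approaches the definite integral, which is exactly the limiting idea behind the area formulas in the notes.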
{"url":"https://cbseacademic.in/class-12/notes/maths/application-of-integrals/","timestamp":"2024-11-07T22:00:31Z","content_type":"text/html","content_length":"97453","record_id":"<urn:uuid:2f25d35a-6e5e-4567-b73c-711fad7be14b>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00402.warc.gz"}
Challenging Rate Questions (New question of the week) A conversation last week went through a number of interesting questions, starting with a couple on percentages, and moving into some that I would call rate questions. I will extract these, which I think will be useful for others. (The rest could, too, but there was just too much there for one post!) Two problems Picking up in the middle of the discussion: These questions remind me of a similar one I worked out a few days ago, which works on the same principles, but easier to miss. I thought you’d like it, so here it is: Lily, Betty, and Louisa race each other in a 100 meters race. All of them run at a constant speed throughout the race. Lily beats Betty by 20 meters. Betty beats Louisa by 20 meters. How many meters does Lily beat Louisa by? (I did this, though it gave me more of a challenge) Then, there’s this which is even harder (for me). The grass in a field grows in a constant rate. If 240 sheep were to graze on it, it will become bare in 5 weeks. If 200 sheep were to graze on it, it will become bare in 7 weeks. How many weeks would it take 170 sheep to graze, before the field becomes bare? For this, I just figured out that the grass will keep on growing during the 5/7 or however weeks it takes, and 240*5=1200 and 200*7=1400, the difference is of 200 in 2 weeks, which is 100/week. For 170 sheep, it’s 100+ … that’s the part I haven’t figured out. Solution to 100-meter race I replied, giving my answer directly for the problem Sarah had already solved, and expanding on what she said about the other, which was a little cryptic: Yes, these are nice challenges that can probably be solved in many ways, some of which are surely nicer than what I’ll do. For the first, Lily runs 100 m in the time Betty runs 80 m, so Lily’s speed is 100/80 times Betty’s speed. Similarly, Betty’s speed is 100/80 times Louisa’s speed, and therefore Lily’s speed is 100/80 * 100/80 = 100/64 times Louisa’s. 
In the time Lily runs 100 m, Louisa runs 64 m; so Lily beats Louisa by 36 m. (Not, as one might initially think, 40 m. But you and I know better!) Thoughts about the grazing sheep The second is harder. You made a very nice start. One way to finish would be to repeat what you did, comparing the 170 sheep case with one of the others, perhaps using a variable. It may help to label the quantities, so you can see better what you are doing. Your 1200 and 1400 are measured in sheep-weeks (number of sheep times number of weeks); this can be thought of as a measure of the quantity of grass (how much a sheep eats in a week). So the grass grows at a rate of 100 sheep-weeks per week. That is, 100 sheep could graze in the field indefinitely, without using up any grass, because they are eating only the amount that grows; any additional sheep will start overgrazing, decreasing the amount of grass. So if you reduce the number of sheep from 200 to 170, how much longer can they graze? This becomes just another proportion problem, if you look at it right. I’d usually just do this with algebra, though that may get pretty complicated, in order to make sure I was thinking right. Variables might be the rate at which each sheep eats grass (say, a pounds per week), the rate at which grass grows (b pounds per week), the amount of grass initially in the field (c pounds), and what we need to solve for, the number of weeks with 170 sheep, w. This works, but is a little ugly. Sarah answered: As for the “ugly” algebra, I think I’d prefer that to simple proportion, because I don’t think I’m seeing it right. Simple proportion, maybe you do this: 240 sheep = 5 weeks = 1200 sheep-weeks – (5*100) = 1200 – 500 = 700 So 240 sheep = 700 sheep-weeks, 170 sheep = ? When you say simple proportion, we have to use inverse proportion here, right? I’m not quite seeing what to do with algebra, but we know b, right? 
We had previously discussed the need to write carefully what you mean; the string of equalities above is a little confusing to me, because these quantities are not really equal. I think she meant something like this: 240 sheep for 5 weeks = 1200 sheep-weeks eaten grass growing for 5 weeks = 5*100 = 500 sheep-weeks added 1200 – 500 = 700 sheep-weeks used up So 240 sheep use 700 sheep-weeks, 170 sheep = ? This is all good thinking, but doesn’t lead yet to the answer. I’ll come back later to an insight that could go straight from here to the answer, but frankly, I didn’t notice at the time how close she was. Solution by proportion I replied, actually finishing the work by the method I had described, because it struck me as too complicated to communicate without just showing it: The proportion is pretty subtle — enough so that I, too, would be a lot more confident with the algebra. But here’s the thinking: since 100 sheep would be just enough to keep the grass constant (eating just as much each week as grew), the time to use up the grass will be inversely proportional to the excess over 100 sheep. Since 200 sheep use up the grass in 7 weeks, 170 use up the grass in w weeks: (200-100):(170-100) = w:7 100:70 = w:7 w = 10 As for algebra, I suggested the variables might be the rate at which each sheep eats grass (say, a pounds per week), the rate at which grass grows (b pounds per week), the amount of grass initially in the field (c pounds), and what we need to solve for, the number of weeks with 170 sheep, w. We know none of these things initially; and we know nothing about pounds (or any other actual unit of quantity), so we will never actually solve for a, b, or c! (We could if I had used sheep-weeks instead of pounds, but I probably wouldn’t think of that until later.) From the first given fact, we know that \(5b – 240\cdot 5a = c\) (amount grown – amount eaten = amount in field). Similarly, \(7b – 200\cdot 7a = c\). 
Those two equations can be solved to find the ratios of the three variables, though not their actual values; this is in effect what you already did. Then we have to solve \(wb – 170\cdot wa = c\) for w. This is what I had in mind when I said you could finish by (in some sense) doing the same thing but with a variable. Solution by algebra Sarah was able to do the algebra as far as she got on her own previously: I follow most of what you said. I don’t think I would have managed the proportion. You say w = 10 weeks, but how can you check to confirm it’s correct? And if you use ratios with variables as your second method above, here is what I have: 5b – 1200a = 7b – 1400a -1200a + 1400a = 7b – 5b 200a = 2b 100a = b which we had already found. I don’t see how that helps us finish it off. I just gave a nudge: Now use the third equation! As I said, “Those two equations can be solved to find the ratios of the three variables, though not their actual values; this is in effect what you already did. Then we have to solve \(wb – 170\cdot wa = c\) for w.” Actually, it’s not just that one equation; I’d replace c with the expression from one of the first two equations, and also replace b with 100a. Then solve for w. As for checking an answer, I’d probably just plug my answer into the equations. In this case you don’t have actual values for a, b, and c, but you could express them all in terms of a, plug those expressions in, and see what happens. If I hadn’t used algebra, I might work through the statement of the problem in terms of sheep-weeks, showing that my answer worked.
After a false start, she got it: Combining the first and third equations, wb – 170*wa = 7b – 200*7a w(100a) – 170*wa = 7 (100a)- 200*7a 100aw – 170aw = 700a – 1400a Dividing by a 100w – 170w = 700 – 1400 -70w = -700 w = -700/-70 = 10 weeks Let me try with the first equation: wb – 170*wa = 5b – 240*5a w (100a) – 170wa = 5 (100a) – 1200a 100aw – 170aw = 500a – 1200a 100w – 170w = 500 – 1200 -70w = -700 w = -700/-70 w = 10 weeks which is correct! (I used both equations because I made a mistake in my first one and whilst doing it with the second equation I found it) Now for the checking: 100aw – 170aw = 500a – 1200a 100a (10) – 170a (10) = -700a 1000a – 1700a = -700a -700a = -700a I don’t see how to do it using sheep weeks. Checking by sheep-weeks Again, I just showed the check because it would be too hard to give a mere hint; and I wasn’t sure what it would look like until I did it: Let’s see if I can check it with sheep-weeks. (I hadn’t tried.) The problem was The grass in a field grows in a constant rate. If 240 sheep were to graze on it, it will become bare in 5 weeks. If 200 sheep were to graze on it, it will become bare in 7 weeks. How many weeks would it take 170 sheep to graze, before the field becomes bare? We claim the answer is If 170 sheep graze, it will be 10 weeks before the field becomes bare. We also found that the rate of growth is 100 sheep-weeks per week. In 5 weeks, 500 sheep-weeks of grass grows; and 240 sheep eat 1200 sheep-weeks of grass. That tells us that the field starts out with 700 sheep-weeks of grass. Now we have numbers for everything, so we can check that the other two scenarios also work: In 7 weeks, 700 sheep-weeks of grass grows, and 200 sheep eat 1400 sheep-weeks of grass. Starting with 700, adding 700, and removing 1400 leaves the field bare. In 10 weeks, 1000 sheep-weeks of grass grows, and 170 sheep eat 1700 sheep-weeks of grass. Starting with 700, adding 1000, and removing 1700 leaves the field bare. So everything checks.
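The sheep-week bookkeeping above can be packaged into a one-line check. This sketch (illustrative, not from the discussion) uses the derived values: grass grows at 100 sheep-weeks per week, and the field starts with 700 sheep-weeks.

```python
# Sheep-week bookkeeping for the grazing problem.
GROWTH = 100    # sheep-weeks of grass grown per week
INITIAL = 700   # sheep-weeks of grass initially in the field

def weeks_until_bare(sheep):
    # Each week the flock eats `sheep` sheep-weeks while GROWTH
    # sheep-weeks grow back, so only the surplus flock above 100
    # sheep drains the initial stock of grass.
    return INITIAL / (sheep - GROWTH)

print(weeks_until_bare(240))  # 5 weeks, as given
print(weeks_until_bare(200))  # 7 weeks, as given
print(weeks_until_bare(170))  # 10 weeks, the answer
```

This is the same inverse proportion as before: the time is inversely proportional to the excess over the 100 sheep the field can sustain indefinitely.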
That sort of check takes more work (because we had to find all the relevant numbers, and check without ready-made equations), but it’s more satisfying to me, because it confirms that the equations made sense. Notice where I found that the field holds 700 sheep-weeks of grass. Now look back at Sarah’s attempt at solving with this approach; she ended with, “So 240 sheep use 700 sheep-weeks, 170 sheep = ?” So she’d found how much grass there was in the field! She might have continued like this: With 170 sheep, we are 70 sheep above the number that eat just the amount that grows. In how many weeks will these 70 extra sheep use up the 700 sheep-weeks of grass? 10 weeks! Which is better: the algebra, or all the ad-hoc thinking about proportions and sheep-weeks and sustainable grazing? I’m not sure. I like solving a tricky problem in two very different ways (and also checking), in order to gain confidence. Thank you so much for all your help. I appreciate your time. It makes even more sense when you see it like that, and yes, I agree it’s more satisfying. I wasn’t expecting you to do all of it again, thanks! I feel more confident doing these kinds of problems now 🙂 While writing this, I found that we have previously answered a question of this type; the initial answer from 1996 was incomplete, but a reader later wrote in with an algebraic solution: Feeding Oxen In this more complicated problem, the solution involves growth rates measured in ox-weeks per acre.
{"url":"https://www.themathdoctors.org/challenging-rate-questions/","timestamp":"2024-11-06T12:08:48Z","content_type":"text/html","content_length":"121998","record_id":"<urn:uuid:d4eefff1-f088-4785-84b0-97a562f13d12>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00675.warc.gz"}
Intro to Quantum Computing Sat 27 July 2024 I work at a trapped ion quantum computing company writing compiler and control software in Rust and Python. I've had to learn about quantum computing for my work, and I have found it hard to find introductory literature that is both informative and accessible to me. I thought it would be nice to put together my own introductory guide, and in the process improve my own understanding. After a lot of reading, and a lot of questions answered by my excellent colleague Dr Amy Hughes, I've put something together that covers some of what I was hoping. I think the first section should be accessible even without much maths or physics knowledge. After that, some basic linear algebra knowledge will be helpful, as well as some basic knowledge of atomic physics. I hope this will elucidate the basic idea of how quantum algorithms work, what "superposition" and "entanglement" mean and why they are useful, and give a concrete physical illustration of how these abstract ideas can be realised on an actual qubit. Unpacking the hype There are two properties of quantum computers that you might have heard about, expressed variously: 1. Qubits can be in many states at once! Each qubit exists in a superposition of 0 and 1! 2. We can entangle qubits together, so that the state of one qubit determines the state of others! Somehow we are to believe that these properties allow quantum computers to be large and efficient parallel machines. In some sense this is true: superposition allows quantum computers to explore multiple computational paths simultaneously, and entanglement creates a state space that grows exponentially with the number of qubits. But when you measure a qubit, you get a value of $0$ or $1$. So what good are our vast parallelised computations if our output is compact, encoded in the same binary digits we get out of a classical computation?
A small amount of classical information becomes a large amount of quantum information during the quantum computation. But the result is again a small amount of classical information. How can this be? One of the things quantum computers are useful for is extracting some property of the large superposed state. So rather than quantum computers being useful for calculating lots of numbers in parallel, they can be useful for calculating, say, one number that describes some structure in a vast collection of many other numbers. Shor's algorithm uses quantum computers in this way. Public-key cryptography methods such as RSA multiply two large prime numbers together to generate the encryption key. Those two large prime numbers are kept secret, as the decryption key can be derived from them. Hence, the security of such encryption methods relies on it being infeasible for classical computers to find prime factors of a large integer. In Shor's algorithm, the problem of factoring that integer is reduced to finding the period of a repeating sequence. That sequence is then encoded into the qubits' state and the period is computed as the compact output of the computation. Factoring an $n$-bit integer with Shor's algorithm requires something like $2n + 3$ logical^1 qubits^2, depending on the implementation. So for a 2048-bit integer, as often used for RSA keys, something like 5099 logical qubits would be required. At writing, the most powerful quantum computer in the world has the equivalent of 20^3. Re-visualising the flow of information We can now update our image above: Many quantum algorithms exploit the large quantum state space to encode information that has some kind of large pattern or structure. That structure is then efficiently extracted. Importantly, quantum computers are not equivalent to many parallel classical computers. The quantum state space may be vast, but the result of our computation is compact.
This makes them suitable for computations that can be formulated into the general structure shown in the image above, but not for simply parallelising many classical computations. How exactly do we go from lots of quantum information to a small amount of classical information? As we've previously stated, when you measure a qubit you get a compact result of 0 or 1. Before that measurement, we have a great deal more information. Don't we want that information? How do we distill the information we're interested in into our compact output? As we'll explore further in a later section, a qubit's state is expressed using complex numbers that represent measurement probabilities. A qubit in superposition is one that has <100% probability of being measured as 0 or 1. If our algorithm resulted in every qubit having a 50% probability of being measured as 0 or 1, it would be indistinguishable from noise. So probabilistic quantum algorithms must manipulate the probabilities such that we have a high probability of measuring something useful. Manipulation of qubit state is done via quantum logic gates, or simply gates. All gates are linear operators: each output is some linear combination of every input. So when we perform a multi-qubit gate, each qubit's state becomes a weighted sum of the input qubit states. When doing this sum, the qubit states can cancel each other out (destructive interference) or amplify each other (constructive interference). We can think of our quantum algorithms as exploring many computational paths, then pruning off those that aren't of interest, leaving us with a high probability of measuring the result we desire. What does the quantum state actually look like? For a single qubit The state of a single qubit can be described by two complex numbers, which encode the probability of measuring $0$ or $1$. 
If you can forgive the introduction of some notation without it being fully explained, we'll use $\ket{0}$ to describe 100% probability of measuring $0$, and $\ket{1}$ for $1$'s equivalent. Our qubit state $\ket{\psi}$ is then described by a linear combination of these: $$ \ket{\psi} = a_0\ket{0} + a_1\ket{1}, a_i \in \mathbb{C} $$ The complex numbers $a_i$ are known as probability amplitudes, and their square magnitude gives the probability of measuring the corresponding output. For example, the probability of measuring $0$ is given by ${\left|a_0\right|}^2$. We cannot directly observe the probability amplitudes, only the binary result of measurement. Why? Intuitively: how could you measure a probability directly? More honestly: well, why does an electron have charge? There are many things in the world that we can observe (whether directly or through complex experimentation) and describe with mathematics -- but ultimately they are "just the way it is". For multiple qubits For every $n$ qubits, we have $2^n$ possible measurement outcomes. For example, for 3 qubits we have possible results of $000, 001, 010, 011, 100, 101, 110, 111$. Each of these outcomes has a probability amplitude associated with it: $$ \ket{\psi} = a_0\ket{000} + a_1\ket{001} + a_2\ket{010} + \dots + a_7\ket{111} $$ Why is it necessary to describe the joint probability distribution across all qubits? Why can't we factor these into independent distributions for each qubit, giving us $2n$ probability amplitudes instead of $2^n$? When qubits are entangled, their measurement outcomes are linked. Therefore, a general description of the state for multiple qubits must describe a joint probability distribution over the qubit measurement outcomes, rather than independent distributions for each qubit. Hence, entanglement is what creates the exponential growth of the state space with the number of qubits. How exactly does the state scale compared to classical computers?
It's not obvious how best to quantify the difference in scale between our $\mathbb{C}^{2^n}$ quantum state space and our $\mathbb{Z}_{2}^{n}$ classical state space. One way we can directly compare them is by considering degrees of freedom. In a classical computer, we have as many degrees of freedom as we have bits. You can fully describe the state of $n$ bits with $n$ values of 0 or 1. In a quantum computer, the degrees of freedom scale exponentially with the number of qubits. How? Comparing how the degrees of freedom in classical and quantum computers scale with the number of bits From our equation for multi-qubit state, we know that the state of $n$ qubits is described by $2^n$ complex numbers. Absent any constraints, $2^n$ complex numbers would have $2 \times 2^n = 2^{n+1}$ degrees of freedom: each complex number is minimally and fully defined by two real numbers, an amplitude and a phase. As the square magnitudes of our complex numbers represent probabilities, we have the constraint $\sum_{i = 0}^{2^n - 1}\left|a_i\right|^2 = 1$, which removes one degree of freedom. We eliminate another degree of freedom due to the fact that the relative phases of the $a_i$ are observable, but the global phase of $\psi$ is not: constraining $a_0$ to be real (or, arbitrarily, imaginary) does not change the physical meaning of $\psi$. That leaves us with $2^{n+1} - 2$ degrees of freedom. Visualising qubit state We can visualise a single qubit's state on a unit sphere, known as the Bloch sphere: The qubit's state vector always has magnitude one, hence it always touches the surface of the unit sphere. The angles $\theta$ and $\phi$ are one way of expressing our two degrees of freedom.
We can write the basis states in terms of these angles: $$ \ket{0} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \equiv \begin{bmatrix} \cos\frac{\theta}{2} \\ 0 \end{bmatrix} $$ $$ \ket{1} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \equiv \begin{bmatrix} 0 \\ e^{i\phi}\sin\frac{\theta}{2} \end{bmatrix} $$ We will call the states $\ket{0}$ and $\ket{1}$ our computational basis, or simply our basis states. But what about visualising multiple qubits? The Bloch sphere gives us a convenient way of visualising the state of a single qubit, but what about multiple qubits? As previously described, if there is no entanglement, then the state of a multi-qubit system can be factored into separate individual qubit states. In that case, we can visualise the state of $n$ qubits on $n$ Bloch spheres. If there is entanglement, we cannot factor our many-dimensional state into multiple 2-dimensional states, and so visualisation can be challenging. We won't provide any visualisations here, but there are several proposed methods^4^5. Manipulating qubit state Now that we can visualise a single qubit's state on the Bloch Sphere, we can more easily imagine that state being manipulated. What would it mean for the state vector to be aligned with the positive $x$ axis? That's $90^{\circ}$ rotated around the $y$ axis from either of our basis states. So it's halfway between them! This is superposition. Superposition on the Bloch sphere We can write the state vector in terms of $\ket{0}$ and $\ket{1}$: $$ \ket{\psi} = \frac{1}{\sqrt{2}}\left(\ket{0} + \ket{1}\right) $$ Remember, the square magnitude of the coefficients of the basis states gives us the probability of measuring them. In this case, $\left|\frac{1}{\sqrt{2}}\right|^2 = \frac{1}{2}$: we have an equal probability of measuring either $0$ or $1$. Manipulating qubit state is also described as executing a quantum logic gate, or gate for short.
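As a quick numerical sanity check (my own sketch, not from the original post), we can confirm that the equal-superposition amplitudes above square to a probability of one half each:

```python
import numpy as np

# Basis states |0> and |1> as column vectors.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# The equal superposition |psi> = (|0> + |1>) / sqrt(2).
psi = (ket0 + ket1) / np.sqrt(2)

# Squared magnitudes of the probability amplitudes.
probs = np.abs(psi) ** 2
print(probs)        # ~[0.5 0.5]: equal chance of measuring 0 or 1
print(probs.sum())  # ~1.0: the normalisation constraint holds
```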
We can write a matrix operator for a gate that would take a qubit from $\ket{0}$ to the superposed state shown in the images above (although via a different rotation than described above): $$ \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\ket{0} = \frac{1}{\sqrt{2}}\left(\ket{0} + \ket{1}\right) $$ This gate is known as the Hadamard gate, and is shown graphically as a square with an H in it: Creating entanglement Let's demonstrate how entanglement is created between two qubits. Similar to our single and multi-qubit equations above, if we have two qubits we can write: $$ \ket{\psi} = a_0\ket{00} + a_1\ket{01} + a_2\ket{10} + a_3\ket{11} $$ Where e.g. $a_1$ is the probability amplitude associated with measuring $0$ for the first qubit and $1$ for the second. How would we find the coefficients ${a_0,a_1,a_2,a_3}$ given the state vectors of the individual qubits? For two qubits $c$ and $t$: $$ \begin{aligned} \ket{\psi_c} &= c_0 \ket{0} + c_1 \ket{1} \\ \ket{\psi_t} &= t_0 \ket{0} + t_1 \ket{1} \end{aligned} $$ we compute the state vector $\ket{\psi}$ of the two qubit system using the tensor product of the individual qubits' state vectors: $$ \begin{aligned} \ket{\psi} &= \ket{\psi_c} \otimes \ket{\psi_t} \\ &= \begin{bmatrix} c_0 \\ c_1 \end{bmatrix} \otimes \begin{bmatrix} t_0 \\ t_1 \end{bmatrix} \\ &= \begin{bmatrix} c_0 t_0 \\ c_0 t_1 \\ c_1 t_0 \\ c_1 t_1 \end{bmatrix} = \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix} \end{aligned} $$ We can now apply two-qubit operators to this vector, in the same way we applied the Hadamard gate matrix to the single qubit state vector. The controlled NOT or CNOT gate operates on two qubits: a control qubit ($c$) and a target qubit ($t$).
It is defined by this matrix: $$ CNOT = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} $$ and performs this transformation: $$ \begin{aligned} &a_0\ket{00} + a_1\ket{01} + a_2\ket{10} + a_3\ket{11} \\ \rightarrow \hspace{0.5em} &a_0\ket{00} + a_1\ket{01} + a_3\ket{10} + a_2\ket{11} \end{aligned} $$ In other words, the probability amplitudes for the measurement outcomes $\ket{10}$ and $\ket{11}$ are swapped. If the control qubit is purely $\ket{1}$, then the target qubit is flipped. If the control qubit is purely $\ket{0}$, then $a_2$ and $a_3$ equal $0$, and there is no effect. If the control qubit is in superposition, then the effect on the target qubit will be correspondingly partial. Let's apply a CNOT gate to our qubits $c$ (for control) and $t$ (for target): $$ \begin{aligned} \psi &= \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} c_0 t_0 \\ c_0 t_1 \\ c_1 t_0 \\ c_1 t_1 \end{bmatrix} = \begin{bmatrix} c_0 t_0 \\ c_0 t_1 \\ c_1 t_1 \\ c_1 t_0 \end{bmatrix} \end{aligned} $$ If $C$ is $\ket{0}$, then ${c_0, c_1} = {1, 0}$: $$ \psi = \begin{bmatrix} t_0 \\ t_1 \\ 0 \\ 0 \end{bmatrix} $$ and if $C$ is $\ket{1}$, then ${c_0, c_1} = {0, 1}$: $$ \psi = \begin{bmatrix} 0 \\ 0 \\ t_1 \\ t_0 \end{bmatrix} $$ By inspection, we can see that these are factorisable into separate qubit states: $$ \begin{aligned} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} t_0 \\ t_1 \end{bmatrix} &= \begin{bmatrix} t_0 \\ t_1 \\ 0 \\ 0 \end{bmatrix} \\ \begin{bmatrix} 0 \\ 1 \end{bmatrix} \otimes \begin{bmatrix} t_1 \\ t_0 \end{bmatrix} &= \begin{bmatrix} 0 \\ 0 \\ t_1 \\ t_0 \end{bmatrix} \end{aligned} $$ But what if our control qubit $C$ was in superposition? E.g. ${c_0, c_1} = {\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}}$.
If our target qubit is $\ket{0}$ then we get: $$ \psi = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ 0 \\ 0 \\ \frac{1}{\sqrt{2}} \end{bmatrix} $$ What happens if we try to factor this into two separate qubit states? Using a $'$ to denote the new states of the qubits: $$ \begin{bmatrix} \frac{1}{\sqrt{2}} \\ 0 \\ 0 \\ \frac{1}{\sqrt{2}} \end{bmatrix} \stackrel{?}{=} \begin{bmatrix} c_0' t_0' \\ c_0' t_1' \\ c_1' t_0' \\ c_1' t_1' \end{bmatrix} $$ It's not possible for these to be equal: for the first entries and last entries of each vector to be equal, all of the coefficients $c_0', c_1', t_0', t_1'$ must be non-zero. But for the middle entries of the vectors to be equal, at least two of the coefficients must be zero. Hence it is not possible for us to factorise this state vector into two separate qubit states. We have created an entangled state. We can only describe the state of the two qubit system: we cannot meaningfully describe the qubits separately. How do you actually make a qubit? Making a quantum computer requires extremely fine control of the quantum state. The larger an object is, the harder that control becomes. Even a simple molecule presents challenges, as the state is affected by minute changes in the relative position of its component atoms due to thermal vibration. There are a variety of different qubit types used, but we'll use trapped ions as an example below. Trapped ion qubits What do $\ket{0}$ and $\ket{1}$ correspond to physically? In a trapped ion qubit, we map discrete excitation states of an ion to our computational basis states $\ket{0}$ and $\ket{1}$. Which excitation states you choose depends on various properties, including how stable the states are, and how they can be measured. We'll give an example below of a specific architecture that might be used. How do we measure the qubits to get the output of our computation? As an example, let's take single-charge ions of an element from Group 2 of the periodic table. 
Neutral atoms would have 2 electrons in their outermost electron shell, so our +1 charge ions have a single electron in that shell. In the below diagram, the labelled horizontal lines are energy levels (e.g. S1/2) of this electron, increasing in energy vertically. Each of the letters (S, P, D) represents an electron sub-shell, and the numbers are a measure of momentum. [Diagram: energy levels S1/2, P1/2 and D5/2. Laser-driven transitions between S1/2 and P1/2 form the fluorescence cycle, with spontaneous decay back to S1/2 emitting photons; laser-driven transitions between S1/2 and D5/2 perform shelving and de-shelving.] Within each energy level there are multiple excitation states. For ions with no nuclear spin, there are two states in S1/2, which correspond to the two possible spin states of the electron. If we map our computational bases $\ket{0}$ and $\ket{1}$ to these two states, how could we then measure the result at the end of our computation? When ions decay from a higher energy state to a lower energy state, the lost energy is emitted as a photon. Photons are something we can measure. In our example, both of our computational basis states are in S1/2. We can make the ions fluoresce by hitting them with a laser with the appropriate wavelength to drive transitions between P1/2 and S1/2. The laser drives transitions in both directions, so if we keep the laser on, the ions will continuously swap between P1/2 and S1/2, emitting photons when they move from the higher energy state to the lower. However, because both of our computational bases are in S1/2, we won't be able to differentiate between measuring a 0 and a 1! The properties of this transition are such that it's not feasible to have a laser that could select one of our two states within S1/2. Luckily, the transitions between S1/2 and D5/2 can be driven selectively. So, prior to measurement we "shelve": we map one of our basis states to an excitation state in D5/2.
Measurement is then done by inducing fluorescence: this is when we get a final 0 or 1 measurement out from each qubit. The ions will be in a superposition of a state in S1/2 and a state in D5/2. Some will be measured to have been in S1/2 (as these will fluoresce, which we measure as 1) and others will be in D5/2 (as these will not fluoresce, which we measure as 0). The D5/2 state is stable enough for us to keep the ions there until we've collected enough photons to make a measurement. We can tune our light detection to photons of the frequency that will be emitted by P1/2 -> S1/2 transitions, so that other spontaneous decay does not affect our measurements. Will quantum computers replace classical computers? Quantum computers are only well-suited to certain classes of problems, so I don't think it's likely that they'd replace classical computers. The goal is to enable solving of computational problems we can't solve today, not to solve the same problems with a new kind of computation. Are quantum computers useful? Not yet! Lol. But hopefully they will one day enable us to do useful things we can't do today. The most exciting one for me is that quantum computers should enable better simulation of quantum mechanics, which in turn will enable better drug discovery.
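As a closing sketch tying the gate sections together (my own NumPy illustration, not from the original post): applying a Hadamard to the control qubit and then a CNOT to the pair reproduces the non-factorisable entangled state we derived earlier.

```python
import numpy as np

# Hadamard gate, as given in the single-qubit section.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

# CNOT matrix from the entanglement section (control = first qubit).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)

# Put the control into superposition, leave the target in |0>,
# then build the joint state with the tensor (Kronecker) product.
control = H @ ket0
joint = np.kron(control, ket0)

# CNOT swaps the |10> and |11> amplitudes, giving (|00> + |11>) / sqrt(2).
bell = CNOT @ joint
print(bell.round(3))  # amplitudes ~ [0.707, 0, 0, 0.707]
```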
Active flap control with the trailing edge flap hinge moment as a sensor: using it to estimate local blade inflow conditions and to reduce extreme blade loads and deflections Articles | Volume 6, issue 3 © Author(s) 2021. This work is distributed under the Creative Commons Attribution 4.0 License. Active trailing edge flaps are a promising technology that can potentially enable further increases in wind turbine sizes without the disproportionate increase in loads, thus reducing the cost of wind energy even further. Extreme loads and critical deflections of the blade are design-driving issues that can effectively be reduced by flaps. In this paper, we consider the flap hinge moment as a local input sensor for a simple flap controller that reduces extreme loads and critical deflections of the DTU 10MW Reference Wind Turbine blade. We present a model to calculate the unsteady flap hinge moment that can be used in aeroelastic simulations in the time domain. This model is used to develop an observer that estimates the local angle of attack and relative wind velocity of a blade section based on local sensor information including the flap hinge moment of the blade section. For steady wind conditions that include yawed inflow and wind shear, the observer is able to estimate the local inflow conditions with errors in the mean angle of attack below 0.2^∘ and mean relative wind speed errors below 0.4%. For fully turbulent wind conditions, the observer is able to estimate the low-frequency content of the local angle of attack and relative velocity even when it is lacking information on the incoming turbulent wind. We include this observer as part of a simple flap controller to reduce extreme loads and critical deflections of the blade.
The flap controller's performance is tested in load simulations of the reference turbine with active flaps according to the IEC 61400-1 power production with extreme turbulence group. We used the lifting line free vortex wake method to calculate the aerodynamic loads. Results show a reduction of the maximum out-of-plane and resulting blade root bending moments of 8% and 7.6%, respectively, when compared to a baseline case without flaps. The critical blade tip deflection is reduced by 7.1%. Furthermore, a sector load analysis considering extreme loading in all load directions shows a reduction of the extreme resulting bending moment in an angular region covering 30^∘ around the positive out-of-plane blade root bending moment. Further analysis reveals that a fast reaction time of the flap system proves to be critical for its performance. This is achieved with the use of local sensors as input for the flap controller. A larger reduction potential of the system is identified but not reached mainly because of a combination of challenging controller objectives and the simple controller architecture. Received: 08 Jan 2021 – Discussion started: 04 Feb 2021 – Revised: 19 Apr 2021 – Accepted: 20 Apr 2021 – Published: 02 Jun 2021 Wind turbines have increased dramatically in size over the past years in an effort to reduce the cost of wind energy and make it a competitive source of energy. The increased installation numbers of new wind turbines in Germany over the years are an example of the success that this strategy has had (Burger, 2018). Increasing the turbine size also has its drawbacks. A larger rotor will see increased loads that cannot be compensated for by a geometric upscaling of the components alone (Jamieson, 2018, 97–123). In order to withstand these loads, the structure of turbine components such as the blade has to be stiffer, which requires more material or stronger (and more expensive) material. This results in an increase in cost of energy.
Advanced turbine controllers counteract the increased loads of larger turbines, thereby limiting the need for additional material and decreasing the cost of energy. The most widely used actuator for load reduction is the pitch actuator. Its main advantage is its availability because it is already being used in the power regulation strategy. Yet it has several drawbacks regarding load control. As wind turbines become larger, the frequency bandwidth of their pitch actuators is reduced mainly due to the increased inertia of the blade. This reduces the ability of current advanced load-reduction controllers – based on full span pitch control – to react to sudden local wind gusts and to fast-changing turbulent inflow, thereby limiting their effectiveness. Loads arising from non-uniform wind fields (e.g., wind shear and turbulence) have a greater effect in larger turbine rotors (Madsen et al., 2020). Such effects are better countered with spatially distributed actuation devices than with an actuator for the whole blade, as is the case for the pitch system. A promising alternative to conventional full span pitch control is the concept of the smart rotor with distributed active flow control (AFC) devices. Barlas and van Kuik (2010) give an overview of the different control concepts, actuators and sensors that are relevant for active load alleviation in smart rotors. The authors conclude that trailing edge (TE) flaps and microtabs are the most promising actuation concepts. This comes from their ability to effectively change the local lift, their high actuation bandwidth and the simplicity of their implementation. 1.1 Sensors for active flap control In addition to AFC actuators, the choice of AFC sensors is another critical aspect for the load alleviation strategies of smart rotors. Cooperman and Martinez (2015) give a review of the possible sensors that can be used in smart rotor control.
Many of these sensors have been used in TE flap control studies over the past years. We can broadly categorize them into three groups: strain sensors, inertial sensors and inflow sensors. • Strain sensors measure the local strains on a component. These can be conventional strain gauges or optical strain sensors. Strain sensors have been widely used in TE flap control strategies. In particular, strain sensors measuring the flapwise blade root bending moment (BRBM) have received much attention. A common strategy that uses these sensors is called individual flap control (IFC). It is based on the individual pitch control (IPC) strategy (Bossanyi, 2003) and is often used in combination with the latter (Plumley et al., 2014b; Lackner and van Kuik, 2010; Plumley et al., 2014a; Jost et al., 2015; Bernhammer et al., 2016; Zhang et al., 2016). Other studies have used the flapwise BRBM sensor in PID-type controllers (Barlas et al., 2016b; Bartholomay et al., 2018), model-based controllers (Henriksen et al., 2013; Bergami and Poulsen, 2015; Chen et al., 2016; Ng et al., 2016) and adaptive controllers (Navalkar et al., 2014). • Inertial sensors are able to measure the motion of a component resulting from a force. The most common inertial sensors are accelerometers. They offer the advantage of sensing the effect of the loads before any deflections and strains have occurred. If used for TE flap control, these sensors give the controller potentially more time to react to sudden disturbances. Integrating the acceleration values gives information about the velocity and displacement of the component. Zhang et al. (2016) compare the performance of IFC strategies based on acceleration and deflection signals to the more traditional IFC based on the flapwise BRBM. In Berg et al. (2009) and Engels et al. (2010), the authors implement TE flap controllers that use the blade tip deflection/deflection rate with a PD or PID feedback loop to control the flaps. 
This strategy is also used in Wilson et al. (2009), where the combination of IPC and the aforementioned feedback flap control is explored. In the INNWIND report (Jost et al., 2015), a similar control strategy that uses the low-passed blade tip acceleration as input for a PI feedback loop that sets the flap angle is implemented. A model predictive controller (MPC) for TE flaps based on local blade displacement is used in Barlas et al. (2012). • Inflow sensors are able to measure the local aerodynamic conditions of a blade section. They are attractive because they are able to measure the source of the aerodynamic loads on a specific blade section before they affect the section. This gives the TE flaps more time to react to aerodynamic disturbances and hence to reduce the loading. Local inflow sensors on the blade include pitot tubes and surface pressure sensors. A drawback of pitot tubes is that they might vibrate during operation or be affected by rain, ice, dirt or insects. Both factors diminish the accuracy of the measurements or in extreme cases disrupt them. Surface pressure sensors can be expensive and fragile and may be clogged during operation (Cooperman and Martinez, 2015). Other types of inflow sensor are remote inflow sensors and nacelle-mounted sensors. An example of the former are lidar sensors while spinner-mounted anemometers are an example of the latter. Bartholomay et al. (2021) analyze the load reduction capabilities of feed-forward flap controllers on a 2D airfoil in a wind tunnel. The controllers estimate the lift acting on the airfoil by means of either a pitot tube or three surface pressure ports. Barlas et al. (2018) test an active flap system on a 2m blade part mounted on a rotating test rig. They demonstrate the load alleviation potential of the system using an open-loop controller based on local inflow measurements. 
Andersen (2010) studies several control strategies based on local inflow measurements and their combination with strain gauges placed along the blade span. Barlas et al. (2012) also considered an extension of the MPC strategy that measured the local inflow of the blade section. Jones et al. (2018) use a blade-mounted pitot tube in addition to the blade root strain gauges as an addition to an IPC strategy. The use of the inflow sensor in a cascaded configuration helps improve the performance of the controller by bypassing the limitations of the slow blade dynamics on the controller input. Manolas et al. (2018) combine an IPC strategy with a feed-forward TE flap controller based on the inflow measurements of a spinner-mounted anemometer. Ungurán et al. (2018) use a combination of a model-based feedback individual pitch and flap controller and a model-inverse-based feed-forward controller that uses the blade effective wind speed measured with a blade-mounted lidar system. 1.2 Active flap fatigue control and blade design driving loads Strain sensors measuring the flapwise BRBM have been a popular and effective choice because the main focus of TE flap control has been fatigue load reduction of the out-of-plane loads. The frequency content of these fatigue loads is fairly low (Bergami and Gaunaa, 2014). Therefore, these sensors can be effectively used even if they measure the effect of the aerodynamic loads with a certain delay (caused by the blade inertia). Out-of-plane fatigue loads are design driving for several turbine components. Yet if we focus on the wind turbine blade, flapwise fatigue loads are not necessarily the best objective to use TE flaps or other local AFC devices for. • Flapwise BRBM fatigue loads have a fairly low-frequency content. Most of the damage concentrates around the 1P frequency (Bergami and Gaunaa, 2014) and can be addressed by the pitch controller using strategies such as IPC.
• Using TE flaps to mitigate 1P and 2P flapwise BRBM means that the high bandwidth capabilities of these actuators are not used effectively. • Using AFC for fatigue load reduction requires a high number of duty cycles of the actuator. This is a limiting factor in the choice of the actuator. It is also not in line with the current philosophy of wind turbine designs since wind turbines are designed to be low maintenance machines. Having a high number of duty cycles might result in high maintenance requirements for the AFC system. Blade optimization studies that include fatigue-oriented TE flap control as part of their optimization features did not show significant additional blade mass reduction compared to an optimized blade design without flaps. Barlas et al. (2016a) optimize the DTU 10MW Reference Wind Turbine blade (Bak et al., 2013) using a multidisciplinary design, analysis and optimization tool. They find that the results of the blade optimization are comparable if the blade is optimized with a fatigue-focused flap controller or if the blade is optimized without flaps altogether. Chen et al. (2017) use an optimization algorithm to optimize the blade of the NREL 5MW Reference Wind Turbine (Jonkman et al., 2009) so that the levelized cost of energy is minimized. The algorithm also optimizes a model-based controller used for pitch and TE flap control (Chen et al., 2016). They conclude that "[t]he blade optimization problems addressed in this work are primarily driven by flapwise stiffness, with blade deflection, rotor thrust and flapwise ultimate stresses in the spar forming the design drivers" (Chen et al., 2017, p. 764). Chaviaropoulos et al. (2014) point out that important load components for preliminary innovative concepts of multi-megawatt-scaled turbines include the ultimate loads of the resulting BRBM, ultimate and fatigue loads of the blade root torsional moment, fatigue loads of the edgewise BRBM, and the blade tip-to-tower clearance.
From these findings one can conclude that for modern large rotor blades, the extreme value of the resulting BRBM and the blade tip-to-tower clearance will be decisive. In the edgewise direction, the design-driving loads will be dictated by the fatigue loads arising from the blade's mass. In this direction, the blade root has to endure a load fluctuation with an amplitude equaling the blade's static moment once per revolution (1P). 1.3 Active flap control for reducing extreme blade loads and deflections Should the objective of an AFC strategy shift to reducing flapwise extreme loads and critical deflections, it would also have a positive effect on the design-driving edgewise fatigue loads of the blade. When an effective controller reduces extreme loads and deflections, the spar cap of the blade can be optimized and hence the mass of the blade reduced. This in turn reduces the edgewise fatigue loads near the blade root. These loads are mainly caused by gravitational forces. In additional optimization loops, the blade mass could be further reduced, potentially creating a virtuous cycle. If a TE flap control strategy is used to alleviate loads with high frequency content (such as design-driving extreme loads), the choice of a sensor for controller input becomes much more relevant. Using the flapwise BRBM as an input sensor for the TE flap controller has its advantages in the implementation and maintenance of the sensor but can have negative effects on the performance of the controller. Because sensor and actuator are far apart, aerodynamic loads and deflections at the blade tip are sensed at the root with a considerable time lag. This time lag carries through to the actuation response of the controller, limiting its effectiveness (Andersen et al., 2010; Fisher and Madsen, 2016; Jones et al., 2018).
Inflow sensors would be an attractive sensor choice were it not for the practical drawbacks of cost (e.g., lidar) or susceptibility to the elements (e.g., pitot tubes and pressure sensors) in their implementation (Cooperman and Martinez, 2015). A possible sensor that has not received much attention from the community is the TE flap hinge moment (Behrens and Zhu, 2011). It has the advantage of providing local loading information of a blade section without the need of additional inflow sensors on the blade. The flap actuator is assumed to be enclosed inside the blade section. This protects the hinge moment sensor (located in the actuator) from the elements, making it a very robust sensor. The loading information from the flap hinge can be used in combination with the high actuation bandwidth of the flaps to effectively reduce extreme loads and critical deflections of the blade. This idea is attractive because it uses the robust and already available hinge moment sensor in the flap actuator system as an input for the controller, thus addressing the aforementioned drawbacks of cost and susceptibility. It is also advantageous if the TE flap system is to be designed in a modular layout. The goal of the present study is to explore and quantify the potential of TE flaps to mitigate design-driving extreme loads and deflections. We are also interested in exploring the possibility of using a local and robust sensor as an input choice for a controller strategy. This paper presents a novel method that allows the use of the TE flap hinge moment as an input sensor. It is a model-based observer that estimates the effective angle of attack of the blade section given the hinge moment, the accelerations and the relative wind velocity of the section (the last quantity is also estimated with our method). In addition, we use this observer as part of a simple extreme load controller for flaps and analyze its performance in a power production scenario with extreme turbulent wind. 
Section 2 summarizes our turbine and flap models as well as the aeroelastic simulation tools used in this study. In Sect. 3, we present the model used to calculate the unsteady flap hinge moment in aeroelastic simulations and our novel observer that estimates the local aerodynamic information based on this and other sensors. Section 4 presents the results of the observer under steady and turbulent wind conditions. In Sect. 5, we present a simple extreme load controller that uses this observer. We analyze the controller's performance in reducing extreme loads under challenging extreme turbulent wind conditions. The conclusions are drawn in Sect. 6.

The DTU 10 MW RWT was chosen as the turbine model. It is representative of the new generation of wind turbines and has been used in several research studies. The complete description of the turbine can be found in Bak et al. (2013).

2.1 Blade with trailing edge flaps

The blade of the DTU 10 MW RWT was modified to accommodate TE flaps. The flaps are modeled via dynamic polar sets that describe the airfoil with discrete flap angles. For this study we chose a flap that covers 10 % of the chord and has maximum deflection angles of $\delta = \pm 15^{\circ}$. Figure 1a shows the polar data of the FFA-W3-241 airfoil – used in the outer part of the blade – with the modeled flap at maximum and minimum deflection. In total, 15 polar sets were generated for different discrete flap positions using a Reynolds number of $1.5\times10^{7}$. The polars were obtained using the airfoil simulator XFLR5 integrated into QBlade (Marten et al., 2010). In the simulation, the polar data between the discrete states are linearly interpolated. In total, six flaps are integrated into the blade, as can be seen in Fig. 1b. Each section measures 3 m in the spanwise direction and is assumed to have one hinge moment sensor and one accelerometer located at the center of the section.
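The linear interpolation between the discrete flap-state polar sets can be sketched as follows. This is a minimal illustration with toy polar values, not the actual FFA-W3-241 data or the QBlade implementation:

```python
import numpy as np

def interp_polar(alpha, delta, delta_grid, cl_tables, alpha_grid):
    """Bilinear lookup: interpolate the lift coefficient over the angle of
    attack, then linearly between the discrete flap-angle polar sets."""
    # Cl at the requested alpha for every discrete flap angle
    cl_at_alpha = np.array([np.interp(alpha, alpha_grid, cl) for cl in cl_tables])
    # Linear interpolation between the neighboring flap states
    return float(np.interp(delta, delta_grid, cl_at_alpha))

# Toy polar sets for three discrete flap angles (illustrative numbers only)
alpha_grid = np.array([-10.0, 0.0, 10.0])          # deg
delta_grid = np.array([-15.0, 0.0, 15.0])          # deg
cl_tables = [alpha_grid * 0.11 - 0.3,              # delta = -15 deg
             alpha_grid * 0.11,                    # delta =   0 deg
             alpha_grid * 0.11 + 0.3]              # delta = +15 deg

cl = interp_polar(alpha=5.0, delta=7.5, delta_grid=delta_grid,
                  cl_tables=cl_tables, alpha_grid=alpha_grid)
```

Interpolating at a flap angle halfway between two discrete states simply averages the two neighboring polars, which is the behavior the simulation relies on between the 15 generated sets.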
The six sections are located between 64 and 82 m of the blade span, which corresponds to a relative location between 74.1 % and 95 % of the blade length. The flap actuators are modeled as a second-order low-pass filter. In the Laplace domain, the filter takes the form

$$\delta_{\mathrm{LP}}(s)=\frac{\omega^{2}}{s^{2}+2\xi\omega s+\omega^{2}}\,\delta(s), \tag{1}$$

where $\omega$ is the filter frequency and $\xi$ the damping factor. For the flap actuators, we chose a frequency of 5 Hz and a damping factor of 1. The maximum and minimum flap rates are limited to $\pm 100^{\circ}\,\mathrm{s}^{-1}$.

2.2 Aeroelastic simulation tools

We did the development, testing and simulation of the methods presented in this study using two aeroelastic simulation tools: the first is NREL's FAST v8.15 and the second is TU Berlin's QBlade. FAST uses AeroDyn (Moriarty and Hansen, 2005) as its aerodynamic model. It is based on the blade element momentum (BEM) theory and uses several correction models to account for the unsteady aerodynamic phenomena typically present in aeroelastic simulations with turbulent conditions. These are the tip- and root-loss model, the turbulent wake state model, the oblique inflow model, the dynamic stall model, and the tower shadow model. The structural model used in FAST is ElastoDyn. It has a combined multi-body and modal dynamics representation that is able to model the wind turbine with flexible blades and tower (Jonkman, 2003). QBlade uses the lifting line free vortex wake (LLFVW) method as its aerodynamic model (Marten et al., 2015). In this method, the blade aerodynamic forces are evaluated on a blade element basis using polar data. The near and far wake are modeled with vortex line elements. These are shed at the blade's trailing edge during every time step and then undergo free convection behind the rotor.
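Returning to the flap actuator model, the second-order filter of Eq. (1) with the stated rate limit can be sketched in discrete time. This is a minimal sketch using explicit Euler integration; the time step and step command are illustrative, not values from the study:

```python
import math

class FlapActuator:
    """Second-order low-pass actuator model (cf. Eq. 1) with a rate limit.
    Frequency 5 Hz and damping factor 1 as chosen in the text; the rate
    limit of +/-100 deg/s is applied to the internal flap rate."""
    def __init__(self, freq_hz=5.0, damping=1.0, rate_limit=100.0):
        self.w = 2.0 * math.pi * freq_hz   # filter frequency [rad/s]
        self.xi = damping
        self.rate_limit = rate_limit       # [deg/s]
        self.delta = 0.0                   # flap angle [deg]
        self.delta_dot = 0.0               # flap rate  [deg/s]

    def step(self, delta_cmd, dt):
        # delta'' = w^2 * (cmd - delta) - 2*xi*w * delta'
        acc = self.w**2 * (delta_cmd - self.delta) \
              - 2.0 * self.xi * self.w * self.delta_dot
        self.delta_dot += acc * dt
        # saturate the flap rate
        self.delta_dot = max(-self.rate_limit,
                             min(self.rate_limit, self.delta_dot))
        self.delta += self.delta_dot * dt
        return self.delta

act = FlapActuator()
for _ in range(2000):              # 2 s at dt = 1 ms
    out = act.step(10.0, 0.001)    # 10 deg step command
```

With critical damping the response approaches the command without overshoot; the rate limit turns the initial transient into a ramp of at most 100 deg/s.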
Vortex methods can model the wake with far fewer assumptions and engineering corrections compared to BEM methods. Especially when the wind turbine is subjected to unsteady inflow or varying blade loads, the LLFVW method increases the accuracy compared to BEM methods (Perez-Becker et al., 2020). To model the dynamic stall of the blade elements, QBlade uses the ATEFlap unsteady aerodynamic model (Bergami and Gaunaa, 2012), modified so that it excludes contribution of the wake in the attached flow region (Wendler et al., 2016). QBlade is able to model AFC devices such as TE flaps using two dynamic polar sets. They are defined for the inner and outer spanwise locations of the AFC device. It is thereby possible to model AFC elements that span over two different airfoils. The ATEFlap model is also capable of modeling unsteady aerodynamic effects of flap deflections at high reduced frequencies. This allows QBlade to accurately model the aerodynamics of TE flap actuators with high bandwidths. This is required if a flap control strategy aims at reducing extreme loads and deflections. QBlade has a structural solver based on the open-source multi-physics library CHRONO (Tasora et al., 2016). It uses a multi-body representation which includes Euler–Bernoulli beam elements in a co-rotational formulation. It allows QBlade to accurately simulate the blade deflections and include the blade torsion, which has a significant influence on the blade loads. A more detailed comparison between QBlade and (Open)FAST can be found in Perez-Becker et al. (2020). For this study, we used the TUB Controller (Perez-Becker et al., 2021). It is based on the DTU Wind Energy Controller (Hansen et al., 2013), which features a baseline pitch and torque control. It has been extended with a supervisory control based on the report by Iribas et al. (2015). The supervisory control allows the controller to run a full load analysis. 
The controller baseline pitch and torque control parameters were taken from the report by Borg et al. (2015). The controller has been extended so that it can control AFC devices such as TE flaps.

3 Estimation of the aerodynamic information

In this section we present the flap hinge moment model used in this study and the observer based on this model. The latter is able to estimate the effective angle of attack and relative velocity of a flapped blade section by means of local and global sensors, the former being an accelerometer, a flap position sensor and the flap hinge moment sensor. These sensors and the hinge are assumed to be positioned at the spanwise center of the blade section with the active flap. The global sensors are the rotor speed, the rotor azimuth angle and the blade pitch angle. In this study, we define the sign of the hinge moment to be the same as for the flap angle, i.e., positive if the flap moves to the pressure side (see Fig. 2). The observer comprises two parts: an angle of attack estimator and a relative velocity estimator. The first is a collection of linear observers that estimate the angle of attack for different constant relative wind speeds. Together they form a linear parameter varying (LPV) system which is parameterized by the relative wind velocity. The relative velocity estimator uses simple models to estimate the relative wind speed and serves as the parameter input for the LPV system.

3.1 Hinge moment model

Because we use polar data to derive the aerodynamic loads on a blade section, we cannot measure the unsteady hinge moment directly in the simulations. We therefore need a model to calculate the hinge moment on a blade section. Ignoring friction, the total hinge moment on a blade section with a TE flap can be determined as the sum of the hinge moment due to gravity loads, the moment due to the flap inertial loads and the moment due to aerodynamic loads (Plumley, 2015).
3.1.1 Gravity and inertial loads

The gravity loads can be calculated from the component of the flap's weight that acts at the out-of-plane offset of the flap's center of mass relative to the flap hinge. If $m_{\mathrm{flap}}$ is the flap's mass and $d_{\mathrm{flap}}$ the distance of the flap's center of mass from the flap hinge, then the hinge moment due to gravitational loads is given by

$$M_{\mathrm{H\text{-}g}}=m_{\mathrm{flap}}\cdot g\cdot d_{\mathrm{flap}}\cdot \sin\left(\phi\right)\cdot \sin\left(\delta-\left(\theta+\beta\right)\right). \tag{2}$$

Here, $g$ is the gravitational acceleration, $\phi$ the rotor azimuth angle, $\theta$ the aerodynamic twist and $\beta$ the pitch angle of the blade. The reader is referred to the list of symbols (Tables B1 and B2) for the definitions of the symbols used in this and the following equations. Figure 2 shows a sketch of the FFA-W3-241 airfoil when the turbine is at a rotor azimuth position of $\phi=90^{\circ}$ to illustrate how gravity affects the hinge moment. The hinge moment due to inertial loads is caused by the loads due to flap acceleration as well as centrifugal, Coriolis and gyroscopic loads. As a first approximation, we chose to neglect the effect of all inertial loads except those arising from the flap acceleration. The centrifugal forces act in the radial direction (i.e., parallel to the blade span) if we assume a straight, undeflected blade without rotor cone angle. Hence their contribution to the hinge moment is zero. This changes if we introduce a cone angle, a pre-bend and blade deflection. Since the resulting angle of the deflected blade to the rotational plane will be small, the contributions of the centrifugal loads to the hinge moment will also be small. A similar argument can be made for the Coriolis forces. The gyroscopic loads only arise if the turbine actively yaws.
In the simulations considered within this study the turbine did not yaw, so gyroscopic loads were not present and hence not included. We can therefore write

$$M_{\mathrm{H\text{-}I}}=I_{\mathrm{flap}}\cdot \ddot{\delta}, \tag{3}$$

where $I_{\mathrm{flap}}$ is the flap's moment of inertia. In this study, we approximate $m_{\mathrm{flap}}$ as the mass of the blade section with flap multiplied by the flap's percentage of the chord length. $I_{\mathrm{flap}}$ and $d_{\mathrm{flap}}$ are approximated assuming a triangular flap shape (isosceles triangle) with constant density.

3.1.2 Aerodynamic loads

To model the unsteady aerodynamic hinge moment, we use a model for thin airfoils in inviscid incompressible flow presented in Leishman (2006, pp. 492–497). It is based on Theodorsen's unsteady aerodynamic theory but recast into a state-space formulation for easier use in simulation and control applications. The reference gives the complete formulation for the unsteady lift, pitch and flap hinge moment coefficients of thin airfoils. In this study we are only interested in the hinge moment coefficients because the unsteady lift and pitch coefficients are modeled in QBlade via the LLFVW method and the ATEFlap model. It should be noted that the hinge moment model presented in the reference only models the unsteady aerodynamic hinge moment due to airfoil and flap motion for a constant wind speed. A complete model would also include the unsteady hinge moment contribution of varying gust fields and changes in the relative wind speed of the airfoil. Nonetheless, the contribution of both can be neglected for our application, as discussed in Kanda and Dowell (2005) and Leishman (2006, p. 457). We included two minor changes to the model presented in the aforementioned reference. The first change is the inclusion of an offset in the hinge moment coefficient for $\alpha=0^{\circ}$ to model the behavior of cambered airfoils.
The second change is the use of flap effectiveness coefficients (Leishman, 2006, p. 500) to model the viscous effects of the airfoil shape and flap deflection on the hinge moment. For completeness and for reference in the derivation of the estimator in Sect. 3.2, the aerodynamic hinge model is briefly presented here in its state-space formulation. The unsteady hinge moment coefficient $C_{\mathrm{h}}$ for a thin airfoil with a TE flap is given by

$$C_{\mathrm{h}}\left(\alpha,\dot{\alpha},\ddot{\alpha},\delta,\dot{\delta},\ddot{\delta},\dot{h},\ddot{h},V_{\mathrm{rel}}\right)=C_{\mathrm{h}}^{\mathrm{nc}}\left(\ddot{\alpha},\ddot{\delta},\ddot{h},V_{\mathrm{rel}}\right)+C_{\mathrm{h}}^{\mathrm{qs}}\left(\dot{\alpha},\delta,\dot{\delta},V_{\mathrm{rel}}\right)+C_{\mathrm{h}}^{\mathrm{c}}\left(\alpha,\dot{\alpha},\delta,\dot{\delta},\dot{h},V_{\mathrm{rel}}\right)+C_{\mathrm{h\text{-}0}}^{\mathrm{c}}, \tag{4}$$

where $C_{\mathrm{h}}^{\mathrm{nc}}$, $C_{\mathrm{h}}^{\mathrm{qs}}$ and $C_{\mathrm{h}}^{\mathrm{c}}$ are the non-circulatory, quasi-steady and circulatory components of the hinge moment coefficient. $\alpha$ represents the angle of attack, $h$ the section's displacement normal to the chord (see Fig. 2), $V_{\mathrm{rel}}$ the relative wind velocity and $C_{\mathrm{h\text{-}0}}^{\mathrm{c}}$ the constant offset of $C_{\mathrm{h}}$ at $\alpha=0^{\circ}$. If we assume that $V_{\mathrm{rel}}$ is constant, then we can write Eq.
(4) as a canonical state-space system that has the form

$$\dot{\boldsymbol{x}}=\mathbf{A}_{\mathrm{Hin}}\cdot \boldsymbol{x}+\mathbf{B}_{\mathrm{Hin}}\cdot \boldsymbol{u}, \tag{5}$$
$$y=\mathbf{C}_{\mathrm{Hin}}\cdot \boldsymbol{x}+\mathbf{D}_{\mathrm{Hin}}\cdot \boldsymbol{u}. \tag{6}$$

The input vector is given by $\boldsymbol{u}=\left(\alpha\;\;\dot{\alpha}\;\;\ddot{\alpha}\;\;\delta\;\;\dot{\delta}\;\;\ddot{\delta}\;\;\dot{h}\;\;\ddot{h}\right)^{T}$ and the output is $y=C_{\mathrm{h}}-C_{\mathrm{h\text{-}0}}^{\mathrm{c}}$. The internal states of the system are $\boldsymbol{x}=\left(z_{1}\;\;z_{2}\;\;z_{3}\;\;z_{4}\right)^{T}$ and describe the circulatory components of $C_{\mathrm{h}}$. The state-space matrices are given by the following equations.
$$\mathbf{A}_{\mathrm{Hin}}=\left[\begin{array}{cc}\mathbf{A}^{z}&\mathbf{0}_{2\times 2}\\ \mathbf{0}_{2\times 2}&\mathbf{A}^{z}\end{array}\right] \tag{7}$$

$$\mathbf{B}_{\mathrm{Hin}}=\left[\begin{array}{cccccccc}0&0&0&0&0&0&0&0\\ 1&\frac{b\left(1/2-a\right)}{V_{\mathrm{rel}}}&0&0&0&0&\frac{1}{V_{\mathrm{rel}}}&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&\frac{F_{10}}{\pi}&\frac{b\, F_{11}}{2\pi\, V_{\mathrm{rel}}}&0&0&0\end{array}\right] \tag{8}$$

$$\mathbf{C}_{\mathrm{Hin}}=\left[\begin{array}{cc}\epsilon_{\alpha}\cdot \mathbf{C}^{z}&\epsilon_{\delta}\cdot \mathbf{C}^{z}\end{array}\right] \tag{9}$$

$$\mathbf{D}_{\mathrm{Hin}}=\left[\begin{array}{cccccccc}D_{\alpha}&D_{\dot{\alpha}}&D_{\ddot{\alpha}}&D_{\delta}&D_{\dot{\delta}}&D_{\ddot{\delta}}&D_{\dot{h}}&D_{\ddot{h}}\end{array}\right] \tag{10}$$

The entries of the $\mathbf{A}^{z}$ and $\mathbf{C}^{z}$ matrices are

$$\mathbf{A}^{z}=\left[\begin{array}{cc}0&1\\ -b_{1}b_{2}\left(\frac{V_{\mathrm{rel}}}{b}\right)^{2}&-\left(b_{1}+b_{2}\right)\left(\frac{V_{\mathrm{rel}}}{b}\right)\end{array}\right], \tag{11}$$

$$\mathbf{C}^{z}=\left[\begin{array}{cc}\frac{F_{12}\, b_{1}b_{2}}{4}\left(\frac{V_{\mathrm{rel}}}{b}\right)^{2}&\frac{F_{12}\left(A_{1}b_{1}+A_{2}b_{2}\right)}{2}\left(\frac{V_{\mathrm{rel}}}{b}\right)\end{array}\right]. \tag{12}$$

The entries of the feedthrough matrix $\mathbf{D}_{\mathrm{Hin}}$ are given in Appendix A. In order to obtain the total hinge moment of a blade section with flap, we add the individual contributions:

$$M_{\mathrm{Hin}}=M_{\mathrm{H\text{-}g}}+M_{\mathrm{H\text{-}I}}+M_{\mathrm{H\text{-}A}}=M_{\mathrm{H\text{-}g}}+M_{\mathrm{H\text{-}I}}+\frac{1}{2}\,\rho\, V_{\mathrm{rel}}^{2}\, c^{2}\, C_{\mathrm{h}}\, S. \tag{13}$$

Figure 3 shows the behavior of the hinge model for variations of $\alpha$ and $\delta$. The variations of $\alpha$ are obtained by a pure pitching motion of the airfoil. Figure 3a shows $C_{\mathrm{h}}$ vs. $\alpha$ with a constant flap angle of $\delta=0^{\circ}$ for a reduced frequency of 0.01. Analogously, Fig. 3c shows $C_{\mathrm{h}}$ vs. $\delta$ with a constant angle of attack of $\alpha=0^{\circ}$ for a reduced frequency of 0.01. We can see that $C_{\mathrm{h}}$ is more sensitive to changes in $\delta$ than to changes in $\alpha$. These subfigures also include the steady values of $C_{\mathrm{h}}$ for the FFA-W3-241 airfoil for several values of $\alpha$ and $\delta$, calculated with the XFLR5 module in QBlade. After adjusting the values of $C_{\mathrm{h\text{-}0}}^{\mathrm{c}}$, $\epsilon_{\alpha}$ and $\epsilon_{\delta}$, our model captures the general behavior of the hinge moment coefficient of this airfoil. Figure 3a shows that our model matches the calculated values of the FFA-W3-241 airfoil for values of $\alpha$ between $-5$ and $5^{\circ}$. Outside that range, a nonlinear behavior due to flow separation emerges, which our model fails to capture. Regarding flap deflection, our model captures the behavior of the FFA-W3-241 airfoil even better (Fig. 3c).
It is only for values of $\delta\geq 10^{\circ}$ that our model deviates from the XFLR5 calculations. The reason for this is again the initial separation of the flow occurring at the trailing edge of the airfoil due to viscosity effects. Figure 3b and d show the modeled hinge moment $M_{\mathrm{Hin}}$ of a 3 m flapped blade section with a 2 m chord length and a mass $m_{\mathrm{flap}}$ of 22.6 kg. For these subfigures a relative wind speed $V_{\mathrm{rel}}$ of 80 m s$^{-1}$ is used, and it is assumed that the flapped blade section is at a rotor azimuth angle $\phi$ of $0^{\circ}$. This way we have $M_{\mathrm{H\text{-}g}}=0$ N m. In Fig. 3b, $M_{\mathrm{Hin}}$ is plotted for oscillations of $\alpha$ at several reduced frequencies $k$ with a constant flap angle $\delta=0^{\circ}$. Figure 3d shows $M_{\mathrm{Hin}}$ for oscillations of $\delta$ and a constant $\alpha=0^{\circ}$. In both cases, the unsteady aerodynamics are clearly seen for larger values of $k$. In Fig. 3d we can also see the increased effect of $M_{\mathrm{H\text{-}I}}$ on $M_{\mathrm{Hin}}$ for larger values of $k$. We note that – although not shown in Fig. 3 – the aerodynamic hinge model presented in this section is also capable of calculating the hinge moment due to the section's normal displacement $h$. This includes the implicit calculation of the effective angle of attack, which depends on $\alpha$, $\dot{\alpha}$, $\dot{h}$ and $V_{\mathrm{rel}}$ (Leishman, 2006, p. 496). A less compact and more readable form of the aerodynamic hinge moment model can be found in Leishman (2006, pp. 492–497).

3.2 Estimation of the effective angle of attack

We can use the model presented in the previous section to estimate the effective angle of attack of the blade section by means of a linear observer. The idea is to estimate the internal states of a system based on the measured inputs and outputs. Since we cannot measure the quantities $\alpha$, $\dot{\alpha}$ and $\ddot{\alpha}$ directly (this would require an inflow sensor), we have to estimate them using the measured $M_{\mathrm{Hin}}$ and other available sensors.
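To make the hinge moment model concrete, Eqs. (2), (3) and (13) can be sketched numerically. This is a minimal sketch: the triangular-flap approximations assume a uniform lamina hinged at its base (centroid at one third of the flap chord, inertia $m\,l^{2}/6$ about the hinge line), and the $C_{\mathrm{h}}$ value is an arbitrary placeholder rather than an output of the aerodynamic model:

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def hinge_moment_gravity(m_flap, d_flap, phi, delta, theta, beta):
    """Eq. (2): hinge moment due to gravity; all angles in radians."""
    return m_flap * G * d_flap * math.sin(phi) * math.sin(delta - (theta + beta))

def hinge_moment_inertia(i_flap, delta_ddot):
    """Eq. (3): hinge moment due to flap angular acceleration [rad/s^2]."""
    return i_flap * delta_ddot

def hinge_moment_total(m_g, m_i, rho, v_rel, chord, c_h, span):
    """Eq. (13): total hinge moment of a flapped blade section."""
    return m_g + m_i + 0.5 * rho * v_rel**2 * chord**2 * c_h * span

# Triangular-flap approximations (uniform lamina hinged at its base)
l_flap = 0.10 * 2.0                 # 10 % flap on a 2 m chord [m]
m_flap = 22.6                       # flap mass [kg]
d_flap = l_flap / 3.0               # centroid offset from the hinge [m]
i_flap = m_flap * l_flap**2 / 6.0   # inertia about the hinge [kg m^2]

m_g = hinge_moment_gravity(m_flap, d_flap, phi=math.pi / 2,
                           delta=math.radians(5.0), theta=0.0, beta=0.0)
m_i = hinge_moment_inertia(i_flap, delta_ddot=2.0)
m_tot = hinge_moment_total(m_g, m_i, rho=1.225, v_rel=80.0,
                           chord=2.0, c_h=-0.01, span=3.0)
```

At 80 m s$^{-1}$ the aerodynamic term dominates the gravity and inertia contributions by more than two orders of magnitude, which is consistent with the scales visible in Fig. 3.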
In this study, we follow an approach used in Kracht et al. (2015). The observer comprises three submodels: a nonlinear $M_{\mathrm{H\text{-}g}}$ model, a linear hinge model (that estimates $\hat{M}_{\mathrm{H\text{-}A}}$ and $\hat{M}_{\mathrm{H\text{-}I}}$) and a signal model. The latter is used to estimate the quantities we cannot measure. All three models are combined to produce an estimated hinge moment $\hat{M}_{\mathrm{Hin}}$. The estimate is then compared to the measured $M_{\mathrm{Hin}}$, and the error is fed back to the states of the linear hinge model and the signal model using an appropriately chosen gain $\mathbf{L}$. Figure 4 shows a graphical representation of the observer structure. The hinge moment model of the observer uses the same state-space matrices $\mathbf{A}_{\mathrm{Hin}}$, $\mathbf{B}_{\mathrm{Hin}}$, $\mathbf{C}_{\mathrm{Hin}}$ and $\mathbf{D}_{\mathrm{Hin}}$, with one exception: the inertial loads given by Eq. (3) are included in the feedthrough term $D_{\ddot{\delta}}$ of $\mathbf{D}_{\mathrm{Hin}}$, so that the linear observer can use the state-space matrices to estimate $\hat{M}_{\mathrm{H\text{-}A}}+\hat{M}_{\mathrm{H\text{-}I}}$ directly. The output of the state-space representation does not include $\hat{M}_{\mathrm{H\text{-}A}0}$, the constant value of the hinge moment for $\alpha=0^{\circ}$. In addition, $M_{\mathrm{H\text{-}g}}$ cannot be written in state-space form due to its nonlinear nature and therefore cannot be directly included in the observer model. However, it is highly deterministic and can be estimated using Eq. (2) with readily available sensors. The observer uses Eq. (2) to estimate $\hat{M}_{\mathrm{H\text{-}g}}$.
As for $\hat{M}_{\mathrm{H\text{-}A}0}$, the observer multiplies $V_{\mathrm{rel}}^{2}$ by a constant factor that includes $C_{\mathrm{h\text{-}0}}^{\mathrm{c}}$ and the dimensions of the blade section. Both quantities are subtracted from the measured $M_{\mathrm{Hin}}$ so that the error is calculated between $\left(M_{\mathrm{H\text{-}A}}+M_{\mathrm{H\text{-}I}}\right)$ and $\left(\hat{M}_{\mathrm{H\text{-}A}}+\hat{M}_{\mathrm{H\text{-}I}}\right)$.

For this study we use a simple signal model that assumes that $\alpha$ and its derivatives do not change with time. The choice of this signal model is based on the fact that the source of change of $\alpha$ – the incoming wind speed $V_{\mathrm{OP}}$ – is turbulent and therefore difficult to predict. The downside is that the estimated signal will always lag the actual signal. The signal model has the following state-space matrices:

$$\mathbf{A}_{\mathrm{Sig}}=\left[\begin{array}{ccc}0&1&0\\ 0&0&1\\ 0&0&0\end{array}\right], \tag{14}$$
$$\mathbf{C}_{\mathrm{Sig}}=\mathbf{I}_{3\times 3}. \tag{15}$$

We note that other signal models can be used, e.g., a bank of independent harmonic oscillators (Kracht et al., 2015), if the wind conditions for a particular site are known with a certain degree of accuracy. If we combine the signal and hinge moment models, we get a state-space system whose input vector is $\boldsymbol{u}_{\mathrm{S+H}}=\left(\delta\;\;\dot{\delta}\;\;\ddot{\delta}\;\;\dot{h}\;\;\ddot{h}\right)^{T}$ and whose output is $y_{\mathrm{S+H}}=\left(\hat{M}_{\mathrm{H\text{-}A}}+\hat{M}_{\mathrm{H\text{-}I}}\right)$. The state-space matrices for this combined system are as follows.
$$\mathbf{A}_{\mathrm{S+H}}=\left[\begin{array}{cc}\mathbf{A}_{\mathrm{Sig}}&\mathbf{0}_{3\times 4}\\ \mathbf{B}_{\mathrm{Hin}}^{\alpha}&\mathbf{A}_{\mathrm{Hin}}\end{array}\right] \tag{16}$$

$$\mathbf{B}_{\mathrm{S+H}}=\left[\begin{array}{c}\mathbf{0}_{3\times 5}\\ \mathbf{B}_{\mathrm{Hin}}^{\mathrm{Res}}\end{array}\right] \tag{17}$$

$$\mathbf{C}_{\mathrm{S+H}}=\left[\begin{array}{cccc}D_{\alpha}&D_{\dot{\alpha}}&D_{\ddot{\alpha}}&\mathbf{C}_{\mathrm{Hin}}\end{array}\right] \tag{18}$$

$$\mathbf{D}_{\mathrm{S+H}}=\left[\begin{array}{ccccc}D_{\delta}&D_{\dot{\delta}}&D_{\ddot{\delta}}&D_{\dot{h}}&D_{\ddot{h}}\end{array}\right] \tag{19}$$

In these equations, $\mathbf{B}_{\mathrm{Hin}}^{\alpha}$ corresponds to the sub-matrix of $\mathbf{B}_{\mathrm{Hin}}$ that relates to $\alpha$ and its derivatives. $\mathbf{B}_{\mathrm{Hin}}^{\mathrm{Res}}$ corresponds to the rest of $\mathbf{B}_{\mathrm{Hin}}$. The entries of these matrices are written out explicitly in Appendix A.
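The block structure of Eqs. (7), (11) and (16) can be sketched as follows. This is a minimal sketch: the lag constants $b_{1}$, $b_{2}$ use the common R. T. Jones two-term indicial values as stand-ins, and the coupling sub-matrix $\mathbf{B}_{\mathrm{Hin}}^{\alpha}$ is a zero placeholder for the Appendix A entries:

```python
import numpy as np

B1, B2 = 0.0455, 0.3   # two-term indicial lag constants (R. T. Jones values)

def a_z(v_rel, b):
    """Eq. (11): dynamics of one pair of circulatory lag states."""
    r = v_rel / b                        # velocity over semichord
    return np.array([[0.0, 1.0],
                     [-B1 * B2 * r**2, -(B1 + B2) * r]])

def a_hin(v_rel, b):
    """Eq. (7): block-diagonal hinge model matrix (four lag states)."""
    return np.block([[a_z(v_rel, b), np.zeros((2, 2))],
                     [np.zeros((2, 2)), a_z(v_rel, b)]])

def a_s_plus_h(A_sig, A_hin, B_hin_alpha):
    """Eq. (16): signal model stacked on top of the hinge model; the
    alpha-related columns of B_Hin couple the signal states in."""
    n_s, n_h = A_sig.shape[0], A_hin.shape[0]
    return np.block([[A_sig, np.zeros((n_s, n_h))],
                     [B_hin_alpha, A_hin]])

A_sig = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.0, 0.0, 0.0]])      # Eq. (14)
A_h = a_hin(v_rel=80.0, b=1.0)           # V_rel = 80 m/s, semichord 1 m
A_sh = a_s_plus_h(A_sig, A_h, np.zeros((4, 3)))   # placeholder coupling

poles = np.sort(np.linalg.eigvals(A_h).real)
```

Because the characteristic polynomial of Eq. (11) factors as $(s+b_{1}V_{\mathrm{rel}}/b)(s+b_{2}V_{\mathrm{rel}}/b)$, the lag poles land exactly at $-b_{1}V_{\mathrm{rel}}/b$ and $-b_{2}V_{\mathrm{rel}}/b$, each appearing twice in the block-diagonal system.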
The final state-space representation of our observer has the form

$$\mathbf{A}_{\mathrm{obs}}=\mathbf{A}_{\mathrm{S+H}}-\mathbf{L}\mathbf{C}_{\mathrm{S+H}}, \tag{20}$$
$$\mathbf{B}_{\mathrm{obs}}=\left[\begin{array}{cc}\left(\mathbf{B}_{\mathrm{S+H}}-\mathbf{L}\mathbf{D}_{\mathrm{S+H}}\right)&\mathbf{L}\end{array}\right], \tag{21}$$
$$\mathbf{C}_{\mathrm{obs}}=\left[\begin{array}{ccccccc}1&0&0&0&0&0&0\end{array}\right], \tag{22}$$
$$\mathbf{D}_{\mathrm{obs}}=\mathbf{0}_{1\times 6}. \tag{23}$$

Its input vector is given by $\boldsymbol{u}_{\mathrm{obs}}=\left(\delta\;\;\dot{\delta}\;\;\ddot{\delta}\;\;\dot{h}\;\;\ddot{h}\;\;\left(M_{\mathrm{Hin}}-\hat{M}_{\mathrm{H\text{-}g}}-\hat{M}_{\mathrm{H\text{-}A}0}\right)\right)^{T}$ and its output is $y_{\mathrm{obs}}=\hat{\alpha}$. The gain matrix $\mathbf{L}$ is obtained using a standard Kalman filter design. To tune our estimator, we use the covariance matrices for process noise $\mathbf{Q}_{\mathrm{L}}$ and for measurement noise $\mathbf{R}_{\mathrm{L}}$. The derivation of the linear estimator above is only valid for one fixed value of $V_{\mathrm{rel}}$, which is not the case for a wind turbine blade in normal operation. To extend the estimator so that it can be used for varying $V_{\mathrm{rel}}$, a linear parameter varying (LPV) model was built using the observer matrices for different values of $V_{\mathrm{rel}}$. This way, the nonlinear aerodynamic behavior of the flap hinge moment is parameterized with a set of linear observers, allowing a fast and optimal estimation for each value of $V_{\mathrm{rel}}$. If $V_{\mathrm{rel}}$ is used for the LPV model, it needs to be measured or estimated.
The former option is not available, since we chose not to use inflow sensors. This leaves us with the estimation option, presented in the next section.

3.3 Estimation of the relative velocity

To estimate $V_{\mathrm{rel}}$, we decompose the velocity into in-plane and out-of-plane components. The in-plane component of the rigid blade can be readily estimated using the rotor speed $\Omega$ and the distance $r_{\mathrm{flap}}$ between the hub center and the spanwise center of the flapped blade section:

$$V_{\mathrm{IP}}=\Omega\cdot r_{\mathrm{flap}}. \tag{24}$$

For the out-of-plane component, we use a simple estimator based on the anemometer wind speed $V_{\mathrm{Ane}}$, the rotor azimuth angle $\phi$ and the normal velocities $\dot{h}_{1}$–$\dot{h}_{3}$ of the flapped blade sections of the three blades that share the same spanwise distance from the rotor hub. The latter can be obtained by integrating the signals of accelerometers measuring the local accelerations of the sections of each blade:

$$V_{\mathrm{OP}}=V_{\mathrm{Ane}}+\Delta_{\mathrm{Vop}}\left(\dot{h}_{1},\dot{h}_{2},\dot{h}_{3},\phi\right). \tag{25}$$

$\Delta_{\mathrm{Vop}}\left(\dot{h}_{1},\dot{h}_{2},\dot{h}_{3},\phi\right)$ is a function that accounts for the vertical variation in $V_{\mathrm{OP}}$ dependent on the azimuthal position of the blade due to, e.g., wind shear. It comprises two parts: a vertical variation part and a tower shadow model. The vertical variation part uses the once-per-revolution Coleman transform of the velocities of the blade sections normal to the chord to get an equivalent amplitude of the vertically varying velocity normal to the chord of the rotor annulus at the spanwise position of the flapped blade section.
We then use a linear transform of this quantity to estimate the vertical variation in the out-of-plane wind velocity at different azimuthal positions of the blade. The two cases in the above equation are necessary because of the nonlinear effect of wind shear on the vertical wind velocity. Because we approximate the out-of-plane variation with a simple cosine function, we use two amplitudes to account for the variation in the lower and upper half planes of the rotor. The constants $c_{1}$–$c_{4}$ were obtained by fitting the values of $\Delta_{\mathrm{Vop\text{-}v}}$ and $v_{\mathrm{sin}}$ from constant-wind aeroelastic simulations of the turbine. In addition, $v_{\mathrm{sin}}$ is low-pass and notch filtered to remove the higher-frequency contributions to the normal velocity of the rotor annulus. The tower shadow part uses a simplified approximation of the tower shadow model from Bak et al. (2001). We simplified this model by assuming a constant $x$ distance (out of plane) between blade and tower, taken as the average distance between the blade section and the tower surface in constant-wind aeroelastic simulations; this one constant value is used for all wind speeds. The tower shadow variation is added to $\Delta_{\mathrm{Vop\text{-}v}}$ to get $\Delta_{\mathrm{Vop}}$. Once we have calculated $V_{\mathrm{IP}}$ and $V_{\mathrm{OP}}$, we use both quantities to obtain the relative velocity of the flapped blade section. In uniform aligned inflow, $V_{\mathrm{rel}}$ would simply be the resultant of these two quantities; this is almost never the case. To account for oblique inflow, we use the trigonometric correction presented in Damiani et al. (2018). For this correction, we need to know the shaft tilt angle $\tau$ and the current yaw-misalignment angle $\gamma$; the latter can be obtained via the hub-mounted anemometers. In Eq. (28), we also included the in-plane ($\dot{y}$) and out-of-plane ($\dot{x}$) velocities of the flapped blade section due to blade elasticity.
These can be obtained by rotating the chordwise and normal velocities of the section by the current blade pitch angle and the twist angle (see Fig. 2):

$$\dot{x}=\dot{h}\cdot \cos\left(180^{\circ}-\left(\beta+\theta\right)\right)+\dot{f}\cdot \sin\left(180^{\circ}-\left(\beta+\theta\right)\right), \tag{29}$$
$$\dot{y}=-\dot{h}\cdot \sin\left(180^{\circ}-\left(\beta+\theta\right)\right)+\dot{f}\cdot \cos\left(180^{\circ}-\left(\beta+\theta\right)\right). \tag{30}$$

We note that the expression for $V_{\mathrm{OP}}$ is a relatively simple estimation of the out-of-plane velocity. It is not exact, since there is significant variation in $V_{\mathrm{OP}}$ along the rotor disc due to nonlinear wind shear and turbulence. This error can be tolerated because $V_{\mathrm{IP}}^{2}$ dominates the right-hand side of Eq. (28). This is especially so for large rotor blades with high tip speed ratios, which is the current industry trend. An error in $V_{\mathrm{OP}}$ will therefore only have a small contribution to $V_{\mathrm{rel}}$ and hence to $\hat{\alpha}$. Several studies include more accurate estimations of the rotor effective wind speed, Simley and Pao (2016) and Bertelè et al. (2017) being two examples. Such methods could be used to further increase the accuracy of the current wind speed estimator. Additional improvements could also be achieved by including the effect of tower top displacements on the estimated local $V_{\mathrm{rel}}$. Combining the $V_{\mathrm{rel}}$ estimator with the estimator for $\alpha$ gives us the final LPV observer based on the flap hinge moment sensor. The derivation of the model and observer presented in this section is based on several assumptions.
The first is the 2D airfoil analysis assumption. It is therefore only correct for an infinitely thin blade station. Changes in the wind distribution and angle of attack along the span of the blade section will have an integrated effect on the hinge moment and can potentially deviate from the pure 2D analysis. In this study we assume that the aerodynamic conditions do not change significantly within a flapped blade section and can therefore be treated as effective quantities for the section. To gauge the error introduced by this assumption, we take the innermost 3m flapped section with the aforementioned sensors located at the center of the section span. The center of this section is situated at a blade radius of 65.5m. For turbine operation at rated rotor speed, the difference in local in-plane wind speed between the center of the blade section and the inner end of the section span is about 2% of the in-plane velocity at the section center. For the outer flapped sections, the relative error decreases because the reference wind speed increases. For the out-of-plane wind speed, we can estimate the error by looking at the coherence function of the spatial distribution of the turbulence. Assuming a Kaimal wind model, the spatial coherence of two points that are 1.5m apart is 0.75 for a frequency corresponding to 1P and an average out-of-plane velocity of 10ms^−1 (IEC 61400-1 Ed. 3, 2005). Ultimately, the validity of this assumption is reflected in the controller performance and has to be assessed experimentally or in simulations. Jones et al. (2018) also assume a correlation between the lift forces occurring at nearby blade stations and the lift force of the blade station where a pitot tube is installed. With this assumption they are able to enhance the performance of an IPC controller based on the information of one inflow sensor per blade. In Barlas et al.
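The quoted coherence value can be reproduced with the exponential coherence model of IEC 61400-1 Ed. 3 for the Kaimal spectrum (coherence scale parameter L_c = 8.1 Λ1, with Λ1 = 42 m for hub heights above 60 m). The 1P frequency is taken at the rated rotor speed of the DTU 10 MW RWT (9.6 rpm):

```python
import math

def iec_coherence(f, r, v_hub, hub_height=119.0):
    """Spatial coherence of the longitudinal wind component between two
    points separated by r metres (IEC 61400-1 Ed. 3, Kaimal model).
    hub_height only sets the turbulence scale parameter Lambda_1."""
    lam1 = 42.0 if hub_height >= 60.0 else 0.7 * hub_height
    lc = 8.1 * lam1  # coherence scale parameter L_c
    return math.exp(-12.0 * math.sqrt((f * r / v_hub) ** 2
                                      + (0.12 * r / lc) ** 2))

f_1p = 9.6 / 60.0  # 1P frequency at rated rotor speed, Hz
print(round(iec_coherence(f_1p, 1.5, 10.0), 2))  # prints 0.75
```

This matches the value of 0.75 stated in the text for a 1.5 m separation at a mean wind speed of 10 m s^−1.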
(2018), the authors successfully control a 2m flapped blade section using a feed-forward controller based on the local information of one pitot tube. These are encouraging results suggesting that the aforementioned assumption is valid. The observer presented in this section is based on a thin airfoil model and therefore does not fully represent the physical airfoils used in wind turbines. As we saw in Sect. 3.1.2, our model only partially captures the viscous effects of flow separation that greatly influence the behavior of C[h] as a function of α. We approximated this effect with the use of flap effectiveness coefficients. Nonetheless, the nonlinear behavior for larger absolute values of α was not captured. To improve our modeling, we could determine the complete state-space models of the flap hinge moment at different wind speeds experimentally. This could be done, for example, in wind tunnel tests with the help of system identification techniques (Bartholomay et al., 2018). In this section we present the performance of the estimator using a series of steady and turbulent wind fields in combination with the turbine model. To illustrate the performance of our estimator, we simulate one 3m flapped blade section at the rotor spanwise position of 74.3m. The flap angle is held constant at δ=0^∘ for all the simulations in this section.

4.1 Implementation of model and observer in the aeroelastic codes

Both the hinge moment model and the observer are included in the TUB Controller (Perez-Becker et al., 2021).
The inputs for the model ($\mathbf{u}={\left(\begin{array}{cccccccc}\alpha & \dot{\alpha} & \ddot{\alpha} & \delta & \dot{\delta} & \ddot{\delta} & \dot{h} & \ddot{h}\end{array}\right)}^{T}$ and V[rel]) and for the observer (${\mathbf{u}}_{\mathrm{S+H}}={\left(\begin{array}{ccccc}\delta & \dot{\delta} & \ddot{\delta} & \dot{h} & \ddot{h}\end{array}\right)}^{T}$ and the inputs for the velocity estimator) are calculated by the aeroelastic code and passed to the controller via an appropriate interface. For the FAST simulations, the interface was implemented within a Simulink environment. For the QBlade simulations, a special interface was developed between the software and the controller to pass the required inputs. As discussed in Sect. 3.4, the hinge moment model and observer are inherently 2D models that require the input at one specific location. We chose this location to be the center of each flapped blade section and assume that it is representative of the whole section. For the FAST simulations, the blade is discretized into 46 aerodynamic and 57 structural nodes, with nodes placed at the center of each flapped blade section. In QBlade, the blade is discretized into 25 aerodynamic and 20 structural nodes. The aerodynamic nodes are spaced sinusoidally along the blade span. The aerodynamic and structural information at the center of the flapped blade sections is obtained by linearly interpolating the information from neighboring nodes. The performance of our observer is evaluated by its ability to estimate the representative aerodynamic quantities at the center of each flapped blade section.
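The interpolation of nodal quantities to the section centers can be sketched as follows. The node positions and values below are made-up illustrations, not the actual QBlade discretization:

```python
import numpy as np

# Hypothetical node data: spanwise positions and a nodal quantity
# (e.g. the local angle of attack) at the blade's aerodynamic nodes.
node_radius = np.array([60.0, 63.0, 67.0, 71.0, 75.0, 79.0])  # m
node_alpha = np.array([6.1, 5.9, 5.6, 5.3, 5.0, 4.7])         # deg

def at_section_center(r_center):
    """Linearly interpolate a nodal quantity to the centre of a
    flapped blade section from the two neighboring nodes."""
    return float(np.interp(r_center, node_radius, node_alpha))

alpha_center = at_section_center(65.5)  # centre of the innermost section
```

`np.interp` picks the two nodes bracketing the section center and blends them linearly, which is the interpolation described in the text.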
This modeling approach could be improved by considering more aerodynamic blade elements so that several of them lie within a flapped blade section. These elements would be used to calculate the lift force acting on the whole section. Using a representative airfoil polar for the blade section, we could calculate the effective angle of attack at the center location of the section and use this as an input for our hinge model. It would also be this effective angle of attack that would be estimated by our observer.

4.2 Steady wind conditions

The parameters of the steady wind simulations are listed in Table 1 under the column “Steady calculations”. Figure 5 shows three key variables for our estimator in simulations with two different hub-height wind speeds: one around rated wind speed and one above rated, where the influence of the pitch angle increases. The simulations were performed with steady wind speeds and a constant nacelle yaw angle of −8^∘. We can see in Fig. 5a the behavior of M[Hin] as a function of φ. In both the 11 and the 17ms^−1 wind speed simulations, M[Hin] is mostly determined by M[H-A]. The larger azimuthal variation in M[Hin] in the 17ms^−1 simulations comes from the higher relative wind speed variations due to wind shear. Figure 5b and c show the estimated and real values of V[rel] and α as a function of φ. We can see that the observer is able to estimate the relative velocity and angle of attack at this blade section well. There is a slight delay of $\hat{\alpha}$ compared to α, but it is small on the timescale of one rotor revolution. For both wind speeds, there is a clear difference at an azimuth angle of φ=180^∘. The velocity drop of V[rel] is due to the tower shadow, which was only approximated in our velocity estimator. This tower shadow effect on α is also not captured correctly by our estimator (Fig. 5c). We can also see in these figures the variation of V[rel] and α due to wind shear.
Our approximation captures this variation well, but there are still some small differences between the real and the estimated quantities. Figure 6 shows the results of steady wind calculations for all wind speeds relevant to power production and for three different yaw angles of the nacelle. The markers represent the mean of the steady simulations, and the error bars represent the extrema of the simulations. Figure 6a, c and e show the estimated and real relative velocities at the blade section. Our velocity estimator is able to capture the variation across wind speeds and yaw angles, keeping the relative error of the mean velocities below 0.4%. Figure 6b, d and f show the corresponding estimated and real values of α at the blade section. We see that the observer manages to estimate the mean angle of attack with an error below 0.2^∘ for all simulations. The differences in the extrema of α are more marked, especially for higher values of V[hub]. The differences in the minima can be explained by the approximation of the tower shadow, which fails to capture the dip in α at azimuth values of φ=180^∘ (Fig. 5c). If we look at the range of V[hub] between 8 and 14ms^−1, we can see that the differences between real and estimated maxima of α are small. This is important if we want to use the observer as part of a load alleviation controller targeting extreme loads, since this is the wind speed range where the turbine sees the highest out-of-plane loading in power production (Barlas et al., 2016b).

4.3 Turbulent wind conditions

We also analyzed the performance of our estimator in fully turbulent wind load calculations. These situations are especially challenging for the observer because our relative velocity estimator does not model turbulent out-of-plane wind speeds. The setup for the turbulent calculations is shown in Table 1 in the column “Turbulent calculations”.
Because the observer lacks the turbulent variation in the out-of-plane wind speed in the relative velocity estimator, the estimated $\hat{\alpha}$ will have a significantly larger variation than the actual α. This is because the observer will attribute the changes in M[Hin] to the unmeasured α, since it can only assume a correct parametrization coming from $\hat{V}_{\mathrm{rel}}$. In order to limit the variations in $\hat{\alpha}$, an additional first-order low-pass filter was added to the output of the $\hat{\alpha}$ observer. Figure 7 shows time series that illustrate the performance of the observer in turbulent wind conditions. The simulations are for mean V[hub] values of 7 and 15ms^−1, representing scenarios below and above rated wind conditions. The simulations were done using the IEC NTM turbulent wind model. We can see in Fig. 7 that the estimator is able to capture the low-frequency variation in α and V[rel] well. The differences arise from the effect of the turbulent wind speed. Depending on the wind speed, the magnitude of the differences between $\hat{V}_{\mathrm{rel}}$ and V[rel] can be small or large. Figure 7b shows the example of the 7ms^−1 mean V[hub] simulations, where the differences between $\hat{V}_{\mathrm{rel}}$ and V[rel] are small. The differences can also be more significant, especially for larger wind speeds. In the 15ms^−1 example (Fig. 7d), we can see large differences in the 1P variation in V[rel] in the simulation time between 550 and 600s. These variations arise from the repeated passing of the rotating blade through large-scale, slowly varying turbulent wind gusts and hence are not accounted for in $\hat{V}_{\mathrm{rel}}$.
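A discrete first-order low-pass filter of the kind added to the observer output can be sketched as follows. The cut-off frequency and sample time below are illustrative, since the paper does not state the filter parameters here:

```python
import math

class FirstOrderLowPass:
    """Discrete first-order low-pass filter, of the kind added to the
    output of the angle-of-attack observer to suppress turbulence-driven
    variations. Parameters are illustrative assumptions."""

    def __init__(self, f_cut_hz, dt):
        # Exact discretization of dy/dt = (u - y) / tau
        tau = 1.0 / (2.0 * math.pi * f_cut_hz)
        self.a = math.exp(-dt / tau)
        self.y = None

    def update(self, u):
        if self.y is None:
            self.y = u  # initialize on the first sample
        self.y = self.a * self.y + (1.0 - self.a) * u
        return self.y
```

As the text notes, the price of this smoothing is a time lag in the filtered signal, which grows as the cut-off frequency is reduced.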
Due to the low-pass filtering of $\hat{\alpha}$, the abovementioned differences in V[rel] do not lead to large oscillations of the former, and we can see that $\hat{\alpha}$ follows α well in the low-frequency range. The filtering introduces a time lag which could potentially be detrimental for a controller. The effects of the filter would certainly be more marked for an extreme load controller than for a fatigue load controller. Since we are interested in the extreme load reduction potential of flaps, we will be analyzing the more critical situation in this study. The description and performance of such an extreme load controller are presented in the next section. The estimated $\hat{V}_{\mathrm{rel}}$ and $\hat{\alpha}$ from the observer can also be used for a pitch-based fatigue load controller – such as the one presented by Jones et al. (2018) – thereby indirectly increasing the use of the flap system beyond extreme load reduction.

5 Extreme load reduction under turbulent conditions

In this section we integrate the estimator presented in Sect. 3 in a simple flap extreme load controller and use it to reduce extreme blade loads and deflections.

5.1 Extreme load controller

The controller used in this study is based on the extreme load controller presented by Barlas et al. (2016b). It has an attractively simple architecture and was shown to work effectively in the mentioned study. In the original controller, all three out-of-plane BRBM signals ${M}_{Y\text{-}i}^{\mathrm{BR}}$ are compared to a maximum threshold M[thr]. If

$${M}_{Y\text{-}i}^{\mathrm{BR}}\ge {M}_{\mathrm{thr}}, \qquad (31)$$

then all flaps of all blades are deployed to a target angle. We expanded this controller by including additional sensors and criteria to trigger flap action. The controller logic is shown in Fig. 8.
The subscripts of the figure correspond to the jth flap of the ith blade. Although there are six flaps per blade, the controller treats them all as a single unit. The input of the individual flaps is averaged per blade (denoted by the overbar $\bar{(\cdot)}$). Combinations of input sensors are compared against thresholds. If they pass the thresholds, then all flaps of all blades are set to a target value. The first condition for triggering the flap action is Eq. (31). This condition is identical to the one proposed by Barlas et al. (2016b). Here, a careful selection of M[thr] has to be made in order not to influence the turbine in normal power production. The other three conditions are

$$\left({M}_{Y\text{-}i}^{\mathrm{BR}}\ge {M}_{\mathrm{tr1}}\ \text{and}\ {\bar{\dot{x}}}_{i}\ge {\dot{x}}_{\mathrm{tr1}}\right)\ \text{or}\ \left({M}_{Y\text{-}i}^{\mathrm{BR}}\ge {M}_{\mathrm{tr2}}\ \text{and}\ {\bar{\dot{x}}}_{i}\ge {\dot{x}}_{\mathrm{tr2}}\right), \qquad (32)$$
$$\left({M}_{Y\text{-}i}^{\mathrm{BR}}\ge {M}_{\mathrm{tr1}}\ \text{and}\ {\bar{\hat{\alpha}}}_{i}\ge {\alpha}_{\mathrm{tr1}}\ \text{and}\ {\bar{\hat{V}}}_{\mathrm{rel}\text{-}i}\ge {V}_{\mathrm{tr1}}\right), \qquad (33)$$
$$\left({M}_{Y\text{-}i}^{\mathrm{BR}}\ge {M}_{\mathrm{tr1}}\ \text{and}\ {\bar{M}}_{\mathrm{Hin}\text{-}i}\le {M}_{\mathrm{Hin\text{-}tr1}}\ \text{and}\ {\bar{\hat{V}}}_{\mathrm{rel}\text{-}i}\ge {V}_{\mathrm{tr1}}\right). \qquad (34)$$

Here, M[tr-i], ${\dot{x}}_{\mathrm{tr}\text{-}i}$, α[tr1], V[tr1] and M[Hin-tr1] are threshold values for the respective input sensors. The idea of these conditions is to lower the threshold for ${M}_{Y\text{-}i}^{\mathrm{BR}}$ at which the flaps deploy in cases where extreme bending moments are expected. It was seen in preliminary simulations that the flap deployment had a delayed effect on lowering the blade root bending moment: deploying the flaps close to a load peak would only have a small load reduction effect on ${M}_{Y\text{-}i}^{\mathrm{BR}}$. Yet the condition of Eq. (31) is necessary in order to limit the influence of the flap controller on the power output of the turbine. By using the flap hinge sensor in the outer blade span (Eqs. 33 and 34), the controller has additional aerodynamic information about the likelihood of a strong wind gust and is able to deploy the flaps at a lower threshold value of ${M}_{Y\text{-}i}^{\mathrm{BR}}$ (M[tr1]<M[thr]), thus giving the flaps more time to effectively mitigate the blade root loads. We mentioned in Sect. 4.3 that our observer includes a low-pass filter for $\hat{\alpha}$ which causes a delay in the signal. To enable our controller to sense sudden increases in α, we also included a condition based on ${\bar{M}}_{\mathrm{Hin}\text{-}i}$ directly. Assuming that the flap deployment angle will mostly be ${\delta}_{i,j}=0$^∘, the value of ${\bar{M}}_{\mathrm{Hin}\text{-}i}$ will be approximately proportional to ${\bar{\alpha}}_{i}$ for small reduced frequencies (Fig. 3b).
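The on–off trigger logic of Eqs. (31)–(34) for a single blade can be sketched as follows. The threshold names and example values are hypothetical; only the structure of the conditions follows the text:

```python
def flaps_triggered(m_br, x_dot_bar, alpha_hat_bar, v_rel_bar,
                    m_hin_bar, thr):
    """Evaluate the deployment conditions (Eqs. 31-34) for one blade.
    `thr` is a dict of threshold values (names illustrative)."""
    cond31 = m_br >= thr["M_thr"]
    cond32 = ((m_br >= thr["M_tr1"] and x_dot_bar >= thr["xd_tr1"]) or
              (m_br >= thr["M_tr2"] and x_dot_bar >= thr["xd_tr2"]))
    cond33 = (m_br >= thr["M_tr1"] and alpha_hat_bar >= thr["a_tr1"]
              and v_rel_bar >= thr["V_tr1"])
    # Note the <= for the hinge moment, as in Eq. (34) of the text.
    cond34 = (m_br >= thr["M_tr1"] and m_hin_bar <= thr["MH_tr1"]
              and v_rel_bar >= thr["V_tr1"])
    return cond31 or cond32 or cond33 or cond34

# Illustrative threshold values (not from the paper).
thr_example = {"M_thr": 100.0, "M_tr1": 80.0, "M_tr2": 60.0,
               "xd_tr1": 2.0, "xd_tr2": 4.0, "a_tr1": 10.0,
               "V_tr1": 70.0, "MH_tr1": -5.0}
```

If `flaps_triggered` is true for any blade, all flaps of all blades are commanded to the target angle, as described in the text.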
Preliminary simulations also revealed that blade deflection dynamics are critical for predicting extreme events of ${M}_{Y\text{-}i}^{\mathrm{BR}}$ and were not always strongly correlated to the aerodynamic information. The controller includes Eq. (32) to account for extreme loads due to large values of $\bar{\dot{x}}$. We include two conditions to also account for very high velocities that occur at lower values of ${M}_{Y\text{-}i}^{\mathrm{BR}}$ (M[tr2]<M[tr1] and ${\dot{x}}_{\mathrm{tr2}}>{\dot{x}}_{\mathrm{tr1}}$). If any of the above conditions are met for any blade, then all flaps of all blades are deployed to the target angle ${\delta}_{\mathrm{tar}}=-14.5$^∘. This value is slightly smaller in magnitude than the maximum value modeled in the polars; this was done to still be able to model the aerodynamic effect of a flap angle overshoot when the flaps are deployed. Additionally, the flaps are kept deployed for a given time τ[dep] before they are returned to their original position. τ[dep] was chosen to be about a third of one rotor period. This way, if a localized gust triggers the flap controller of one blade, the following blade will already have deployed flaps when encountering the local gust. Finally, the controller returns the flaps to their 0^∘ position at a reduced rate compared to the maximum flap rate. This ensures a smooth return to normal conditions and avoids excessive blade oscillations caused by frequent on–off transitions. The conditions explained so far are valid for cases where the strategy is used to mitigate the maxima of ${M}_{Y\text{-}i}^{\mathrm{BR}}$, which is the case for the considered load cases (Table 1). Equivalent conditions can be defined for the controller to mitigate the minima of ${M}_{Y\text{-}i}^{\mathrm{BR}}$. We note that this controller treats all flaps in one blade as a single unit.
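The hold-and-release behavior (deploy to δ[tar], hold for τ[dep], then ramp back at a reduced rate) can be sketched as follows. The hold time, rate and sample time below are illustrative values, not the paper's:

```python
class FlapDeploymentLogic:
    """Hold-and-release flap deployment: on a trigger, command the
    target flap angle, hold it for tau_dep seconds, then ramp back to
    0 deg at a reduced rate. Numerical defaults are illustrative."""

    def __init__(self, delta_target=-14.5, tau_dep=2.0,
                 return_rate=10.0, dt=0.01):
        self.delta_target = delta_target  # deg
        self.tau_dep = tau_dep            # hold time, s
        self.return_rate = return_rate    # deg/s on the way back
        self.dt = dt
        self.hold_timer = 0.0
        self.delta_cmd = 0.0

    def step(self, triggered):
        if triggered:
            self.hold_timer = self.tau_dep  # (re)start the hold period
        if self.hold_timer > 0.0:
            self.hold_timer -= self.dt
            self.delta_cmd = self.delta_target
        elif self.delta_cmd != 0.0:
            # Rate-limited return to 0 deg for a smooth transition
            step = self.return_rate * self.dt
            if abs(self.delta_cmd) <= step:
                self.delta_cmd = 0.0
            else:
                self.delta_cmd += step if self.delta_cmd < 0 else -step
        return self.delta_cmd
```

Re-triggering during the hold or return phase simply restarts the timer, so repeated gust detections keep the flaps deployed.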
Why should we make the additional effort to include multiple flaps per blade? One reason for having several flaps per blade instead of one is that a system of multiple independent flaps will still be able to reduce the loads if one flap ceases to function. This redundancy makes the system more appropriate for extreme load reduction. A second reason is that the controller will have sensory input from multiple sources. Averaging the signals of the six flaps smooths out possible local variations that could arise from using a single set of sensors. Finally, having multiple sensors and actuators per blade opens up the possibility of using local distributed control strategies in future studies. The metrics considered in this study are max(${M}_{Y}^{\mathrm{BR}}$), the extreme out-of-plane bending moment at the blade root; max(${M}_{XY}^{\mathrm{BR}}$), the extreme resulting bending moment at the blade root; and max(${D}_{X}^{\mathrm{T2T}}$), the maximum blade out-of-plane deflection in the region defined by 5^∘ before the blade's closest azimuthal position to the tower and 5^∘ after this position. For a blade whose rotor azimuth angle is defined as 0^∘ when the blade is pointing vertically upward, this region lies between the azimuth angles of 175 and 185^∘. This metric gives us an estimate of the minimum blade tip-to-tower surface distance. Because we are dealing with stochastic wind simulations, we chose to follow the averaging procedure for extreme values described in IEC 61400-1 Ed. 3 (2005). The maximum value of ${M}_{Y}^{\mathrm{BR}}$, ${M}_{XY}^{\mathrm{BR}}$ and ${D}_{X}^{\mathrm{T2T}}$ (considering all three blades) is recorded for each simulation. The extreme value is then taken as the average of the six highest maxima of all simulations. For the case of ${M}_{XY}^{\mathrm{BR}}$, we performed this averaging analysis for 72 different angular bins.
For each time step, the direction of the load vector was calculated, and the maximum of ${M}_{XY}^{\mathrm{BR}}$ was determined for each angular bin in each simulation. This gives a more detailed picture of the effect of the flap controller on the extreme resulting bending moments in different load directions. No safety factors were applied since all the considered load cases share the same safety factor. A complete analysis should include other key turbine sensors to measure the overall effect of the controller on the turbine. We chose not to include these sensors in order to limit the scope of this study.

5.3 Simulation setup

In this study we focused on the load case group that caused the highest ${M}_{XY}^{\mathrm{BR}}$ design loads of our selected turbine (Bak et al., 2013). This is DLC group 1.3, which uses the ETM wind model. We considered a subset of wind speed bins of this group around the rated wind speed (see Table 1 in the column “Turbulent calculations”). This is also the region of maximum rotor thrust and thus of the expected maximum values of ${M}_{XY}^{\mathrm{BR}}$ and ${D}_{X}^{\mathrm{T2T}}$. We considered wind speeds in a range of mean V[hub] between 9 and 17ms^−1 in 2ms^−1 steps. Six different simulations were done for each wind speed bin, giving us a total of 30 simulations for each controller configuration. We also included the results of simulations from DLC group 1.1 without the flap controller. This way we have a reference of the load increase due to the increased turbulence of the DLC 1.3 group. Although the number of simulations is rather low for determining the overall extreme values of ${M}_{XY}^{\mathrm{BR}}$ and ${D}_{X}^{\mathrm{T2T}}$, our selection will give us a good estimate of their maxima.
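The extreme-value extraction described above (the average of the six highest per-simulation maxima, and the per-direction maxima over 72 angular bins) can be sketched as follows. The angle convention, with pure positive M[Y] mapping to 90°, is our assumption, chosen to be consistent with the directional results discussed later in the text:

```python
import numpy as np

def iec_extreme(per_sim_maxima, n_avg=6):
    """Extreme value taken as the average of the six highest
    per-simulation maxima (averaging procedure of the IEC standard
    as used in the text)."""
    m = np.sort(np.asarray(per_sim_maxima, float))[::-1]
    return float(np.mean(m[:n_avg]))

def directional_maxima(m_x, m_y, n_bins=72):
    """Maximum resulting blade-root bending moment per load direction.
    Convention (assumed): pure positive M_Y maps to 90 deg."""
    m_x, m_y = np.asarray(m_x, float), np.asarray(m_y, float)
    m_xy = np.hypot(m_x, m_y)                     # resulting moment
    angle = np.degrees(np.arctan2(m_y, m_x)) % 360.0
    bins = (angle // (360.0 / n_bins)).astype(int) % n_bins
    out = np.zeros(n_bins)
    np.maximum.at(out, bins, m_xy)                # per-bin running max
    return out
```

`np.maximum.at` accumulates the running maximum per bin even when several time steps fall into the same bin, which a plain fancy-indexing assignment would not do.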
By using QBlade – which has a lifting line free vortex wake (LLFVW) aerodynamic model and a multi-body-based structural model – we ensure that the results of our load calculations are more accurate than with codes that use the more common BEM aerodynamic model (Perez-Becker et al., 2020). In all simulations we included an additional 100s of simulation time to allow the wake to develop; this time was discarded in our analysis. Figure 9 shows an overview of the maxima of ${M}_{Y}^{\mathrm{BR}}$ for the individual simulations of the different DLC groups and controller configurations. As expected, the highest values of ${M}_{Y}^{\mathrm{BR}}$ for the DLC 1.1 power production group occur for wind speed bins around rated wind, although some high extreme values are also recorded for the wind speed bin of V[hub]=15ms^−1 because of the turbulent wind conditions. The increased turbulence of the DLC 1.3 load cases leads to significantly higher values of max(${M}_{Y}^{\mathrm{BR}}$) compared to the values of the DLC 1.1 group for all the considered wind speeds. Including the TE flap controller in the DLC 1.3 simulations visibly lowers the maxima of ${M}_{Y}^{\mathrm{BR}}$. The extremes for wind speed bins between 11 and 15ms^−1 are comparable to the extremes of the DLC 1.1 group without flap actuation. These wind speed bins also include the highest values of max(${M}_{Y}^{\mathrm{BR}}$). A numerical comparison is shown in Fig. 10. Here we can see the normalized values of max(${M}_{Y}^{\mathrm{BR}}$), max(${D}_{X}^{\mathrm{T2T}}$) and max(${M}_{XY}^{\mathrm{BR}}$) for all simulations with and without active flaps (AFC in the figure). As explained in Sect. 5.2, these values were obtained by using the averaging procedure for extrema according to the IEC standard.
We can see that by including active trailing edge flaps we can reduce the extreme out-of-plane moment ${M}_{Y}^{\mathrm{BR}}$ by 8%, the extreme resulting bending moment by 7.6% and the critical deflection of the blade tip ${D}_{X}^{\mathrm{T2T}}$ by 7.1%. The resulting values are comparable to the ones obtained from the extrema of the DLC 1.1 group. It is also interesting to see what effect an active flap system has on the different load directions of max(${M}_{XY}^{\mathrm{BR}}$) around the blade root. Figure 11 shows the averaged value of max(${M}_{XY}^{\mathrm{BR}}$) for different load directions. We can see that the largest values of max(${M}_{XY}^{\mathrm{BR}}$) occur for angular bins between 70 and 100^∘. These angles lie around the direction of positive ${M}_{Y}^{\mathrm{BR}}$ (90^∘). Because we are only considering the DLC 1.1 and 1.3 groups, we only have a good estimation of the extreme resulting bending moments due to the maximum thrust of the wind turbine, which happens to correspond to this region. When we look at the effect of the active flap system, we can see that the reduction of max(${M}_{XY}^{\mathrm{BR}}$) occurs in this angular range. In the rest of the angular bins, the maximum resulting bending moments are practically identical. This can be understood if we recall that the controller input only included strain gauge information in the out-of-plane direction. Extreme resulting bending moments that arise from the combination of ${M}_{Y}^{\mathrm{BR}}$ and ${M}_{X}^{\mathrm{BR}}$ cannot be captured by the controller sensors. In addition, the flaps' control authority is mainly on the lift, so flap actuation will have little effect on the variation in ${M}_{X}^{\mathrm{BR}}$. Again, the reduction due to flap activity seen in Fig. 11 brings the values of ${M}_{XY}^{\mathrm{BR}}$ close to the ones obtained from considering the loads of DLC 1.1 only.
The results obtained in the previous section show that active TE flaps are able to substantially reduce extreme loads and critical deflections of the blade. Given the constraint that the control strategy should not interfere with normal power production (DLC 1.1), a reduction of 7.6% in max(${M}_{XY}^{\mathrm{BR}}$) is significant because it almost eliminates the increased extrema in DLC 1.3 due to the extreme turbulence (Fig. 10). These results are statistical and should be interpreted as the average performance of the flap system. To better understand the potential and limitations of the proposed system, it is useful to look at some exemplary time series. Figure 12 shows a selection of the time series for two simulations from the ${\bar{V}}_{\mathrm{hub}}=11$ms^−1 wind speed bin, where several of the maxima occurred. Each column corresponds to one simulation. The first column shows a scenario where the average wind speed (not shown) is increasing and Ω is rising to values around rated rotor speed (Fig. 12d). The low value of β (Fig. 12c) leads to high values of rotor thrust, high values of ${M}_{Y}^{\mathrm{BR}}$ (Fig. 12a) and the activation of the flaps (Fig. 12b). We can see that the flaps are being deployed even if the values of ${M}_{Y}^{\mathrm{BR}}$ do not reach an apparent peak. This is because Fig. 12a only shows the value of ${M}_{Y}^{\mathrm{BR}}$ for one blade, and all flaps are activated as soon as the deployment criteria are met by any blade. Let us now consider the situation around the simulation time of 520s. Here the blade passes through a local wind gust, which significantly increases the value of ${M}_{Y}^{\mathrm{BR}}$ in the simulation without flaps. This also accelerates the rotor and activates the pitch system. This does not happen in the simulation with active flaps.
Due to a previous activation of the flap system, the blade enters the wind gust with deployed flaps, thereby creating less lift and thus reducing the load peak by about 16%. This is one example showing the large potential that active flap systems have in reducing the extreme loads of wind turbine blades. The second column shows a situation where the controller does not perform as desired. At around 685s of simulation time, there is a sudden peak in ${M}_{Y}^{\mathrm{BR}}$ for the simulation with active flaps (Fig. 12e). Shortly before that event, the flaps had just arrived at their 0^∘ position, thus allowing the blade to generate normal lift (Fig. 12f). The flap controller reacts quickly, but the reaction is not quick enough to mitigate the high loading. Because we are trying to reduce the extreme loads, such events – although rare – are the ones that lead to the recorded maxima. In the case of this particular simulation, the extreme load reduction of the active flap system is only about 1.5%. This example also illustrates well the necessity of having fast reactions from the flap system to sudden changes. This can only be obtained by including local sensor information of the flap system – such as the flap hinge moment and acceleration signals – as part of the controller inputs. One challenge in this particular study is that the difference in extrema between the DLC 1.1 and 1.3 groups is relatively small. Given the fact that the flap system needs a certain amount of time to have an effect on ${M}_{Y}^{\mathrm{BR}}$, we require the values of the different thresholds to be low. On the other hand, they should not be too low, in order to avoid influencing the normal power production of the turbine. The combination of these two requirements does not leave much room for threshold selection. Slightly changing the threshold values improved the performance of the flap system in certain scenarios but lowered it in others, giving similar results overall.
If the difference in extreme conditions between the DLC 1.1 group and other DLC groups is higher (as was reported in Barlas et al., 2016b), this issue becomes less critical. Another observed issue with the proposed strategy is the unwanted mutual influence between the pitch and the flap controllers. We can see an example of this in Fig. 12c. The deployment of the flaps lowers the lift force but also the torque produced by the rotor and thus the rotor acceleration. This results in the pitch controller keeping the pitch angle at 0^∘ for longer compared to a simulation without active flaps. While the overall effect on the rotor speed and hence the generator power is small, the effects on ${M}_{Y}^{\mathrm{BR}}$ are more substantial. Lower values of β increase the values of α along the blade, leading to higher lift values and higher values of ${M}_{Y}^{\mathrm{BR}}$. In some simulations, values of max(${M}_{Y}^{\mathrm{BR}}$) recorded with fully deployed flaps were comparable to values of max(${M}_{Y}^{\mathrm{BR}}$) from the same simulations without active flaps. The difference lay in the respective values of β in each simulation: the influence of the flap controller led to lower values of β compared to the simulation without active flaps, indirectly reducing its effectiveness. A possible solution would be to decrease the value of τ[dep] or increase the flap rate when returning to the 0^∘ position, thus lowering the time during which the flaps influence lift production. However, because of the constraints in threshold selection mentioned before, a flap deployment cycle that is too fast could lead to unstable behavior, where the flap system amplifies the blade oscillation. Another possible solution could be to adapt the controller so that each blade uses its flap system independently. While this strategy reduces the interference between the flap and the pitch controllers, it loses the information about the inflow conditions of the preceding blade.
This leads to more false negative scenarios such as the one described in Fig. 12e–h, again lowering the effectiveness of the strategy. All of the limitations discussed above arise mainly from the chosen controller strategy. While being simple, robust and able to reduce extreme loads and deflections significantly, the proposed architecture shows its limits when used in such a challenging scenario. Other controller strategy types, such as model-based or adaptive-data-driven controllers that include combined pitch and flap action, are possible candidates that could exploit the full potential of active flaps while overcoming the limitations observed in this study.

In this paper we explored the potential of active trailing edge flaps to reduce extreme loads and critical deflections of the modified DTU 10MW RWT blade with flaps. We considered the flap hinge moment as a robust and available sensor that can deliver valuable local information about the inflow and enable the flap system to have more time to react to sudden extreme conditions. In order to use the flap hinge moment as an input sensor for a controller strategy, we adapted an existing unsteady hinge moment model for thin airfoils from the literature to use it in aero-servo-elastic simulations in the time domain. Based on this model, we developed an observer that estimates the effective local angle of attack and relative wind velocity of a blade section from local sensors. The latter include the flap hinge moment and an accelerometer mounted on the blade section. We evaluated the performance of the observer in aeroelastic simulations with steady and turbulent wind conditions. For steady wind conditions, the error between the estimated and real values of the mean V[rel] was below 0.4% for all wind speeds between 4 and 24ms^−1. The error between the estimated and real value of the mean α lay below 0.2^∘. We also tested the observer in more challenging turbulent wind speed conditions.
Although our observer was lacking information about the incoming turbulent wind, it was able to estimate the low-frequency content of V[rel] and α fairly well. An exception was seen to be the local turbulent gust slicing of the blade for higher wind speeds. This leads to increased 1P variations in V[rel] not captured by the observer. This observer was included in a simple flap controller strategy to mitigate extreme blade loads and critical blade deflections. It is based on a series of on-off criteria applied to several input sensors. The sensors included integrated load values – such as ${M}_{Y}^{\mathrm{BR}}$ – and local information of the individual blade sections – such as M[Hin], $\dot{x}$, $\hat{\alpha}$ and $\hat{V}_{\mathrm{rel}}$. We tested the performance of this strategy in aero-servo-elastic load calculations according to the DLC 1.3 group from the IEC standard. This group features the extreme turbulent wind model and is responsible for the maxima of ${M}_{Y}^{\mathrm{BR}}$ and ${D}_{X}^{\mathrm{T2T}}$ of our considered blade. The simulations were performed using the LLFVW aerodynamic model, which has been shown to calculate aerodynamic loads more accurately than the conventional BEM aerodynamic models. The proposed flap controller was able to reduce the maxima of ${M}_{Y}^{\mathrm{BR}}$ and ${M}_{XY}^{\mathrm{BR}}$ by 8% and 7.6%, respectively. The controller was also able to reduce the critical blade deflection – i.e., the blade tip deflection in front of the tower – by 7.1%. Looking at the maxima of ${M}_{XY}^{\mathrm{BR}}$ for different load directions, we found that the flap controller was able to reduce max(${M}_{XY}^{\mathrm{BR}}$) for angular bins between 70 and 100^∘, bringing them down to values that are comparable to the maxima from normal power production load cases.
These directions correspond to directions around positive ${M}_{Y}^{\mathrm{BR}}$, where the flap control authority is also highest for a blade with β=0^∘. A more detailed look revealed that active flap systems in general have the potential to reduce the extreme loads even further. Yet a combination of challenging conditions – e.g., constraints in the parameter selection space in order to reduce extreme loads without interfering with normal power production – and a simple controller strategy were found to be the main limitations of the proposed strategy.

The results of this paper show that active TE flaps have a large potential to reduce design-driving extreme loads and critical deflections of wind turbine blades. They can therefore help create more competitive turbine designs that will reduce the cost of energy even further. A critical aspect was seen to be the reaction time of the active flap system, which can be greatly improved if local sensors – such as the flap hinge moment – are used as input for the control strategy. More work needs to be done in order to gain further insight into this topic and better quantify the aforementioned potential of flaps. The model used for the hinge moment calculation could be improved to increase its accuracy. Further refinements could include a more detailed state-space representation of the aerodynamic loads, other inertial loads such as centrifugal and gyroscopic loads, and friction. In this study we used identical systems to model the flap hinge and as input for our observer. This is certainly unrealistic, and a robustness study should be done to quantify how model uncertainty and measurement noise affect the observer performance. This is especially relevant due to the lower sensitivity of the hinge moment coefficient to changes in the angle of attack. A proof of stability for the LPV observer is also needed to guarantee that the observer will be stable at all times.
The model and observer should also be tested experimentally. On the one hand, the proposed model could be compared to experimental data obtained from a 2D airfoil with an active flap for different reduced frequencies. Alternatively or complementarily, a system identification could be performed using experimental data to create an unsteady hinge moment model from the data to be used in the observer. The observer and model could also be tested and analyzed experimentally using an experimental wind turbine in a wind tunnel. Regarding the controller strategy, a better quantification of its performance can be obtained by including more DLC groups that include the load extrema in all directions. Also, the evaluation of the strategy should be extended to include more load sensors, such as hub and tower bending moments. In addition, other control strategies should be considered as possible candidates for extreme load and deflection control. In particular, model-based or adaptive-data-driven controllers that are able to control both the pitch and flap actuators are promising candidates to increase the ability of active flaps to reduce design-driving loads.

Appendix A: Entries of the hinge model and observer state-space matrices

This appendix contains the explicit entries of the matrices used for the hinge model and the hinge model observer. The entries for the feedthrough matrix of the hinge model D[Hin] are as follows.
$$
\begin{aligned}
D_{\alpha} &= \frac{\epsilon_{\alpha}\,F_{12}}{4} && \text{(A1)}\\
D_{\dot{\alpha}} &= \frac{\epsilon_{\alpha}\,F_{12}\,b\,(1/2-a)}{4V_{\mathrm{rel}}} - \frac{b\left(-2F_{9}-F_{1}+F_{4}\,(a-1/2)\right)}{2V_{\mathrm{rel}}} && \text{(A2)}\\
D_{\ddot{\alpha}} &= \frac{-F_{13}\,b^{2}}{V_{\mathrm{rel}}^{2}} && \text{(A3)}\\
D_{\delta} &= \frac{\epsilon_{\delta}\,F_{12}\,F_{10}}{4\pi} - \frac{F_{5}-F_{4}\,F_{10}}{2\pi} && \text{(A4)}\\
D_{\dot{\delta}} &= \frac{\epsilon_{\delta}\,F_{12}\,b\,F_{11}}{8\pi V_{\mathrm{rel}}} + \frac{b\,F_{4}\,F_{11}}{4\pi V_{\mathrm{rel}}} && \text{(A5)}\\
D_{\ddot{\delta}} &= \frac{F_{3}\,b^{2}}{2\pi V_{\mathrm{rel}}^{2}} && \text{(A6)}\\
D_{\dot{h}} &= \frac{\epsilon_{\alpha}\,F_{12}}{4V_{\mathrm{rel}}} && \text{(A7)}\\
D_{\ddot{h}} &= \frac{F_{1}\,b}{2V_{\mathrm{rel}}^{2}} && \text{(A8)}
\end{aligned}
$$

The geometric constants F[i] depend on the flap size relative to the airfoil chord and are given in Hariharan and Leishman (1995).
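For reference, the feedthrough entries (A1)–(A8) translate directly into code. The sketch below is illustrative only: the geometric constants F_i, the constants ε_α and ε_δ, the semi-chord b and the axis position a are placeholders that must be taken from Hariharan and Leishman (1995) and the actual blade section geometry.

```python
import math

def hinge_feedthrough(F, eps_a, eps_d, a, b, v_rel):
    """Evaluate the feedthrough entries (A1)-(A8) of the hinge moment model.

    F maps the index i to the geometric constant F_i from Hariharan and
    Leishman (1995); eps_a, eps_d, a and b are section-dependent constants.
    All numerical values used with this function here are placeholders.
    """
    return {
        "alpha": eps_a * F[12] / 4.0,                                    # (A1)
        "alpha_dot": (eps_a * F[12] * b * (0.5 - a) / (4.0 * v_rel)
                      - b * (-2.0 * F[9] - F[1] + F[4] * (a - 0.5))
                      / (2.0 * v_rel)),                                  # (A2)
        "alpha_ddot": -F[13] * b ** 2 / v_rel ** 2,                      # (A3)
        "delta": (eps_d * F[12] * F[10] / (4.0 * math.pi)
                  - (F[5] - F[4] * F[10]) / (2.0 * math.pi)),            # (A4)
        "delta_dot": (eps_d * F[12] * b * F[11] / (8.0 * math.pi * v_rel)
                      + b * F[4] * F[11] / (4.0 * math.pi * v_rel)),     # (A5)
        "delta_ddot": F[3] * b ** 2 / (2.0 * math.pi * v_rel ** 2),      # (A6)
        "h_dot": eps_a * F[12] / (4.0 * v_rel),                          # (A7)
        "h_ddot": F[1] * b / (2.0 * v_rel ** 2),                         # (A8)
    }
```

Such a direct evaluation is useful as a sanity check when implementing the state-space model in an aero-servo-elastic code.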
For the hinge observer model, the explicit entries of ${\mathbf{B}}_{\mathrm{Hin}}^{\alpha}$ and ${\mathbf{B}}_{\mathrm{Hin}}^{\mathrm{Res}}$ are

$$
{\mathbf{B}}_{\mathrm{Hin}}^{\alpha}=
\begin{bmatrix}
0 & 0 & 0\\
1 & \dfrac{b\,(1/2-a)}{V_{\mathrm{rel}}} & 0\\
0 & 0 & 0\\
0 & 0 & 0
\end{bmatrix},
\quad\text{(A9)}
$$

$$
{\mathbf{B}}_{\mathrm{Hin}}^{\mathrm{Res}}=
\begin{bmatrix}
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & \dfrac{1}{V_{\mathrm{rel}}} & 0\\
0 & 0 & 0 & 0 & 0\\
\dfrac{F_{10}}{\pi} & \dfrac{b\,F_{11}}{2\pi V_{\mathrm{rel}}} & 0 & 0 & 0
\end{bmatrix}.
\quad\text{(A10)}
$$

Appendix B: List of symbols, subscripts and superscripts

This section contains the list of symbols (Tables B1 and B2) as well as the list of sub- and superscripts (Table B3).

Code and data availability

Both FAST and QBlade are open-source codes available online. The newest version of FAST v8 is available at https://www.nrel.gov/wind/nwtc/fastv8.html (last access: 19 May 2021) (NREL, 2021). The latest version of QBlade is available at https://www.qblade.org/ (last access: 19 May 2021) (TU Berlin, 2021). The version of QBlade used in this paper that includes the structural model will be made available soon. The time series for the turbulent wind calculations used in this paper are stored in the HAWC2 binary format. They can be made available upon request.

Author contributions

SPB prepared the manuscript with the help of all co-authors. DM is the main developer of QBlade. DM and SPB implemented the interface for flap controllers in QBlade. SPB developed the hinge model observer and control strategy, performed the calculations, and analyzed the results.
COP provided assistance with the paper review.

The authors declare that they have no conflict of interest.

Sebastian Perez-Becker wishes to thank WINDnovation Engineering Solutions GmbH for supporting his research. The authors wish to thank Horst Schulte from HTW Berlin and Sirko Bartholomay from TU Berlin for their reviews and insightful comments on this paper. We acknowledge the support of the Open Access Publication Fund of TU Berlin. This open-access publication was funded by Technische Universität Berlin. This paper was edited by Mingming Zhang and reviewed by Athanasios Barlas and Vasilis A. Riziotis.

References

Andersen, P. B.: Advanced Load Alleviation for Wind Turbines using Adaptive Trailing Edge Flaps: Sensoring and Control, PhD thesis, Technical University of Denmark, Risø, Denmark, 2010.

Andersen, P. B., Henriksen, L., Gaunaa, M., Bak, C., and Buhl, T.: Deformable trailing edge flaps for modern megawatt wind turbine controllers using strain gauge sensors, Wind Energy, 13, 193–206, https://doi.org/10.1002/we.371, 2010.

Bak, C., Madsen, H. A., and Johansen, J.: Influence from Blade-Tower Interaction on Fatigue Loads and Dynamics, in: Proceedings of the 2001 European Wind Energy Conference and Exhibition, Copenhagen, Denmark, 394–397, 2001.

Bak, C., Zahle, F., Bitsche, R., Kim, T., Yde, A., Henriksen, L. C., Andersen, P. B., Natarajan, A., and Hansen, M. H.: Design and Performance of a 10MW Wind Turbine, Tech. Rep. I-0092, DTU Wind Energy, Roskilde, Denmark, 2013.

Barlas, T. and van Kuik, G. A. M.: Review of State of the Art in Smart Rotor Control Research for Wind Turbines, Prog. Aerosp. Sci., 46, 1–27, https://doi.org/10.1016/j.paerosci.2009.08.002, 2010.

Barlas, T., van der Veen, G., and van Kuik, G. A. M.: Model Predictive Control for Wind Turbines with Distributed Active Flaps: Incorporating Inflow Signals and Actuator Constraints, Wind Energy, 15, 757–771, https://doi.org/10.1002/we.503, 2012.

Barlas, T., Tibaldi, C., Zahle, F., and Madsen, H. A.: Aeroelastic Optimization of a 10MW Wind Turbine Blade with Active Trailing Edge Flaps, in: 34th Wind Energy Symposium, 4–8 January 2016, San Diego, CA, USA, 1–11, https://doi.org/10.2514/6.2016-1262, 2016a.

Barlas, T., Pettas, V., Gertz, D., and Madsen, H. A.: Extreme load alleviation using industrial implementation of active trailing edge flaps in a full design load basis, J. Phys.: Conf. Ser., 753, 042001, https://doi.org/10.1088/1742-6596/753/4/042001, 2016b.

Barlas, T., Olsen, A. S., Madsen, H. A., Andersen, T. L., Ai, Q., and Weaver, P. M.: Aerodynamic and Load Control Performance Testing of a Morphing Trailing Edge Flap System on an Outdoor Rotating Test Rig, J. Phys.: Conf. Ser., 1037, 022018, https://doi.org/10.1088/1742-6596/1037/2/022018, 2018.

Bartholomay, S., Mihos, G., Perez-Becker, S., Pechlivanoglou, G., Nayeri, C. N., Nikolaou, G., and Paschereit, C. O.: Towards Active Flow Control on a Research Scale Wind Turbine Using Trailing Edge Flaps, in: AIAA SciTech Proceedings 2018, Kissimmee, Florida, USA, https://doi.org/10.2514/6.2018-1245, 2018.

Bartholomay, S., Wester, T. T. B., Perez-Becker, S., Konze, S., Menzel, C., Hölling, M., Spickenheuer, A., Peinke, J., Nayeri, C. N., Paschereit, C. O., and Oberleithner, K.: Pressure-based lift estimation and its application to feedforward load control employing trailing-edge flaps, Wind Energ. Sci., 6, 221–245, https://doi.org/10.5194/wes-6-221-2021, 2021.

Behrens, T. and Zhu, W. J.: Feasibility of Aerodynamic Flap Hinge Moment Measurements as Input for Load Alleviation Control, in: Proc. of EWEA 2011, Brussels, Belgium, 1–8, 2011.

Berg, D., Wilson, D., Barone, M., Resor, B., Berg, J., Paquette, J., Zayas, J., Kota, S., Ervin, G., and Maric, D.: The Impact of Active Aerodynamic Load Control on Fatigue and Energy Capture at Low Wind Speed Sites, in: European Wind Energy Conference & Exhibition 2009, Marseille, France, 2670–2679, available at: https://www.osti.gov/biblio/1141815 (last access: 19 May 2021), 2009.

Bergami, L. and Gaunaa, M.: ATEFlap Aerodynamic Model, a Dynamic Stall Model Including the Effects of Trailing Edge Flap Deflection, Tech. Rep. Risø-R-1792, DTU Wind Energy, Risø, Denmark, available at: https://orbit.dtu.dk/files/6599679/ris-r-1792.pdf (last access: 19 May 2021), 2012.

Bergami, L. and Gaunaa, M.: Analysis of Aeroelastic Loads and their Contributions to Fatigue Damage, J. Phys.: Conf. Ser., 555, 012007, https://doi.org/10.1088/1742-6596/555/1/012007, 2014.

Bergami, L. and Poulsen, N.: A Smart Rotor Configuration with Linear Quadratic Control of Adaptive Trailing Edge Flaps for Active Load Alleviation, Wind Energy, 18, 625–641, https://doi.org/10.1002/we.1716, 2015.

Bernhammer, L., van Kuik, G. A. M., and De Breuker, R.: Fatigue and extreme load reduction of wind turbine components using smart rotors, J. Wind Eng. Indust. Aerodynam., 154, 84–95, https://doi.org/10.1016/j.jweia.2016.04.001, 2016.

Bertelè, M., Bottasso, C. L., Cacciola, S., Daher Adegas, F., and Delport, S.: Wind inflow observation from load harmonics, Wind Energ. Sci., 2, 615–640, https://doi.org/10.5194/wes-2-615-2017, 2017.

Borg, M., Mirzaei, M., and Bredmose, H.: LIFES50+ Deliverable D1.2: Wind Turbine Models for the Design, Tech. Rep. E-101, DTU Wind Energy, Risø, Denmark, 2015.

Bossanyi, E. A.: Individual Blade Pitch Control for Load Reduction, Wind Energy, 6, 119–128, https://doi.org/10.1002/we.76, 2003.

Burger, B.: Power Generation in Germany – Assessment of 2017, Tech. rep., Fraunhofer Institute for Solar Energy Systems ISE, Freiburg, Germany, available at: https://www.ise.fraunhofer.de/content/dam/ise/en/documents/publications/studies/Stromerzeugung_2017_e.pdf (last access: 19 May 2021), 2018.

Chaviaropoulos, P., Karga, I., Harkness, C., and Hendriks, B.: INNWIND Deliverable 1.23: PI-Based Assessment of Innovative Concepts (Methodological Issues), Tech. rep., INNWIND.eu, available at: http://www.innwind.eu/publications/deliverable-reports (last access: 19 May 2021), 2014.

Chen, Z., Stol, K., and Mace, B.: System Identification and Controller Design for Individual Pitch and Trailing Edge Flap Control on Upscaled Wind Turbines, Wind Energy, 19, 1073–1088, https://doi.org/10.1002/we.1885, 2016.

Chen, Z., Stol, K., and Mace, B.: Wind Turbine Blade Optimisation with Individual Pitch and Trailing Edge Flap Control, Renew. Energy, 103, 750–765, https://doi.org/10.1016/j.renene.2016.11.009, 2017.

Cooperman, A. and Martinez, M.: Load Monitoring for Active Control of Wind Turbines, Renew. Sustain. Energ. Rev., 41, 189–201, https://doi.org/10.1016/j.rser.2014.08.029, 2015.

Damiani, R., Dana, S., Annoni, J., Fleming, P., Roadman, J., van Dam, J., and Dykes, K.: Assessment of Wind Turbine Component Loads Under Yaw-Offset Conditions, Wind Energ. Sci., 3, 173–189, https://doi.org/10.5194/wes-3-173-2018, 2018.

Engels, W. P., Kanev, S., and van Engelen, T.: Distributed Blade Control, in: Torque: The Science of Making Torque from Wind, Heraklion, Greece, available at: https://www.researchgate.net/publication/265063622_Distributed_Blade_Control (last access: 19 May 2021), 2010.

Fisher, A. and Madsen, H. A.: Investigation of the theoretical load alleviation potential using trailing edge flaps controlled by inflow data, Wind Energy, 19, 1567–1583, https://doi.org/10.1002/we.1937, 2016.

Hansen, M. H., Henriksen, L. C., Hartvig, M., and Christian, L.: Basic DTU Wind Energy Controller, Tech. Rep. E-0028, DTU Wind Energy, Risø, Denmark, 2013.

Hariharan, N. and Leishman, J. G.: Unsteady Aerodynamics of a Flapped Airfoil in Subsonic Flow by Indicial Concepts, in: Proc. of the AIAA 36th Structures, Structural Dynamics and Materials Conference, New Orleans, 613–634, https://doi.org/10.2514/6.1995-1228, 1995.

Henriksen, L. C., Bergami, L., and Andersen, P. B.: A Model Based Control Methodology combining Blade Pitch and Adaptive Trailing Edge Flaps in a common Framework, in: Proceedings of the EWEA, Vienna, Austria, available at: https://orbit.dtu.dk/en/publications/a-model-based-control-methodology-combining-blade-pitch (last access: 19 May 2021), 2013.

IEC 61400-1 Ed. 3: IEC 61400-1: Wind Turbines – Part 1: Design Requirements, Standard, International Electrotechnical Commission, Geneva, Switzerland, 2005.

Iribas, M., Hansen, M. H., Mahmood, M., Tibaldi, C., Natarajan, A., Bossanyi, E., Stock, A., Jamieson, P., Leithead, W., and Schlipf, D.: INNWIND Deliverable 1.42: Methodology for Feed-Forward Control Strategies using Nacelle or Blade Based Sensors and Distributed Control, Tech. rep., INNWIND.eu, available at: http://www.innwind.eu/publications/deliverable-reports (last access: 19 May 2021), 2015.

Jamieson, P.: Innovation in Wind Turbine Design, 2nd Edn., John Wiley & Sons Ltd., West Sussex, UK, 2018.

Jones, B. L., Lio, W. H., and Rossiter, J. A.: Overcoming fundamental limitations of wind turbine individual blade pitch control with inflow sensors, Wind Energy, 21, 922–936, https://doi.org/10.1002/we.2205, 2018.

Jonkman, J.: Modeling of the UAE Wind Turbine for Refinement of FAST_AD, Tech. Rep. TP-500-34755, NREL, Golden, Colorado, 2003.

Jonkman, J., Butterfield, S., Musial, W., and Scott, G.: Definition of a 5-MW Reference Wind Turbine for Offshore System Development, Tech. Rep. TP-500-38060, NREL, Golden, Colorado, 2009.

Jost, E., Barlas, T., Riziotis, V., and Navalkar, S. T.: INNWIND Deliverable 2.32: Validation of New Control Concepts by Advanced Fluid-Structure Interaction Tools, Tech. rep., INNWIND.eu, available at: http://www.innwind.eu/publications/deliverable-reports (last access: 19 May 2021), 2015.

Kanda, A. and Dowell, E. H.: Worst-case gust-response analysis for typical airfoil section with control surface, J. Aircraft, 42, 956–962, https://doi.org/10.2514/1.8931, 2005.

Kracht, P., Perez-Becker, S., Richard, J. B., and Fischer, B.: Performance Improvement of a Point Absorber Wave Energy Converter by Application of an Observer-Based Control: Results From Wave Tank Testing, IEEE T. Indust. Appl., 51, 3426–3434, https://doi.org/10.1109/TIA.2015.2405892, 2015.

Lackner, M. and van Kuik, G. A. M.: A Comparison of Smart Rotor Control Approaches using Trailing Edge Flaps and Individual Pitch Control, Wind Energy, 13, 117–134, https://doi.org/10.1002/we.353, 2010.

Leishman, J. G.: Principles of Helicopter Aerodynamics, 2nd Edn., Cambridge University Press, Cambridge, 2006.

Madsen, H. A., Larsen, T. J., Pirrung, G. R., Li, A., and Zahle, F.: Implementation of the Blade Element Momentum Model on a Polar Grid and its Aeroelastic Load Impact, Wind Energ. Sci., 5, 1–27, https://doi.org/10.5194/wes-5-1-2020, 2020.

Manolas, D., Spyropoulos, N., Serafeim, G., Riziotis, V., Chaviaropoulos, P., and Voutsinas, S.: Inflow-based Flap Control on a 10MW-Scale Wind Turbine Using a Spinner Anemometer, J. Phys.: Conf. Ser., 1037, 032045, https://doi.org/10.1088/1742-6596/1037/3/032045, 2018.

Marten, D., Pechlivanoglou, G., Nayeri, C. N., and Paschereit, C. O.: Integration of a WT Blade Design tool in XFOIL/XFLR5, in: 10th German Wind Energy Conference (DEWEK 2010), Bremen, Germany, available at: https://www.researchgate.net/publication/275638785_Integration_of_a_WT_Blade_Design_Tool_in_XFoilXFLR5 (last access: 19 May 2021), 2010.

Marten, D., Lennie, M., Pechlivanoglou, G., Nayeri, C. N., and Paschereit, C. O.: Implementation, Optimization and Validation of a Nonlinear Lifting Line-Free Vortex Wake Module within the Wind Turbine Simulation Code QBlade, ASME J. Eng. Gas Turb. Power, 138, 072601, https://doi.org/10.1115/GT2015-43265, 2015.

Moriarty, P. and Hansen, A.: AeroDyn Theory Manual, Tech. Rep. EL-500-36881, NREL, Golden, Colorado, https://doi.org/10.2172/15014831, 2005.

Navalkar, S. T., Van Wingerden, J. W., Van Solingen, E., Oomen, T., and van Kuik, G. A. M.: Subspace Predictive Repetitive Control for Wind Turbine Load Alleviation using Trailing Edge Flaps, in: Proceedings of the American Control Conference, Portland, USA, 4422–4427, https://doi.org/10.1109/ACC.2014.6859094, 2014.

Ng, B., Palacios, R., Kerrigan, E., Graham, M., and Hesse, H.: Aerodynamic load control in horizontal axis wind turbines with combined aeroelastic tailoring and trailing-edge flaps, Wind Energy, 19, 243–263, https://doi.org/10.1002/we.1830, 2016.

NREL: FAST v8.15, available at: https://www.nrel.gov/wind/nwtc/fastv8.html, last access: 19 May 2021.

Perez-Becker, S., Papi, F., Saverin, J., Marten, D., Bianchini, A., and Paschereit, C. O.: Is the Blade Element Momentum theory overestimating wind turbine loads? – An aeroelastic comparison between OpenFAST's AeroDyn and QBlade's Lifting-Line Free Vortex Wake method, Wind Energ. Sci., 5, 721–743, https://doi.org/10.5194/wes-5-721-2020, 2020.

Perez-Becker, S., Marten, D., Nayeri, C. N., and Paschereit, C. O.: Implementation and Validation of an Advanced Wind Energy Controller in Aero-Servo-Elastic Simulations Using the Lifting Line Free Vortex Wake Model, Energies, 14, 783, https://doi.org/10.3390/en14030783, 2021.

Plumley, C.: The Smart Rotor Wind Turbine, PhD thesis, University of Strathclyde, Strathclyde, 2015.

Plumley, C., Graham, M., Leithead, W., Bossanyi, E. A., and Jamieson, P.: Supplementing Wind Turbine Pitch Control with a Trailing Edge Flap Smart Rotor, in: Proceedings of the 3rd Renewable Power Generation Conference (RPG 2014), Naples, Italy, 1–6, https://doi.org/10.1049/cp.2014.0919, 2014a.

Plumley, C., Leithead, W., Jamieson, P., Bossanyi, E. A., and Graham, M.: Comparison of Individual Pitch and Smart Rotor Control Strategies for Load Reduction, J. Phys.: Conf. Ser., 524, 012054, https://doi.org/10.1088/1742-6596/524/1/012054, 2014b.

Simley, E. and Pao, L.: Evaluation of a Wind Speed Estimator for Effective Hub-Height and Shear Components, Wind Energy, 19, 167–184, https://doi.org/10.1002/we.1817, 2016.

Tasora, A., Serban, R., Mazhar, H., Pazouki, A., Melanz, D., Fleischmann, J., Taylor, M., Sugiyama, H., and Negrut, D.: Chrono: An Open Source Multi-Physics Dynamics Engine, in: Proceedings of the International Conference on High Performance Computing in Science and Engineering, Solan, Czech Republic, 19–49, https://doi.org/10.1007/978-3-319-40361-8_2, 2016.

TU Berlin: QBlade, available at: https://www.qblade.org/, last access: 19 May 2021.

Ungurán, R., Petrović, V., Pao, L. Y., and Kühn, M.: Performance Evaluation of a Blade-Mounted LiDAR with Dynamic Versus Fixed Parameters through Feedback-Feedforward Individual Pitch and Trailing Edge Flap Control, J. Phys.: Conf. Ser., 1037, 032004, https://doi.org/10.1088/1742-6596/1037/3/032004, 2018.

Wendler, J., Marten, D., Pechlivanoglou, G., Nayeri, C. N., and Paschereit, C. O.: An Unsteady Aerodynamics Model for Lifting Line Free Vortex Wake Simulations of HAWT and VAWT in QBlade, in: Proceedings of ASME Turbo Expo: Turbine Technical Conference and Exposition GT2016, Seoul, South Korea, V009T46A011, https://doi.org/10.1115/GT2016-57184, 2016.

Wilson, D., Berg, D., Resor, B., Barone, M., and Berg, J.: Combined Individual Pitch Control and Active Aerodynamic Load Controller Investigation for the 5MW Upwind Turbine, in: AWEA Wind Power Conference & Exhibition, Chicago, USA, 1–12, available at: https://energy.sandia.gov/wp-content/gallery/uploads/AWEA-092875C.pdf (last access: 19 May 2021), 2009.

Zhang, M., Tan, B., and Xu, J.: Smart fatigue load control on the large-scale wind turbine blades using different sensing signals, Renew. Energy, 87, 111–119, https://doi.org/10.1016/j.renene.2015.10.011, 2016.
Variance and Standard Deviation

Calculates the empiric variance and standard deviation from a data set of associated values. The variance is a measure of dispersion: how far spread or how close together the values are. The higher the variance and standard deviation, the stronger the spread. Because the variance works with squares, the standard deviation, which is the square root of the variance, is used more often. The example calculates with the population (in millions) of some European countries.

The formulas are (x[i]: values, n: number of values, Σ: sum over i = 1 to n, μ: mean, σ²: variance, σ: standard deviation, CV: coefficient of variation):

μ = Σ(x[i]) / n
σ² = Σ(x[i] − μ)² / n
σ = √σ²
CV = σ / μ

Variance and standard deviation can never be negative. If they are zero, then all measured values have exactly the same level. Otherwise, these figures are only meaningful in connection with the mean: the larger the mean, the larger the standard deviation will be if the distribution otherwise remains the same. The ratio of standard deviation and mean is the coefficient of variation. This value is dimensionless and meaningful on its own, but you have to make sure that the mean is not zero or close to zero. The larger the coefficient of variation, the wider the spread of the values. To make it clearer, you can also read it as a percentage: a coefficient of variation of 0.5 corresponds to a deviation of 50 percent. Taking these statistical values into account is often essential for a scientifically sound interpretation of empirical data. Outside of science, for the sake of simplicity, often only the mean is given, which entails a loss of information and can lead to incorrect conclusions.
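The formulas above translate directly into code. A minimal sketch using the population definitions given here (divisor n, not n − 1); the input numbers are arbitrary sample data, not the country populations from the calculator's example:

```python
import math

def dispersion_stats(values):
    """Population mean, variance, standard deviation and coefficient of
    variation for a list of values (divisor n, as in the formulas above)."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((x - mean) ** 2 for x in values) / n
    std_dev = math.sqrt(variance)
    cv = std_dev / mean  # only meaningful if the mean is not (close to) zero
    return mean, variance, std_dev, cv

mean, var, std, cv = dispersion_stats([2, 4, 4, 4, 5, 5, 7, 9])
# mean 5.0, variance 4.0, standard deviation 2.0, CV 0.4
```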
A common form of distribution is the normal distribution, in which values near the mean occur very frequently and extreme values become rarer the more extreme they are. If the values are arranged so that the mean is 0 and the standard deviation is 1, it is a standard normal distribution.
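Rearranging a data set so that its mean is 0 and its standard deviation is 1, as described above, is called standardizing: subtract the mean from each value and divide by the standard deviation. A minimal sketch:

```python
import math

def standardize(values):
    """Rescale values to mean 0 and (population) standard deviation 1."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in values) / n)
    return [(x - mean) / std for x in values]

z = standardize([2, 4, 4, 4, 5, 5, 7, 9])
# z now has mean 0 and standard deviation 1
```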
class pyts.transformation.ShapeletTransform(n_shapelets='auto', criterion='mutual_info', window_sizes='auto', window_steps=None, remove_similar=True, sort=False, verbose=0, random_state=None, n_jobs= Shapelet Transform Algorithm. The Shapelet Transform algorithm extracts the most discriminative shapelets from a data set of time series. A shapelet is defined as a subset of consecutive points from a time series. Two criteria are made available: mutual information and F-scores. n_shapelets : int or ‘auto’ (default = ‘auto’) The number of shapelets to keep. If ‘auto’, n_timestamps // 2 shapelets are considered, where n_timestamps is the number of time points in the dataset. Note that there might be a smaller number of shapelets if fewer than n_shapelets shapelets have been extracted during the search. criterion : ‘mutual_info’ or ‘anova’ (default = ‘mutual_info’) Criterion to perform the selection of the shapelets. ‘mutual_info’ uses the mutual information, while ‘anova’ use the ANOVA F-value. window_sizes : array-like or ‘auto ‘(default = ‘auto’) Size of the sliding windows. If ‘auto’, the range for the window sizes is determined automatically. Otherwise, all the elements must be either integers or floats. In the latter case, each element represents the percentage of the size of each time series and must be between 0 and 1; the size of the sliding windows will be computed as np.ceil(window_sizes * n_timestamps). window_steps : None or array-like (default = None) Step of the sliding windows. If None, each window_step is equal to 1. Otherwise, all the elements must be either integers or floats. In the latter case, each element represents the percentage of the size of each time series and must be between 0 and 1; the step of the sliding windows will be computed as np.ceil(window_steps * n_timestamps). Must be None if window_sizes='auto'. 
Parameters: remove_similar : bool (default = True) If True, self-similar shapelets are removed, keeping only the non-self-similar shapelets with the highest scores. Two shapelets are considered to be self-similar if they are taken from the the same time series and have at least one overlapping index. sort : bool (default = False) If True, shapelets are sorted in descending order according to their associated scores. If False, the order is undefined. verbose : int (default = 0) Verbosity level when fitting: if non zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. random_state : int, RandomState instance or None (default = None) If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. Only used if window_sizes='auto' in order to subsample the dataset to find the best range or if criterion=='mutual_info' to add small noise to the data. n_jobs : None or int (default = None) The number of jobs to run in parallel for fit. If -1, then the number of jobs is set to the number of cores. [1] J. Lines, L. M. Davis, J. Hills and A. Bagnall, “A Shapelet Transform for Time Series Classification”. Data Mining and Knowledge Discovery, 289-297 (2012). >>> from pyts.transformation import ShapeletTransform >>> X = [[0, 2, 3, 4, 3, 2, 1], ... [0, 1, 3, 4, 3, 4, 5], ... [2, 1, 0, 2, 1, 5, 4], ... [1, 2, 2, 1, 0, 3, 5]] >>> y = [0, 0, 1, 1] >>> st = ShapeletTransform(n_shapelets=2, window_sizes=[3]) >>> st.fit(X, y) >>> len(st.shapelets_) >>> st.indices_.shape (2, 3) shapelets_ : array, shape = (n_shapelets,) The array with the selected shapelets. indices_ : array, shape = (n_shapelets, 3) The indices for the corresponding shapelets in the training set. The first column consists of the indices of the samples. 
The second column consists of the starting indices (included) of the shapelets. The third column consists of the ending indices (excluded) of the shapelets.

scores_ : array, shape = (n_shapelets,)
The scores associated with the shapelets. The higher, the more discriminant. If criterion='mutual_info', mutual information scores are reported. If criterion='anova', F-scores are reported.

window_range_ : None or tuple
Range of the window sizes if window_sizes='auto'. None otherwise.

__init__([n_shapelets, criterion, …]): Initialize self.
fit(X, y): Fit the model according to the given training data.
fit_transform(X, y): Fit the model, then transform the given training data.
get_params([deep]): Get parameters for this estimator.
set_params(**params): Set the parameters of this estimator.
transform(X): Transform the provided data.

__init__(n_shapelets='auto', criterion='mutual_info', window_sizes='auto', window_steps=None, remove_similar=True, sort=False, verbose=0, random_state=None, n_jobs=None)[source]
Initialize self. See help(type(self)) for accurate signature.

fit(X, y)[source]
Fit the model according to the given training data. It finds the n_shapelets best shapelets in the training set.
X : array-like, shape = (n_samples, n_timestamps) Univariate time series.
y : array-like, shape = (n_samples,) Class labels for each data sample.
Returns: self : object

fit_transform(X, y)[source]
Fit the model, then transform the given training data. It finds the n_shapelets best shapelets in the training set and computes the distances between them and the training set.
X : array-like, shape = (n_samples, n_timestamps) Univariate time series.
y : array-like, shape = (n_samples,) Class labels for each data sample.
Returns: X_new : array, shape = (n_samples, n_shapelets) Distances between the selected shapelets and the samples.

Get parameters for this estimator.
deep : bool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators.
params : dict Parameter names mapped to their values.

Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
**params : dict Estimator parameters.
self : estimator instance Estimator instance.

Transform the provided data. It computes the distances between the selected shapelets and the samples.
X : array-like, shape = (n_samples, n_timestamps) Univariate time series.
X_new : array, shape = (n_samples, n_shapelets) Distances between the selected shapelets and the samples.

Examples using pyts.transformation.ShapeletTransform
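The distance that transform(X) reports for each (shapelet, series) pair is, in essence, the smallest Euclidean distance between the shapelet and any window of the same length slid along the series. A minimal pure-Python sketch of that idea (the function name and the exact normalization are illustrative assumptions, not pyts internals):

```python
import math

def shapelet_distance(shapelet, series):
    # Smallest Euclidean distance between `shapelet` and any window
    # of the same length slid along `series`.
    l = len(shapelet)
    best = math.inf
    for start in range(len(series) - l + 1):
        window = series[start:start + l]
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(shapelet, window)))
        best = min(best, d)
    return best

# The shapelet [2, 3, 4] occurs verbatim in the first series, so its
# distance there is 0; for the second series the best window is [2, 1, 5].
X = [[0, 2, 3, 4, 3, 2, 1],
     [2, 1, 0, 2, 1, 5, 4]]
distances = [shapelet_distance([2, 3, 4], x) for x in X]
print(distances)
```

A fitted transformer applies this computation for each of its n_shapelets selected shapelets, producing one column of X_new per shapelet.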
Do natural numbers satisfy the completeness property?

The set of natural numbers satisfies the supremum property and hence can be claimed to be complete. But the set of natural numbers is not dense. It is actually discrete. There are neighbourhoods of every natural number such that they contain no others.

Why is it important for real numbers to be complete? The real numbers can be characterized by the important mathematical property of completeness, meaning that every nonempty set that has an upper bound has a smallest such bound, a property not possessed by the rational numbers.

What is the smallest natural number? The smallest natural number is 1, because the natural numbers start at 1 and go on. … The smallest whole number is 0 because whole numbers start from zero and go all the way up to infinity.

Is 0 a natural number in discrete math? Natural numbers are 1, 2, 3, 4, 5, … […] and whole numbers are 0, 1, 2, 3, … According to Wikipedia: In mathematics, a natural number is either a positive integer (1, 2, 3, 4, …) or a non-negative integer (0, 1, 2, 3, 4, …).

Is 0 a natural number? 0 is not a natural number, it is a whole number. Negative numbers, fractions, and decimals are neither natural numbers nor whole numbers. N is closed, associative, and commutative under both addition and multiplication (but not under subtraction and division).

What are not real numbers? Imaginary numbers like √−1 (the square root of minus 1) are not real numbers. Infinity is not a real number.

What are the types of real numbers? Different types of real numbers:
• Natural numbers: These are real numbers that have no decimal and are bigger than zero.
• Whole numbers: These are positive real numbers that have no decimals, and also zero.
… • Integers: These are real numbers that have no decimals.

Which are the real numbers? Real numbers are the numbers which include both rational and irrational numbers. Rational numbers such as integers (−2, 0, 1), fractions (1/2, 2.5) and irrational numbers such as √3, π (approximately 22/7), etc., are all real numbers.

What is the greatest and smallest number? Formation of greatest and smallest numbers:
• To form the greatest number, the greatest digit, that is 9, should be in the thousands place.
• To form the smallest number, the smallest digit, that is 3, should be in the thousands place.

Which is the smallest number? The smallest whole number is « 0 » (ZERO).

What is the only whole number which is not a natural number? Zero (0) is not a natural number but a whole number.

What does N mean in discrete math? R = real numbers, Z = integers, N = natural numbers, Q = rational numbers, P = irrational numbers.

Is zero a number, yes or no? 0 (zero) is a number, and the numerical digit used to represent that number in numerals. … It fulfills a central role in mathematics as the additive identity of the integers, real numbers, and many other algebraic structures. As a digit, 0 is used as a placeholder in place value systems.

What are the numbers 0 to 9 called? The counting numbers or natural numbers along with zero form whole numbers. We use the digits 0 to 9 to form all the other numbers. Using these 10 digits we can form infinitely many numbers. This number system using 10 digits is called the Decimal Number System.

What is the opposite number of 0? Zero is its own opposite; it has no distinct opposite.

What is the first number, 0 or 1? In conventions of sign where zero is considered neither positive nor negative, 1 is the first and smallest positive integer. It is also sometimes considered the first of the infinite sequence of natural numbers, followed by 2, although by other definitions 1 is the second natural number, following 0.

Is 0 a positive real number?
Zero is considered neither positive nor negative. The real numbers can be visualized on a horizontal number line with an arbitrary point chosen as 0, with negative numbers to the left of 0 and positive numbers to the right of 0. … Any real number corresponds to a unique position on the number line.

What is the difference between real numbers and natural numbers? The counting numbers {1, 2, 3, …} are commonly called natural numbers; however, other definitions include 0, so that the non-negative integers {0, 1, 2, 3, …} are also called natural numbers. … All rational numbers are real, but the converse is not true. Irrational numbers: Real numbers that are not rational.

Is 0 a real number? Answer: 0 is a rational number, whole number, integer, and a real number. Let’s analyze this in the following section. Explanation: Real numbers include natural numbers, whole numbers, integers, rational numbers, and irrational numbers.

What are the sets of numbers? What does each look like?
• Prime numbers: P = 2, 3, 5, 7, 11, 13, 17, …
• Composite numbers: 4, 6, 8, 9, 10, 12, …
• Whole numbers: W = 0, 1, 2, 3, 4, …
• Integers: Z = …, −3, −2, −1, 0, 1, 2, 3, …
(Dec 11, 2020)

What is the set of real numbers? What is the set of all real numbers? The set of real numbers is a set containing all the rational and irrational numbers. It includes natural numbers (N), whole numbers (W), integers (Z), rational numbers (Q) and irrational numbers (the complement of Q, often written Q̄).

What is the greatest 4-digit number? The greatest four-digit number is 9999.

What is the smallest number, 0 or 1?
So zero (0) is the smallest one-digit whole number and one (1) is the smallest one-digit natural number.

What is the smallest common multiple of 7 and 9? 63 is the least common multiple of 7 and 9.
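The last answer above (the least common multiple of 7 and 9) is easy to check in code; a small sketch using the standard identity lcm(a, b) = a·b / gcd(a, b):

```python
import math

def lcm(a, b):
    # For positive integers, lcm(a, b) * gcd(a, b) == a * b
    return a * b // math.gcd(a, b)

print(lcm(7, 9))   # the smallest number divisible by both 7 and 9
```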
Conservation of Mass: Summary Key points from this module: 1. The mass of the system must always be constant because mass can neither be created nor destroyed. This means that \(\frac{DM_{sys}}{Dt}\)= 0. 2. If mass enters the control volume at a higher rate than it leaves, then the control volume is accumulating mass. It is not at steady state. This means that \(\frac{\partial}{\partial t} \int_{CV} \rho\,dV\) > 0 and \(\int_{CS} \rho\,\vec{V} \cdotp \vec{n} \, dA\) < 0. 3. If mass leaves the control volume at a higher rate than it enters, then the control volume is losing mass. It is not at steady state. This means that \(\frac{\partial}{\partial t} \int_{CV} \rho \,dV\) < 0 and \(\int_{CS} \rho\,\vec{V} \cdotp \vec{n} \, dA\) > 0. 4. At steady state, the mass flow rate of fluid leaving the control volume (in kg/s) minus the mass flow rate of fluid entering the control volume (in kg/s) must always equal zero. Because it is at steady state \(\frac{\partial}{\partial t} \int_{CV} \rho\,dV\) = 0. This means that \(\int_{CS} \rho\,\vec{V} \cdotp \vec{n} \, dA\) is also zero. From studying this module, you should now be able to: • Draw and work with control volumes. • Simplify the macroscopic continuity equation for a specified control volume. • Calculate the average velocity of a fluid flowing through a constriction. • Calculate the rate at which mass is accumulating or depleting within a control volume. Prepared by: Jeffrey Knutsen, Department of Mechanical Engineering, University of Colorado Boulder
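For a control volume with a single inlet and a single outlet, key points 2-4 reduce to the balance dM/dt = m_in − m_out. A minimal sketch (the flow rates below are made-up numbers for illustration):

```python
def accumulation_rate(m_dot_in, m_dot_out):
    # Rate of change of mass inside the control volume, in kg/s:
    # dM/dt = (mass flow rate in) - (mass flow rate out)
    return m_dot_in - m_dot_out

# Inflow exceeds outflow -> the control volume accumulates mass (not steady state):
print(accumulation_rate(2.0, 1.5))   # 0.5 kg/s
# Equal rates -> steady state, dM/dt = 0:
print(accumulation_rate(3.0, 3.0))   # 0.0 kg/s
```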
Problem: P_Combinatorics_35
Model: GPT-4 Turbo
Prompt: Problem C w/ H
Ground Truth Answer: (2n)!/(2^n*n!)

Use in Code:
import champ_dataset # "pip install champ-dataset" if necessary
dataset = champ_dataset.load('v0')
problem = dataset['P_Combinatorics_35']
messages = problem.conversations['GPT-4 Turbo|Problem C w/ H'].messages
for message in messages:
    print(f'Role: {message.role}')
    print(f'Text: {message.text}')
    print(f'Error: {message.error}')

Full Conversation (System Prompt, User Input, Model Output):

You are an expert on mathematics. First, study the following problem and its solution as they may contain helpful information. Problem: n people sit around a circular table. How many of the n! arrangements are distinct (i.e., do not have the same neighboring relations), as an expression of n?
Step-wise solution:
1. For each arrangement, a total of n rotations and 2 reflections all preserve the neighboring relationship.
2. Thus, the distinct number of arrangements is n!/(2n).

With what you have learned, solve the following problem. Make sure to show your work before giving the final answer. 2n players are participating in a tennis tournament. Find the number of ways that n pairs can be constructed for the first round, as an expression of n.

In addition, you may also find the following information helpful:
1. Order the 2n players in a row, and make the pairing (1, 2), (3, 4), (5, 6), ..., (2n-1, 2n). In how many ways can this be done?
2. There are two types of multiple counting, one is the order within each pair, such as (1, 2) and (2, 1), and the second is ordering of the pairs, such as [(1, 2), (3, 4)] and [(3, 4), (1, 2)].

To solve the problem of finding the number of ways that n pairs can be constructed for the first round of a tennis tournament with 2n players, we need to consider the two types of multiple counting mentioned: the order within each pair and the ordering of the pairs themselves. First, let's consider the number of ways to order the 2n players in a row.
This can be done in (2n)! ways, since there are 2n players and each player can be in any of the 2n positions. Now, we will form pairs as (1, 2), (3, 4), ..., (2n-1, 2n). However, within each pair, the order does not matter, so for each pair, there are 2! = 2 ways of arranging the players that are considered equivalent (e.g., (1, 2) is the same as (2, 1)). Since there are n pairs, we need to divide by 2^n to correct for this overcounting. Next, we need to consider the ordering of the pairs themselves. The n pairs can be arranged in n! different ways, but since the order of the pairs does not matter for our problem, we need to divide by n! to correct for this overcounting as well. Putting it all together, the number of distinct ways to form n pairs from 2n players is: (2n)! / (2^n * n!) Let's simplify this expression: (2n)! / (2^n * n!) = (2n * (2n-1) * (2n-2) * ... * (n+1) * n!) / (2^n * n!) Notice that the n! in the numerator and the denominator will cancel out: = (2n * (2n-1) * (2n-2) * ... * (n+1)) / 2^n Now, we can see that each term in the numerator is twice a number from (n+1) to 2n, so we can simplify by dividing each term by 2: = ((2n/2) * ((2n-1)/2) * ((2n-2)/2) * ... * ((n+1)/2)) / 2^(n-n) = (n * (2n-1) * (2n-2) * ... * (n+1)) / 2^n This is the final expression for the number of distinct ways to form n pairs from 2n players. It cannot be simplified further without knowing the specific value of n. Now, summarize the answer above in one sentence, without any intermediate steps or explanations. The number of distinct ways to form n pairs from 2n players for the first round of a tennis tournament is given by the expression (2n)! / (2^n * n!).
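The closed form (2n)!/(2^n · n!) can be checked against a brute-force count of pairings for small n; a short verification sketch:

```python
from math import factorial

def count_pairings(players):
    # Count the ways to split `players` into unordered pairs:
    # fix the first player, choose their partner, recurse on the rest.
    if not players:
        return 1
    rest = players[1:]   # player 0 is fixed; pick their partner from the rest
    return sum(count_pairings(rest[:i] + rest[i + 1:]) for i in range(len(rest)))

def formula(n):
    return factorial(2 * n) // (2 ** n * factorial(n))

for n in range(1, 6):
    assert count_pairings(list(range(2 * n))) == formula(n)
print([formula(n) for n in range(1, 6)])   # [1, 3, 15, 105, 945]
```

The brute-force count and the closed form agree, supporting the final answer above.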
Components of Steel Structures

Six Step Guide to Designing a Rolled Steel Beam

Here are a few simple steps that are to be followed for the design of a rolled steel beam:

Step one
Calculate the maximum Bending Moment and Shear Force.

Step two
Depending upon whether the beam is laterally restrained or unrestrained, calculate the permissible stress in bending compression (sigma bc) using the equation:
sigma bc = 0.66fy

Step three
Find the required Section Modulus (Zreq):
Zreq = M/sigma bc

Step four
From the structural tables, choose a suitable section such that the section modulus is slightly more than the required section modulus.

Step five
Check for shear. Calculate the maximum Shear Force in the beam. Calculate the average shear stress in the web; this should be less than 0.4fy.
Tv = (Shear Force)/(H x Lw)
where H = height (depth) of the web and Lw = thickness of the web.

Step six
Check for deflection: the maximum deflection should satisfy Ymax < 1/325 of span.
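Steps two, three and five can be sketched in a few lines. The numbers below are illustrative assumptions only (not from any design code example), and units must be kept consistent, e.g. N and mm:

```python
def required_section_modulus(M_max, fy):
    # Steps 2-3: permissible bending stress sigma_bc = 0.66*fy,
    # then Zreq = M / sigma_bc
    sigma_bc = 0.66 * fy
    return M_max / sigma_bc

def average_web_shear_stress(V, h_web, t_web):
    # Step 5: average shear stress over the web area;
    # this must stay below 0.4*fy
    return V / (h_web * t_web)

fy = 250.0                                          # yield stress, N/mm^2 (assumed)
Zreq = required_section_modulus(50e6, fy)           # M = 50e6 N*mm (assumed)
tau = average_web_shear_stress(100e3, 300.0, 8.0)   # V = 100 kN, web 300 mm x 8 mm (assumed)
print(round(Zreq), round(tau, 1), tau < 0.4 * fy)
```

A section would then be picked from the structural tables with a modulus slightly above Zreq (step four), and the deflection check of step six applied to the chosen section.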
Exponential Functions - Formula, Properties, Graph, Rules

What is an Exponential Function?

An exponential function models an exponential decrease or increase with respect to a particular base. For instance, let's say a country's population doubles every year. This population growth can be represented by an exponential function. Exponential functions have many real-life applications. Mathematically speaking, an exponential function is written as f(x) = b^x. Here we will review the essentials of an exponential function along with relevant examples.

What’s the formula for an Exponential Function?

The general formula for an exponential function is f(x) = b^x, where:
1. b is the base, and x is the exponent or power.
2. b is fixed, and x varies.

For example, if b = 2, we get the exponential function f(x) = 2^x. And if b = 1/2, we get the exponential function f(x) = (1/2)^x. When b is greater than 0 and does not equal 1, f(x) is defined for every real number x.

How do you graph Exponential Functions?

To graph an exponential function, we need to find the points where the function intersects the axes, referred to as the x- and y-intercepts. Since the base b of an exponential function is a constant, we first fix its value. Let's take the value b = 2. To find the y-coordinates, we choose values for x. For example, for x = 1, y is 2; for x = 2, y is 4. In this way, we obtain the domain and range values for the function. Once we have the values, we plot them on the x-axis and the y-axis.

What are the properties of Exponential Functions?

All exponential functions share comparable qualities.
When the base of an exponential function is greater than 1, the graph has the following properties:
• The line passes through the point (0,1)
• The domain is all real numbers
• The range is greater than 0
• The graph is a curved line
• The graph is increasing
• The graph is smooth and continuous
• As x approaches negative infinity, the graph is asymptotic to the x-axis
• As x approaches positive infinity, the graph grows without bound.

In cases where the base is a fraction or decimal between 0 and 1, an exponential function exhibits the following attributes:
• The graph passes through the point (0,1)
• The range is greater than 0
• The domain is all real numbers
• The graph is decreasing
• The graph is a curved line
• As x approaches positive infinity, the graph is asymptotic to the x-axis
• As x approaches negative infinity, the graph grows without bound
• The graph is smooth
• The graph is continuous

There are a few basic rules to remember when working with exponential functions.

Rule 1: To multiply exponential functions with the same base, add the exponents. For instance, if we have to multiply two exponential functions that have a base of 2, we can write it as 2^x * 2^y = 2^(x+y).

Rule 2: To divide exponential functions with the same base, subtract the exponents. For instance, if we have to divide two exponential functions that have a base of 3, we can write it as 3^x / 3^y = 3^(x-y).

Rule 3: To raise an exponential function to a power, multiply the exponents. For example, if we have to raise an exponential function with a base of 4 to the third power, we can write it as (4^x)^3 = 4^(3x).

Rule 4: An exponential function with a base of 1 is always equal to 1. For instance, 1^x = 1 regardless of the value of x.

Rule 5: An exponential function with a base of 0 equals 0 for positive exponents. For example, 0^x = 0 whenever x is positive (0^0 and negative powers of 0 are undefined).
Exponential functions are usually used to represent exponential growth. As the variable increases, the value of the function rises at an ever-increasing pace.

Example 1
Let's look at the example of the growth of bacteria. If we have a colony of bacteria that doubles every hour, then at the end of the first hour, we will have twice as many bacteria. At the end of hour two, we will have 4 times as many bacteria (2 x 2). At the end of the third hour, we will have 8 times as many bacteria (2 x 2 x 2). This rate of growth can be represented using an exponential function as follows:
f(t) = 2^t
where f(t) is the number of bacteria at time t and t is measured in hours.

Example 2
Exponential functions can also represent exponential decay. If we have a radioactive material that decays to half its amount every hour, then at the end of one hour, we will have half as much substance. After the second hour, we will have a quarter as much material (1/2 x 1/2). After hour three, we will have 1/8 as much substance (1/2 x 1/2 x 1/2). This can be shown using an exponential equation as follows:
f(t) = (1/2)^t
where f(t) is the amount of substance at time t and t is measured in hours.

As shown, both of these examples follow a similar pattern, which is why they can be represented using exponential functions. In fact, any constant-ratio rate of change can be denoted using exponential functions. Recall that in exponential functions, the exponent is the variable while the base remains constant. This means that any growth or decay in which the base varies is not an exponential function. For example, in the case of compound interest, the interest rate stays the same while the principal changes over regular time periods. An exponential function can be graphed using a table of values.
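The two examples above can be sketched directly: f(t) = 2^t and f(t) = (1/2)^t evaluated at whole hours.

```python
def bacteria(t):
    return 2 ** t      # population doubles every hour, starting from 1

def substance(t):
    return 0.5 ** t    # half of the material remains after each hour

print([bacteria(t) for t in range(4)])    # [1, 2, 4, 8]
print([substance(t) for t in range(4)])   # [1.0, 0.5, 0.25, 0.125]
```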
To get the graph of an exponential function, we must plug in different values for x and compute the corresponding values for y. Let us check out the following example.

Example 1
Graph this exponential function: y = 3^x
To start, let's make a table of values:
x = 0, 1, 2, 3
y = 1, 3, 9, 27
As you can see, the values of y rise very quickly as x rises. If we were to plot this exponential function on a coordinate plane, the graph would be a curved line that rises from left to right, getting steeper as it continues.

Example 2
Graph the following exponential function: y = (1/2)^x
First, let's make a table of values:
x = 0, 1, 2, 3
y = 1, 0.5, 0.25, 0.125
As shown, the values of y decrease very rapidly as x rises. The reason is that 1/2 is less than 1. If we were to plot the x-values and y-values on a coordinate plane, we would get a decay function: a curved line that gets lower from left to right and gets flatter as it goes.

The Derivative of Exponential Functions
The derivative of an exponential function f(x) = a^x is d(a^x)/dx = a^x ln(a). Only for the natural base e does the derivative of the function equal the function itself: if f(x) = e^x, then f'(x) = e^x = f(x).

Exponential Series
The exponential series is a power series whose terms are the powers of an independent variable. The exponential series is:
e^x = 1 + x + x^2/2! + x^3/3! + … = Σ x^n/n!

Grade Potential Can Help You Succeed at Exponential Functions
If you're struggling to understand exponential functions, or merely require a little extra support with math in general, consider working with a tutor. At Grade Potential, our Brooklyn math tutors are experts at what they do and can supply you with the one-on-one attention you need to thrive. Call us at (917) 809-6592 or contact us today to find out more about how we can help you reach your academic potential.
Circle Worksheets

Embrace extensive practice with our circle worksheets. Start by identifying parts of a circle and understanding the relation between radius and diameter, and progress to calculating the area and circumference, and so on. Use the answer key to check solutions. Amp up your practice sessions with our free worksheets!

Exclusive Circle Worksheets

Pin up these colorful and engaging charts in your classroom or at home to assist young learners in identifying the different parts of a circle. Reviewing regularly is important to effective learning.

The words in the box represent different parts of a circle. Identify what each term refers to in relation to the circle, and write the correct word next to each corresponding part of the circle.

Identifying Parts of Circles | Easy
Monitor children in grade 6 and grade 7 as they solve this easy exercise featured in our parts of a circle worksheet. They will practice identifying the center, diameter, and a few other parts in every figure.

Identifying Parts of Circles | Moderate
Introduce our pdf resource featuring moderately difficult exercises, and let young learners take on the challenge of accurately recognizing the center, chord, radius, tangent, diameter, and secant of a circle.

Finding Radius and Diameter | Whole Numbers
Observe the circle and identify which measure is given. If the radius is provided, multiply it by two to obtain the diameter. If the diameter is provided, divide it by two to find the radius.

Identifying Radius or Diameter - Grid | Level 1
Depicting figures in grids, these worksheets on finding the radius and diameter of a circle task 7th grade and 8th grade children with determining the radius or diameter using the given common unit.

Identifying Radius or Diameter - Grid | Level 2
Upgrade your efficiency in identifying the radius and diameter of a circle to proficiency with this moderately difficult exercise.
Determine the radius or diameter of circles using the given varying units.

Finding Radius and Diameter | Decimals
Excel in calculating the radius and diameter of a circle with these printable practice sheets featuring decimal dimensions. When converting between radius and diameter, use the formulas: 𝑑 = 2𝑟 and 𝑟 = 𝑑/2.

Area and Circumference Worksheets
Check out our area and circumference of circle worksheets to become proficient in calculating the area and circumference of circles. Practice finding a circle's circumference from its area, and vice versa.

Arc Length of a Sector Worksheets
Can you find the arc length (L) of a circular sector? With our arc length of a sector worksheets, you can calculate it using the angle θ the arc subtends. Use the formula L = rθ, where r is the sector's radius.
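The conversion formulas d = 2r and r = d/2, and the arc-length formula L = rθ, are one-liners; a small sketch (the sample values are made up):

```python
import math

def diameter(radius):
    return 2 * radius        # d = 2r

def radius(diameter_):
    return diameter_ / 2     # r = d/2

def arc_length(r, theta):
    return r * theta         # L = r*theta, with theta in radians

print(diameter(7))                  # 14
print(radius(25))                   # 12.5
print(arc_length(3, math.pi / 2))   # quarter-turn arc of a radius-3 sector
```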
OpenStax College Physics for AP® Courses, Chapter 30, Problem 34 (Problems & Exercises)

Some of the most powerful lasers are based on the energy levels of neodymium in solids, such as glass, as shown in Figure 30.65. (a) What average wavelength light can pump the neodymium into the levels above its metastable state? (b) Verify that the 1.17 eV transition produces $1.06\textrm{ }\mu\textrm{m}$ radiation.

Figure 30.65 Neodymium atoms in glass have these energy levels, one of which is metastable. The group of levels above the metastable state is convenient for achieving a population inversion, since photons of many different energies can be absorbed by atoms in the ground state.

Question by is licensed under CC BY 4.0

Final Answer
a. $590\textrm{ nm}$
b. $1.06\textrm{ }\mu\textrm{m}$

Solution video
OpenStax College Physics for AP® Courses, Chapter 30, Problem 34 (Problems & Exercises)

Video Transcript
This is College Physics Answers with Shaun Dychko. This is an energy level diagram for neodymium and question (a) is asking what wavelength of light would be needed to stimulate an electron from the ground state up into this group that's above the metastable state? So this difference in energy is 2.1 electron volts. So we'll use this formula which says the energy of a photon is Planck's constant times speed of light divided by the wavelength and we will multiply both sides by λ over energy to solve for the wavelength needed and hc is 1240 electron volt nanometers and we will divide that by the energy of the photon, which is 2.1 electron volts and that will work out to 590 nanometers. So a photon that's incident with this wavelength— 590 nanometers— is capable of stimulating an electron from the ground state up into this group up here because all the photon energy will be absorbed by the electron and thereby increase the electron's energy. Okay!
Part (b) says verify that the transition from the metastable second state to the first state results in a photon with a wavelength of 1.06 micrometers. So we'll use the same formula for the wavelength so that's 1240 electron volt nanometers divided by 1.17 electron volts and that is 1059.8 nanometers which we then convert into micrometers and sure enough, it's 1.06 micrometers.
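Both parts use the same relation, λ = hc/E, with hc ≈ 1240 eV·nm as in the transcript; a quick sketch:

```python
HC_EV_NM = 1240.0   # Planck's constant times the speed of light, in eV*nm (rounded)

def wavelength_nm(energy_ev):
    # lambda = hc / E for a photon of energy E
    return HC_EV_NM / energy_ev

print(round(wavelength_nm(2.1)))              # part (a): ~590 nm pump light
print(round(wavelength_nm(1.17) / 1000, 2))   # part (b): ~1.06 micrometers
```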
Probability Distribution Web Have you ever wondered how to get from a normal distribution ("bell curve") to a uniform distribution ("flat curve")? If so, this article was meant for you! Probability distributions are often related to each other. Sometimes you can apply a transformation, or take a limit, or make a ratio, or let some parameters equal a certain value to move from one probability distribution to another. For example, probably the most well known to students is, if you take a Z transformation, Z = (x-mu)/sigma, you move from a normal distribution with mean mu and standard deviation sigma to a standard normal distribution with mean 0 and standard deviation 1. I've put two very useful "webs" below. The first ones are a small web from the great book Statistical Inference by Casella and Berger. The second ones are a more detailed web from The American Statistician, February 2008, Vol 62, No 1. I've also linked to the .pdf versions of these files here and here. Please let me know if you find these useful.
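The Z transformation mentioned above, Z = (x-mu)/sigma, is a two-line computation; a sketch standardizing a small sample (the data values are made up):

```python
def z_scores(xs):
    # Standardize a sample: z = (x - mean) / sd, giving mean 0 and sd 1
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5   # population sd
    return [(x - mean) / sd for x in xs]

zs = z_scores([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(sum(zs) / len(zs))   # the standardized sample has mean 0
```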
Can I hire someone for C programming assignments that involve developing algorithms for computational sociology simulations for social impact assessment? | Hire Someone To Do C Programming Assignment

Can I hire someone for C programming assignments that involve developing algorithms for computational sociology simulations for social impact assessment?

My passion is in developing analytical tools for those things. For me, an objective test of my content is to determine whether I should hire someone for work involving computer science. The computer scientist is more ethical if he/she treats humans with the same degree of physical and emotional comfort. The human mind uses machines to interact with other machines, which is to employ computerized methods in a complex system. It takes a lot of mathematical, artistic, mechanical, and philosophical thinking (since the tools for some research experiments are not precisely there yet to accomplish some tasks) to analyze the data of a human-operated system, making two arguments that I think a simple question from a school might make clear at the beginning. (See his article on Inference in the Computer Science Game and SSC Game). Here too, I’m talking about computers; computerising systems; data sciences. The first part of my query is whether I should hire someone for work involving computer science. What would you believe? I’m not talking about a guy like Alvy or Tim Ferrwell or Steve Carell, the technical guy; I’m talking about someone who knows how to figure out a computer model for each and every other specialty I’ve ever met, and who knows how to construct program templates for more or less simple purposes. That would save me a lot of time and a high level of thought.
But I believe it is also a question of whether specific tasks are useful to the human designer who wants to develop models for things that are expected to be easier to predict than models we can collect. Or an equivalent question that one could answer in terms of the sort of reasoning one could devise to work from. And if you have the resources, there are a lot of systems that are in need of such development. Now my problem is: what if you hire a high-level research engineer and want to create

Can I hire someone for C programming assignments that involve developing algorithms for computational sociology simulations for social impact assessment? Thanks for the link. However, please note that, looking over your e-mails, I know we are too lazy to comment. I am running an e-mail function that asks you how to get a set of graphs. Without any input I would need at least 40,000 of them to do that. Can you send me a link to a particular set of graphs? In my example, using the default set of graphs, you will need 30,000.00 of them. If I am on the right track and you give 10,000, I will work on my set. Of course, I also need at least 300,000 of them if I do get at least 200,000.00 of the same set of graphs. This is because I am using A^2 to compute the adjacency matrix for graphs, and this approach does not perform as well as your (now) suggested calculations. Thank you for the reply; I appreciate it. I need 10,000+ for my set. Where do I start taking these calculations? Thanks. If there are 3,000 variables for your set, and more than 20,000 variables for the graph, you should be able to perform a Dijkstra test assuming 200,000 variables. More than 20,000 variables will be required, well worth showing. Hi, I have done them.
The problem is that your number of variables is too large to be able to do the calculations, and for that number of graphs you have to go through 70,890,981 nodes. This number is extremely large for the time available. But I can do this simply; I can save up to 10,000 graphs and it will be fast. Just let me know if you can help me. Based on your example, these are the 3,000 variables that I would need. In the graph above, you have 50,000,000 nodes, and in

Can I hire someone for C programming assignments that involve developing algorithms for computational sociology simulations for social impact assessment? If one accepts that probability (or, more generally, probability of prediction) is related to the number of people working in a certain trade environment, then one would expect the answer to be positively valuing the probability of one's actions to a statistically fair chance (Gersky), rather than a proportion of likely (Uvishnu) to the chance of a different trade environment (Prasad). I assume (to use R) that probability data can contain both very high (and extreme) events and high (and extreme) events themselves. Anyway, that would mean you'd be giving one a decent account of predictive power over course and possibly far finer (perhaps similar to random guessing but higher) instances. I'll let you go with something along the same lines, so then the point will come to you: we would, in my opinion, have another round of testing, which would help you find a theory, learn things about mathematics, and then maybe not get much out of it yet. If there's anything to tie a theory to, it's the way you've already described the method: pick a theory that does what you say it should do from the available data.
For example, there's something from (pseudo)Kernell Feller, who showed not just theory but a lot of practical reasoning to help get right on with proving this inference (I'll summarize my own work here), which is sometimes even the most interesting thing that got my mind right. Then (probably on other subjects; I'd prefer your thoughts on the question), from what you've said, we'd not find what looks like it's statistically just statistical probability based, but rather non-quantifiable (inferential). So we think on that theory, what we'd pull at (a theory that does what I said it should do), and what we get as the analysis comes down from there as the paper goes on is this:
{"url":"https://chelponline.com/can-i-hire-someone-for-c-programming-assignments-that-involve-developing-algorithms-for-computational-sociology-simulations-for-social-impact-assessment","timestamp":"2024-11-05T12:51:15Z","content_type":"text/html","content_length":"171758","record_id":"<urn:uuid:8bd1d342-5fdc-45ad-b151-93034d69dbfa>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00740.warc.gz"}
Square and Multiplication of "n" number of 1

Square of "n" number of 1, like 11, 111, 1111, and multiplication of any digit by "n" number of 1. The conventional approach is lengthy and time-consuming in terms of calculation. But why spend even a couple of seconds when we can give the right answer just by counting? In this Vedic Math trick we simply count how many times the digit 1 repeats itself in the problem, write the counting 1, 2, 3, ... up to that count, and then reverse the counting. For example, what is the square of 111? The conventional method is given below, but from Vedic Mathematics we will just write the answer, i.e., 12321.
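As a quick check, the counting trick can be verified against ordinary multiplication (a small Python sketch; note the pattern only holds up to nine 1s, after which digit carrying breaks the palindrome):

```python
def repunit_square(n):
    """Square of the repunit 11...1 (n ones) via the counting trick:
    write 1, 2, ..., n, then count back down to 1."""
    assert 1 <= n <= 9, "the pattern only holds while no digit carries"
    digits = list(range(1, n + 1)) + list(range(n - 1, 0, -1))
    return int("".join(str(d) for d in digits))

# Cross-check the trick against ordinary multiplication.
for n in range(1, 10):
    repunit = int("1" * n)
    assert repunit_square(n) == repunit ** 2

print(repunit_square(3))  # 111^2 = 12321
```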
{"url":"https://www.quickmath.in/square/square-and-multiplication--of-%22n%22-number-of-1-","timestamp":"2024-11-02T01:50:10Z","content_type":"text/html","content_length":"799890","record_id":"<urn:uuid:0caa2018-3ac1-45ba-8e45-2db61eb60df8>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00285.warc.gz"}
Graphs and Graph Colorings Introduction to Graph Colorings What is a Graph? A graph consists of a set of vertices (or nodes) connected by edges. Graphs can represent various structures in mathematics, computer science, and real-life applications. A graph on 6 vertices and 6 edges (left) and a graph on 5 vertices and 10 edges (right). Network graph of flight routes in the USA. Social media graph, where users are represented by vertices and edges are placed between users who are friends. Graph Colorings A graph coloring is an assignment of colors to the vertices of a graph in such a way that no two adjacent vertices share the same color. We can use numbers or actual colors to color the vertices of a graph to create a graph coloring. Graph colorings have applications in many optimization problems, particularly in scheduling and assignment, as well as in various computer science fields such as data mining, image segmentation, clustering, image capture, and networking. Chromatic Number of a Graph A graph coloring of a graph always exists, as we can simply assign every vertex a distinct color. However, a more interesting problem is to determine the least number of colors in which one can obtain a graph coloring for a particular graph. This is known as the chromatic number of a graph. Application Example: Graph colorings can be applied to scheduling university final exams. Each course is a vertex, and edges connect courses with shared students, indicating their exams cannot overlap. The goal is to assign the fewest time slots (colors) so that no connected courses share the same slot. The minimum number of slots required is the graph’s chromatic number, which helps create an efficient, conflict-free exam schedule. Chromatic Polynomial of a Graph A graph can have multiple valid colorings, depending on the number of available colors and how they are assigned to the vertices.
For any positive integer q, a proper vertex q-coloring, or simply a q-coloring, of a graph G is a graph coloring of G using at most q colors. The chromatic polynomial of G, denoted P(G, q), counts the number of possible q-colorings of G. The mathematician George David Birkhoff first defined the chromatic polynomial in 1912 in his attempt to prove the Four Color Theorem, which states that no more than four colors are required to color the regions of any map so that no two adjacent regions have the same color. Birkhoff showed that P(G, q) is always a monic polynomial in q, the degree of which is the number of vertices in the graph. A four-colored map of the states of the United States (ignoring lakes and oceans). Examples of Chromatic Polynomials: Let q be a positive integer. 1. The Empty Graph, Eₙ, has n vertices and no edges. Since Eₙ has no edges, we can color each of the n vertices with any of the q colors; the choices are completely independent. Thus, P(Eₙ, q) = qⁿ. 2. The Complete Graph, Kₙ, has n vertices and an edge between any two vertices. Label the vertices v₁, …, vₙ, and color them individually in the given order. When we color the first vertex v₁, no other vertices have been colored, and we can use whichever of the q colors we like. When we go to color v₂, we note that it is adjacent to v₁, so whatever color we used for v₁ we cannot use for v₂, leaving q − 1 colors to choose from. Continuing in this way, we see that since all the vertices are adjacent, they must all have different colors. So when we go to color vᵢ, we have already colored v₁, …, vᵢ₋₁ with i − 1 different colors, and we can't use any of these to color vᵢ, leaving q − i + 1 choices. Putting it all together, we see that: P(Kₙ, q) = q(q − 1)(q − 2)⋯(q − n + 1).
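The two worked examples can be checked with a few lines of Python (a sketch; the complete-graph count is just the falling factorial derived above):

```python
def chromatic_empty(n, q):
    # Empty graph: each of the n vertices is colored independently.
    return q ** n

def chromatic_complete(n, q):
    # Complete graph: falling factorial q(q-1)(q-2)...(q-n+1).
    count = 1
    for i in range(n):
        count *= q - i
    return count

# K3 (a triangle) with 3 colors: 3 * 2 * 1 = 6 proper colorings.
assert chromatic_complete(3, 3) == 6
# Fewer colors than vertices leaves K_n with no proper coloring.
assert chromatic_complete(4, 3) == 0
```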
{"url":"https://www.melissamfuentes.com/graphs-and-graph-colorings","timestamp":"2024-11-08T04:52:47Z","content_type":"text/html","content_length":"755795","record_id":"<urn:uuid:80dd3efa-9768-4c9c-ada3-c3cd9237885d>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00049.warc.gz"}
This page highlights several examples on how dynesty can be used in practice, illustrating both simple and more advanced aspects of the code. Jupyter notebooks containing more details are available on Github. Gaussian Shells The “Gaussian shells” likelihood is a useful test case for illustrating the ability of nested sampling to deal with oddly-shaped distributions that can be difficult to probe with simple random-walk MCMC methods. dynesty returns the following posterior estimate: Eggbox The “Eggbox” likelihood is a useful test case that demonstrates Nested Sampling’s ability to properly sample/integrate over multi-modal distributions. The evidence estimates from two independent runs look reasonable: The posterior estimate also looks quite good: Exponential Wave This toy problem was originally suggested by Johannes Buchner for being multimodal with two roughly equal-amplitude solutions. We are interested in modeling periodic data of the form: \[y(x) = \exp\left[ n_a \sin(f_a x + p_a) + n_b \sin(f_b x + p_b) \right]\] where \(x\) goes from \(0\) to \(2\pi\). This model has six free parameters controlling the relevant amplitude, period, and phase of each component (which have periodic boundary conditions). We also have a seventh, \(\sigma\), corresponding to the amount of scatter. The results are shown below. Linear Regression Linear regression is ubiquitous in research. In this example we’ll fit a line \[y = mx + b\] to data where the error bars have been over/underestimated by some fraction of the observed value \(f\) and need to be decreased/increased. Note that this example is taken directly from the emcee documentation. The trace plot and corner plot show reasonable parameter recovery.
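For concreteness, the line-fit setup above typically uses a Gaussian log-likelihood whose variance is inflated by a fraction of the model value. This is a sketch of that idea; the function and parameter names are illustrative, not dynesty's API:

```python
import numpy as np

def log_likelihood(theta, x, y, yerr):
    """Gaussian log-likelihood for the line y = m*x + b, where the quoted
    error bars are assumed off by a fraction f of the model value
    (an emcee-style setup; names here are illustrative)."""
    m, b, log_f = theta
    model = m * x + b
    s2 = yerr ** 2 + np.exp(2 * log_f) * model ** 2
    return -0.5 * np.sum((y - model) ** 2 / s2 + np.log(2 * np.pi * s2))

# Synthetic data with underestimated error bars.
rng = np.random.default_rng(42)
x = np.sort(10 * rng.uniform(size=50))
truth = (-0.96, 4.3, np.log(0.5))
yerr = 0.1 + 0.5 * rng.uniform(size=50)
y = truth[0] * x + truth[1]
y += np.abs(np.exp(truth[2]) * y) * rng.normal(size=50)
y += yerr * rng.normal(size=50)

# The true parameters should fit far better than a clearly wrong line.
assert log_likelihood(truth, x, y, yerr) > log_likelihood((1.0, 0.0, -5.0), x, y, yerr)
```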
One of the key assumptions of Static Nested Sampling (extended by Dynamic Nested Sampling) is that we “shrink” the prior volume \(X_i\) at each iteration \(i\) as \[X_{i} = t_i X_{i-1} ~ , \quad t_i \sim \textrm{Beta}(K, 1)\] where \(t_i\) is a random variable with distribution \(\textrm{Beta}(K, 1)\) and \(K\) is the total number of live points. We can empirically test this assumption by using functions whose volumes can be analytically computed directly from the position/likelihood of a sample. One example of this is the “hyper-pyramid” function from Buchner (2014). We can compare the set of samples generated from dynesty with the expected theoretical shrinkage using a Kolmogorov-Smirnov (KS) Test. When sampling uniformly from a set of bounding ellipsoids, we expect to be more sensitive to whether they fully encompass the bounding volume. Indeed, running on default settings in higher dimensions yields shrinkages that are inconsistent with our theoretical expectation (i.e. we shrink too fast): If bootstrapping is enabled so that ellipsoid expansion factors are determined “on the fly”, we can mitigate this problem: Alternately, using a sampling method other than 'unif' can also avoid this issue by making our proposals less sensitive to the exact size/coverage of the bounding ellipsoids: LogGamma The multi-modal Log-Gamma distribution is useful for stress testing the effectiveness of bounding distributions since it contains multiple modes coupled with long tails.
dynesty is able to sample from this distribution in \(d=2\) dimensions without too much difficulty: Although the analytic estimate of the evidence error diverges (requiring us to compute it numerically following Nested Sampling Errors), we are able to recover the evidence and the shape of the posterior quite well: Our results in \(d=10\) dimensions are also consistent with the expected theoretical value: 200-D Normal We examine the impact of gradients for sampling from high-dimensional problems using a 200-D iid normal distribution with an associated 200-D iid normal prior. With Hamiltonian slice sampling ( 'hslice'), we find we are able to recover the appropriate evidence: Our posterior recovery also appears reasonable, as evidenced by the small snapshot below: We also find unbiased recovery of the mean and covariances in line with the accuracy we’d expect given the amount of live points used: Importance Reweighting Nested sampling generates a set of samples and associated importance weights, which can be used to estimate the posterior. As such, it is trivial to re-weight our samples to target a slightly different distribution using importance reweighting. To illustrate this, we run dynesty on two 3-D multivariate Normal distributions with and without strong covariances. We then use the built-in utilities in dynesty to reweight each set of samples to approximate the other distribution. Given that both samples have non-zero coverage over each target distribution, we find that the results are quite reasonable: Noisy Likelihoods It is possible to sample from noisy likelihoods in dynesty just like with MCMC provided they are unbiased. 
While there are additional challenges to sampling from noisy likelihood surfaces, the largest is the fact that over time we expect the likelihoods to be biased high due to the biased impact of random fluctuations on sampling: while fluctuations to lower values get quickly replaced, fluctuations to higher values can only be replaced by fluctuations to higher values elsewhere. This leads to a natural bias that gets “locked in” while sampling, which can substantially broaden the likelihood surface and thus the inferred posterior. We illustrate this by adding in some random noise to a 3-D iid Normal distribution. While the allocation of samples is almost identical, the estimated evidence is substantially larger and the posterior substantially broader due to the impact of these positive fluctuations. If we know the “true” underlying likelihood, it is straightforward to use Importance Reweighting to adjust the distribution to match. However, in most cases these are not available. In that case, we have to rely on being able to generate multiple realizations of the noisy likelihood at the set of evaluated positions in order to obtain more accurate (but still noisy) estimates of the underlying likelihood. These can then be used to get an estimate of the true distribution through the appropriate importance reweighting.
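The reweighting step described here amounts to multiplying each importance weight by the ratio of the better likelihood estimate to the noisy one. A minimal sketch of that idea (illustrative only; dynesty ships its own utilities, and this is not their API):

```python
import numpy as np

def reweight(logwts, logl_noisy, logl_better):
    """Importance-reweight posterior weights computed under a noisy
    log-likelihood toward a better estimate of it, returning
    normalized weights."""
    logwts = np.asarray(logwts, dtype=float) + (
        np.asarray(logl_better, dtype=float) - np.asarray(logl_noisy, dtype=float)
    )
    w = np.exp(logwts - logwts.max())  # subtract the max for numerical stability
    return w / w.sum()

# A constant offset between the two likelihoods changes nothing after
# normalization, while sample-dependent corrections shift the weights.
logwts = np.array([-3.0, -2.0, -1.0])
logl = np.array([0.5, 1.0, 1.5])
assert np.allclose(reweight(logwts, logl, logl + 7.0), reweight(logwts, logl, logl))
```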
{"url":"https://dynesty.readthedocs.io/en/latest/examples.html","timestamp":"2024-11-07T19:09:20Z","content_type":"text/html","content_length":"30077","record_id":"<urn:uuid:e754d909-bb0d-4849-8c52-406cb8c9b479>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00407.warc.gz"}
Diagonal Explained In geometry, a diagonal is a line segment joining two vertices of a polygon or polyhedron, when those vertices are not on the same edge. Informally, any sloping line is called diagonal. The word diagonal derives from the ancient Greek διαγώνιος diagonios,^[1] "from corner to corner" (from διά- dia-, "through", "across" and γωνία gonia, "corner", related to gony "knee"); it was used by both Strabo^[2] and Euclid^[3] to refer to a line connecting two vertices of a rhombus or cuboid,^[4] and later adopted into Latin as diagonus ("slanting line"). As applied to a polygon, a diagonal is a line segment joining any two non-consecutive vertices. Therefore, a quadrilateral has two diagonals, joining opposite pairs of vertices. For any convex polygon, all the diagonals are inside the polygon, but for re-entrant polygons, some diagonals are outside of the polygon. Any n-sided polygon (n ≥ 3), convex or concave, has n(n − 3)/2 diagonals, as each vertex has diagonals to all other vertices except itself and the two adjacent vertices, or n − 3 diagonals, and each diagonal is shared by two vertices. In general, a regular n-sided polygon has ⌊n/2⌋ − 1 diagonals of distinct length, which follows the pattern 1, 1, 2, 2, 3, 3, ... starting from a square. Regions formed by diagonals In a convex polygon, if no three diagonals are concurrent at a single point in the interior, the number of regions that the diagonals divide the interior into is given by C(n, 4) + C(n, 2) − n + 1. For n-gons with n = 3, 4, ... the number of regions is^[5] 1, 4, 11, 25, 50, 91, 154, 246, ... This is OEIS sequence A006522.^[6] Intersections of diagonals If no three diagonals of a convex polygon are concurrent at a point in the interior, the number of interior intersections of diagonals is given by C(n, 4).^[7] ^[8] This holds, for example, for any regular polygon with an odd number of sides.
The formula follows from the fact that each intersection is uniquely determined by the four endpoints of the two intersecting diagonals: the number of intersections is thus the number of combinations of the vertices four at a time. Regular polygons Although the number of distinct diagonals in a polygon increases as its number of sides increases, the length of any diagonal can be calculated. In a regular n-gon with side length a, the length of the xth shortest distinct diagonal is a·sin((x + 1)π/n) / sin(π/n). This formula shows that as the number of sides approaches infinity, the xth shortest diagonal approaches the length (x + 1)a. Additionally, the formula for the shortest diagonal simplifies in the case of x = 1 to 2a·cos(π/n). If the number of sides is even, the longest diagonal will be equivalent to the diameter of the polygon's circumcircle because the long diagonals all intersect each other at the polygon's center. Special cases include: A square has two diagonals of equal length, which intersect at the center of the square. The ratio of a diagonal to a side is √2. A regular pentagon has five diagonals all of the same length. The ratio of a diagonal to a side is the golden ratio, (1 + √5)/2 ≈ 1.618. A regular hexagon has nine diagonals: the six shorter ones are equal to each other in length; the three longer ones are equal to each other in length and intersect each other at the center of the hexagon. The ratio of a long diagonal to a side is 2, and the ratio of a short diagonal to a side is √3. A regular heptagon has 14 diagonals. The seven shorter ones equal each other, and the seven longer ones equal each other. The reciprocal of the side equals the sum of the reciprocals of a short and a long diagonal. See also: Face diagonal and Space diagonal.
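The diagonal, intersection, and region counts described in the preceding sections can be checked numerically; the closed form used for regions below reproduces OEIS sequence A006522 (a small Python sketch):

```python
from math import comb

def diagonals(n):
    # Each of the n vertices pairs with the n - 3 non-adjacent vertices;
    # dividing by 2 removes the double count.
    return n * (n - 3) // 2

def interior_intersections(n):
    # One intersection per choice of 4 vertices, assuming no three
    # diagonals are concurrent.
    return comb(n, 4)

def regions(n):
    # Regions the diagonals cut the interior into, under the same assumption.
    return comb(n, 4) + comb(n, 2) - n + 1

assert diagonals(4) == 2 and diagonals(6) == 9
assert interior_intersections(5) == 5
assert [regions(n) for n in range(3, 11)] == [1, 4, 11, 25, 50, 91, 154, 246]
```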
A polyhedron (a solid object in three-dimensional space, bounded by two-dimensional faces) may have two different types of diagonals: face diagonals on the various faces, connecting non-adjacent vertices on the same face; and space diagonals, entirely in the interior of the polyhedron (except for the endpoints on the vertices). Higher dimensions The lengths of an n-dimensional hypercube's diagonals can be calculated by mathematical induction. The longest diagonal of a unit n-cube is √n. Additionally, there are 2^(n−1)·C(n, 2) of the shortest diagonal, of length √2. As an example, a 5-cube would have the diagonals: Diagonal length | Number of diagonals √2 | 160 √3 | 160 2 | 80 √5 | 16 Its total number of diagonals is 416. In general, an n-cube has a total of 2^(n−1)·(2^n − n − 1) diagonals. This follows from the more general form v(v − 1)/2 − e, which describes the total number of face and space diagonals in convex polytopes. Here, v represents the number of vertices and e represents the number of edges. By analogy, the subset of the Cartesian product X×X of any set X with itself, consisting of all pairs (x,x), is called the diagonal, and is the graph of the equality relation on X or equivalently the graph of the identity function from X to X. This plays an important part in geometry; for example, the fixed points of a mapping F from X to itself may be obtained by intersecting the graph of F with the diagonal. In geometric studies, the idea of intersecting the diagonal with itself is common, not directly, but by perturbing it within an equivalence class. This is related at a deep level with the Euler characteristic and the zeros of vector fields. For example, the circle S^1 has Betti numbers 1, 1, 0, 0, 0, and therefore Euler characteristic 0. A geometric way of expressing this is to look at the diagonal on the two-torus S^1xS^1 and observe that it can move off itself by the small motion (θ, θ) to (θ, θ + ε).
In general, the intersection number of the graph of a function with the diagonal may be computed using homology via the Lefschetz fixed-point theorem; the self-intersection of the diagonal is the special case of the identity function.
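The hypercube diagonal tally quoted earlier (which totals 416 for the 5-cube, including the diagonals of length √4 = 2) follows from counting coordinate differences, sketched here for a unit n-cube:

```python
from math import comb

def ncube_diagonal_counts(n):
    """Diagonals of the unit n-cube, grouped by length sqrt(k): endpoints
    differing in exactly k coordinates are sqrt(k) apart, and k = 1 gives
    the edges rather than diagonals."""
    # 2**n vertices, each with C(n, k) partners at distance sqrt(k);
    # halve the product to avoid counting each segment twice.
    return {k: 2 ** (n - 1) * comb(n, k) for k in range(2, n + 1)}

counts = ncube_diagonal_counts(5)
assert counts == {2: 160, 3: 160, 4: 80, 5: 16}
assert sum(counts.values()) == 416 == 2 ** 4 * (2 ** 5 - 5 - 1)
```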
{"url":"https://everything.explained.today/Diagonal/","timestamp":"2024-11-04T08:21:56Z","content_type":"text/html","content_length":"25279","record_id":"<urn:uuid:134d8286-8e8e-4bb8-9195-1a365fadd6e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00797.warc.gz"}
Fisher's Exact Test Calculator for a 2x2 Contingency Table

This calculator will compute both the exact hypergeometric probability and the exact two-tailed probability of obtaining a distribution of values in a 2x2 contingency table using Fisher's exact test, given the number of observations in each cell. Please enter the necessary parameter values, and then click 'Calculate'.
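The same computation is available programmatically; for example, SciPy's `fisher_exact` returns the sample odds ratio and the exact two-tailed p-value (the 2x2 table below is hypothetical):

```python
from scipy.stats import fisher_exact

# A hypothetical 2x2 contingency table: rows are groups, columns are outcomes.
table = [[8, 2],
         [1, 5]]

odds_ratio, p_two_tailed = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, two-tailed p = {p_two_tailed:.4f}")
```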
{"url":"https://danielsoper.com/statcalc/calculator.aspx?id=29","timestamp":"2024-11-15T04:07:54Z","content_type":"text/html","content_length":"32985","record_id":"<urn:uuid:9775e74a-e955-44cf-a1fe-8d04310f7735>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00760.warc.gz"}
1992 BMW 525i Tire Size There are multiple tire sizes for your 1992 BMW 525i that depend upon the trim level. Look for your trim level below to get a color-coded explanation of your tire size. Then pick the best tire size for your 1992 BMW 525i. 205/65R15 94H Simplified Size: 205-65-15 Simplified size is useful for shopping and buying tires. The original tire size for your 1992 BMW 525i is 205/65R15 94H. A color-coded explanation of the 1992 BMW 525i's tire size is shown below. 205 This number indicates that your tire has a width of 205 millimeters. 15 The tire size was designed to fit rims or wheels that are 15 inches in diameter. 94 This tire has a load index of 94, which means it's capable of carrying a load of 1474 pounds (670 kg) or less. A higher number means the tire can carry more weight. A lower number means the tire can carry less weight. 65 This number means that your tire has an aspect ratio of 65%. In other words, your tire's sidewall height (from the edge of the rim to the tire's tread) is 65% of the width. In this case, the sidewall height works out to be 133 millimeters. This letter denotes how your tire was constructed. Radial is the standard construction method for about 99% of all tires sold today. R R Radial B Bias Belt D Diagonal This tire has a speed rating of H, which means 130 mph (210 km/h) is the maximum speed that can be sustained for 10 minutes. A higher speed becomes dangerous. Trim Options: 225/60R15 95V Simplified Size: 225-60-15 Simplified size is useful for shopping and buying tires. The original tire size for your 1992 BMW 525i is 225/60R15 95V. A color-coded explanation of the 1992 BMW 525i's tire size is shown below. 225 This number indicates that your tire has a width of 225 millimeters. 15 The tire size was designed to fit rims or wheels that are 15 inches in diameter. 95 This tire has a load index of 95, which means it's capable of carrying a load of 1518 pounds (690 kg) or less. A higher number means the tire can carry more weight.
A lower number means the tire can carry less weight. 60 This number means that your tire has an aspect ratio of 60%. In other words, your tire's sidewall height (from the edge of the rim to the tire's tread) is 60% of the width. In this case, the sidewall height works out to be 135 millimeters. This letter denotes how your tire was constructed. Radial is the standard construction method for about 99% of all tires sold today. R R Radial B Bias Belt D Diagonal This tire has a speed rating of V, which means 149 mph (240 km/h) is the maximum speed that can be sustained for 10 minutes. A higher speed becomes dangerous.
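The sidewall arithmetic quoted above is easy to reproduce (a small Python sketch; the 25.4 mm-per-inch rim conversion is standard):

```python
def sidewall_height_mm(section_width_mm, aspect_ratio_percent):
    # The aspect ratio is the sidewall height as a percentage of section width.
    return section_width_mm * aspect_ratio_percent / 100.0

def overall_diameter_mm(section_width_mm, aspect_ratio_percent, rim_in):
    # Rim diameter (inches -> mm) plus one sidewall above and one below the rim.
    return rim_in * 25.4 + 2 * sidewall_height_mm(section_width_mm, aspect_ratio_percent)

# The two 1992 525i fitments described above:
assert round(sidewall_height_mm(205, 65)) == 133
assert sidewall_height_mm(225, 60) == 135.0
```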
{"url":"https://www.sizemytires.com/vehicle/1992/bmw/525i","timestamp":"2024-11-08T11:02:49Z","content_type":"text/html","content_length":"28452","record_id":"<urn:uuid:32286655-cd0a-4a60-a4dd-dd23f382d09f>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00722.warc.gz"}
Coronavirus pleomorphism The coronavirus is always idealized as a spherical capsid with radially protruding spikes. However, histologically, in the tissues of infected patients, capsids in cross section are elliptical, and only sometimes spherical [Neuman et al., “Supramolecular architecture of severe acute respiratory syndrome coronavirus revealed by electron cryomicroscopy,” J Virol, 80, 7918 (2006)]. This capsid ellipticity implies that coronaviruses are oblate or prolate or both. We call this diversity of shapes, pleomorphism. Recently, the rotational diffusivity of the spherical coronavirus in suspension was calculated, from first principles, using general rigid bead-rod theory [Kanso et al., “Coronavirus rotational diffusivity,” Phys Fluids 32, 113101 (2020)]. We did so by beading the spherical capsid and then also by replacing each of its bulbous spikes with a single bead. In this paper, we use energy minimization for the spreading of the spikes, charged identically, over the oblate or prolate capsids. We use general rigid bead-rod theory to explore the role of such coronavirus cross-sectional ellipticity on its rotational diffusivity, the transport property around which its cell attachment revolves. We learn that coronavirus ellipticity drastically decreases its rotational diffusivity, be it oblate or prolate. The coronavirus is thought of as a spherical capsid with radially protruding spikes. However, histologically, in the tissues of infected patients, capsids in cross section are aspherical, roughly elliptical, and only sometimes spherical. For instance, its capsid may be elliptical when microtomed in preparation for microscopy (see all 35 panels of Fig. 2 of Ref. 1, which we reproduce here in Fig. 1). This capsid ellipticity implies that coronaviruses are oblate or prolate or both. We call this diversity of shapes, pleomorphism. We know of no live microscopy of the coronavirus, in suspension or otherwise. 
We thus arrive at our understanding of pleomorphism through the colored lenses of electron microscopy, be it cryogenic fractography^1 or particle staining (Fig. 3. of Ref. 2 ). This is true whether the coronavirus capsid is axisymmetric or not. In cross section, as in microscopic imagery of microtomed coronavirus-infected tissue, a suspension of spherical capsids presents as circles, just circles. Further, when axisymmetric ellipsoidal coronavirus is sectioned, normal to its major axis (prolate), or to its minor axis (oblate), these cuts are also circular. Hence, in cross section, a suspension of aspherical capsids presents as both circular and acircular loops. More specifically, a suspension of ellipsoidal capsids presents as both circles and ellipses. There is thus still much to be learned experimentally about the shape of the coronavirus. This paper is devoted to how its capsid shapes affect its transport properties. General rigid bead-rod theory relies exclusively on macromolecular orientation to explain the rheological properties.^3–7 This distinguishes general rigid bead-rod theory from its competing approaches, which include reptation or disentanglement. We refer the reader to Ref. 8 for the detailed derivation of the general rigid bead-rod theory, and specifically, to Sec. III of Ref. 8. We define the characteristic length, $L$, of our coronavirus bead-rod models as the separation of nearest bead centers. Our general rigid bead-rod theory symbols, dimensional and non-dimensional respectively, are listed in Tables I and II, which follow those of the corresponding textbook treatments (EXAMPLE 16.7-1 of Ref. 9 or EXAMPLE 13.6-1 of Ref. 10). We are attracted to general rigid bead-rod theory first, for its flexibility. We are attracted to general rigid bead-rod theory second, for the accuracy of its simplest special case, the rigid dumbbell, for which many rheological material functions are properly predicted [see romanettes (i)–(xvi) of Sec. I of Ref. 11]. 
TABLE I.

Name | Unit | Symbol
Angular frequency | 1/t | ω
Augmented energy functional | M L²/t² | Ê
Bead friction coefficient | M/t | ζ
Capsid radius | L | r_c
Complex viscosity | M/(L t) | η*
Dielectric permittivity | T⁴ I²/(M L³) | ϵ
Kinetic molecular energy per molecule | M L²/t² | kT
Length of the spike of each peplomer | L | ℓ
Minus the imaginary part of the complex viscosity | M/(L t) | η″
Number of dumbbells per unit volume | 1/L³ | n
Point charge | A s | Q
Real part of the complex viscosity | M/(L t) | η′
Relaxation time of rigid dumbbell | t | λ₀
Relaxation time of solution | t | λ
Rotational diffusivity | 1/t | D_r
Rotatory diffusivity | L²/t | D_rot
Shear rate amplitude | 1/t | γ̇₀
Solvent viscosity | M/(L t) | η_s
Total electrostatic energy | M L²/t² | E
Translational diffusivity | L²/t | D_tr
Virus radius | L | r_v
Viscosity, zero-shear | M/(L t) | η₀
Zero-shear first normal stress difference | M/L | Ψ₁,₀

TABLE II.

Name | Symbol
Aspect ratio | $\varepsilon \equiv c/a - 1$
Capsid sphere | $C$
Coefficient in Eqs. (9) and (10) | $a$
Coefficient in Eqs. (9) and (10) | $b$
Coefficient in Eqs. (9) and (10) | $\nu$
Deborah number, oscillatory shear | $De \equiv \lambda\omega$
Sphere | $S$
Total number of beads | $N$
Total number of capsid beads | $N_c$
Total number of peplomers | $N_p$
Volume fraction | $\Phi$
Weissenberg number | $Wi \equiv \lambda\dot{\gamma}_0$

Recently, we calculated the rotational diffusivity of the spherical coronavirus in suspension, from first principles, using general rigid bead-rod theory.^{12–14} We did so by beading the spherical capsid and then also by replacing each of its bulbous spikes with a single bead (see Fig. 5 of Ref. 12). In this paper, we use energy minimization for the spreading of the spikes, charged identically, over the oblate or prolate capsids (Sec. IV). We use general rigid bead-rod theory to explore the role of such coronavirus cross-sectional ellipticity in its rotational diffusivity, the transport property around which its cell attachment revolves. We learn that coronavirus ellipticity decreases its rotational diffusivity for both oblate and prolate capsids. In the tradition of the transport sciences, we define the rotatory diffusivity (see Footnote 2 of p. 62 of Ref.
9). For any axisymmetric macromolecule, general rigid bead-rod theory gives this rotatory diffusivity, $D_{rot}$, which has the dimensions of diffusivity and which is four times the translational diffusivity, $D_{tr}$. In this paper, we depart from said transport tradition of using the rotatory diffusivity, $D_{rot}$, and frame our results in terms of the rotational diffusivity, $D_r$, of pleomorphic coronavirus. In general rigid bead-rod theory, we construct macromolecules from sets of beads whose positions, relative to one another, are fixed. Our macromolecular bead-rod models of the pleomorphic coronavirus particles are suspended in a Newtonian solvent. In this work, we neglect interactions of the solvent velocity fields, be they between nearest beads^{15,16} or nearest macromolecules. With general rigid bead-rod theory, we thus locate beads to sculpt an approximation of the pleomorphic coronavirus particle shapes. In this way, using general rigid bead-rod theory, we can model any virus macromolecular architecture (see Fig. 9 of Ref. 11). We use Eqs. (3)–(13) of Ref. 12 for the method of computing the rotational diffusivity (see Footnote 2 of p. 62 of Ref. 9), which we will use for our results below. In this paper, we focus on small-amplitude oscillatory shear flow (SAOS). For this flow field, the molecular definition of small amplitude from general rigid bead-rod theory yields a condition whose left side is the macromolecular Weissenberg number [Eq. (32) of Ref. 11]. The polymer contributions to the complex viscosity^{17,18} are [Eqs. (40) and (41) of Ref. 11]

$\frac{\eta' - \eta_s}{\eta_0 - \eta_s} = \left(\frac{1}{2}\frac{b}{a\nu} + 1\right)^{-1}\left[\frac{1}{2}\frac{b}{a\nu} + \frac{1}{1 + (\lambda\omega)^2}\right], \qquad (9)$

$\frac{\eta''}{\eta_0 - \eta_s} = \left(\frac{1}{2}\frac{b}{a\nu} + 1\right)^{-1}\frac{\lambda\omega}{1 + (\lambda\omega)^2}, \qquad (10)$

where $\lambda\omega$ is the Deborah number. Equations (9) and (10) each capture non-Newtonian behavior: (i) Eq. (9) captures the descent of $\eta'(\omega)$, and (ii) Eq. (10) captures the ascent of $\eta''(\omega)$ from the origin.
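To make the shapes of Eqs. (9) and (10) concrete, here is a brief numerical sketch in Python. The coefficient ratio $b/(a\nu) = 3$ below is an illustrative assumption chosen to reproduce the familiar rigid-dumbbell weights 3/5 and 2/5; for the pleomorphic models, the coefficients $a$, $b$, and $\nu$ come from Table III.

```python
import numpy as np

def complex_viscosity(De, b_over_anu):
    """Polymer contributions to the complex viscosity, Eqs. (9) and (10),
    versus Deborah number De = lambda*omega."""
    k = 1.0 / (0.5 * b_over_anu + 1.0)    # common factor (b/(2 a nu) + 1)^(-1)
    real = (1.0 - k) + k / (1.0 + De**2)  # (eta' - eta_s)/(eta_0 - eta_s)
    imag = k * De / (1.0 + De**2)         # eta''/(eta_0 - eta_s)
    return real, imag

De = np.logspace(-2, 2, 9)                # sweep the Deborah number
real, imag = complex_viscosity(De, 3.0)   # rigid-dumbbell special case: k = 2/5
```

At $De \to 0$ the real part tends to 1 (the zero-shear plateau), it then descends monotonically to a high-frequency plateau, and the imaginary part rises from the origin and peaks at $De = 1$: precisely the descent and ascent described above.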
In this paper, we plot the real and minus the imaginary parts of the shear stress responses to small-amplitude oscillatory shear flow as functions of frequency, following Ferry (Secs. 2.A.4–2.A.6 of Ref. 19) or Bird et al. (Sec. 4.4 of Ref. 20). As $\omega \rightarrow 0$, for the polymer contribution to the zero-shear viscosity, we get

$\frac{\eta_0 - \eta_s}{nkT\lambda} = \frac{a\nu}{2} + b = b\left(1 + \frac{2b}{a\nu}\right)\left(\frac{2b}{a\nu}\right)^{-1}, \qquad (11)$

and, for the zero-shear first normal stress difference coefficient,

$\frac{\Psi_{1,0}}{\lambda(\eta_0 - \eta_s)} = 2\left(\frac{1}{2}\frac{b}{a\nu} + 1\right)^{-1}, \qquad (12)$

which we use in the table of Sec. V below. Coronavirus peplomers are charged identically and anchored into a lipid bilayer, and they are thus displaced by the repulsions of their nearest neighbors. In this paper, we represent peplomers with single beads. We then locate them by applying the energy minimization scheme of Sec. IV of Ref. 13 to singly beaded peplomers repelling one another over the ellipsoidal surfaces of the pleomorphic virus. From the literature, we learn that, if oblate, the coronavirus shape falls in the range of Eq. (13) [Figs. 1(a) and 2 of Ref. 1] and, if prolate, in the range of Eq. (14) [Figs. 1(a) and 2 of Ref. 1]. For this work, we therefore cover both ranges [Eqs. (13) and (14)], where $c$ is the ellipsoidal whole-particle length along the $\delta_3$ molecular axis, and $a$, along $\delta_1$. By whole-particle, we mean capsid plus peplomer. Specifically, for this paper, we straddle both ranges, Eqs. (13) and (14), with the following set of pleomorphic coronavirus aspect ratios:

$\frac{c}{a} = \frac{2}{3}, \frac{4}{5}, 1, \frac{5}{4}, \frac{3}{2}, \qquad (15)$

for which, after energy minimization, we construct the bead-rod models of Figs. 2–6 (Multimedia views) for $N_c = 256$ and $N_p = 74$. For $N_c = 256$, we rely on the capsid beading study (see Subsection VII A of Ref. 12). For the average number of peplomers, $N_p = 74$, we rely on our previous literature review (see Table X of Ref. 12).
We next employ the framework developed by Chaurasia et al.^{21} (see also Chaurasia^{22}) to find equilibrium solutions of systems of flexible structures, specifically charged elastic loops constrained to a sphere. We do so for identical point charges spreading over the surfaces of oblate and prolate ellipsoids.

A. Kinematics

Let $C$ be the ellipsoidal capsid with axis lengths $a$, $b$, and $c$. Let $o$ be the origin of a Cartesian coordinate system with orthonormal basis $\{\mathbf{i}, \mathbf{j}, \mathbf{k}\}$, such that the center of $C$ coincides with the origin $o$ and the axes of $C$ lie along the vectors $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$, respectively. Let $N_p$ be the number of single-bead peplomers of identical spike length $\ell$ attached along the normal to the ellipsoid $C$ at the point of contact. The point of contact of the spike of the $i$th bead with $C$ is $x_i\mathbf{i} + y_i\mathbf{j} + z_i\mathbf{k}$, where $i = 1, 2, \ldots, N_p$. The quantities $x_i$, $y_i$, and $z_i$ must then satisfy

$\frac{x_i^2}{a^2} + \frac{y_i^2}{b^2} + \frac{z_i^2}{c^2} = 1. \qquad (18)$

We assume that each peplomer spike is attached to the ellipsoidal capsid $C$ along the normal to $C$. The normal vector at the point of contact $x_i\mathbf{i} + y_i\mathbf{j} + z_i\mathbf{k}$ of the spike of the $i$th peplomer to $C$ is given by

$\mathbf{n}_i \equiv \frac{x_i}{a^2}\mathbf{i} + \frac{y_i}{b^2}\mathbf{j} + \frac{z_i}{c^2}\mathbf{k}. \qquad (19)$

Thus, introducing a scalar quantity $t_i$, $i = 1, 2, \ldots, N_p$, the position vector $\mathbf{r}_i$ of the $i$th bead is given by

$\mathbf{r}_i \equiv x_i\mathbf{i} + y_i\mathbf{j} + z_i\mathbf{k} + t_i\mathbf{n}_i, \qquad (20)$

where, by using Eq. (19), Eq. (20) simplifies to

$\mathbf{r}_i = x_i\left(1 + \frac{t_i}{a^2}\right)\mathbf{i} + y_i\left(1 + \frac{t_i}{b^2}\right)\mathbf{j} + z_i\left(1 + \frac{t_i}{c^2}\right)\mathbf{k}, \qquad (21)$

where $i = 1, 2, \ldots, N_p$. We note from Eq. (21) that the length of the spike of the $i$th peplomer is given by $|\mathbf{r}_i - (x_i\mathbf{i} + y_i\mathbf{j} + z_i\mathbf{k})| = |t_i\mathbf{n}_i|$, $i = 1, 2, \ldots, N_p$. For simplification, we assume that the length of the spike of each peplomer equals $\ell$; thus, using Eq. (19) for $\mathbf{n}_i$ in $|t_i\mathbf{n}_i|^2 = \ell^2$, the quantities $x_i$, $y_i$, $z_i$, and $t_i$ must satisfy

$t_i^2\left(\frac{x_i^2}{a^4} + \frac{y_i^2}{b^4} + \frac{z_i^2}{c^4}\right) = \ell^2, \qquad (22)$

where $i = 1, 2, \ldots, N_p$. Thus, the position vector $\mathbf{r}_i$, defined by Eq.
(21), of the $i$th peplomer bead of a given spike length $\ell$ is entirely determined by the four scalar quantities $x_i$, $y_i$, $z_i$, and $t_i$ satisfying Eqs. (18) and (22). For an axisymmetric capsid, $a = b$.

B. Energetics

Let each single-bead peplomer be endowed with a point charge $Q$. The total electrostatic energy of the $N_p$ peplomers, constrained to the ellipsoidal capsid $C$, is given by

$E = \frac{Q^2}{4\pi\epsilon}\sum_{i=1}^{N_p}\sum_{j=1, j\ne i}^{N_p}\frac{1}{|\mathbf{r}_i - \mathbf{r}_j|}, \qquad (23)$

where $\epsilon$ is the dielectric permittivity and $\mathbf{r}_i$, defined in Eq. (21), is the position vector of the $i$th peplomer.^{23} We have assumed that each bead is endowed with an identical point charge $Q$ for simplification. The beads therefore repel each other and prefer to distribute themselves as far from one another as possible, to minimize the electrostatic energy $E$, defined in Eq. (23). Using a constrained minimization approach, we find an equilibrium distribution of the beads, defined in Eq. (20), that locally minimizes the energy in Eq. (23) while satisfying the kinematic constraints in Eqs. (18) and (22), for given values of $N_p$. Since the charge $Q$ appears only as a prefactor in Eq. (23), its value plays no role in determining equilibrium solutions. Dropping that prefactor, we define the augmented energy functional

$\hat{E}\left(\{\mathbf{r}_i, \Lambda_i, \lambda_i\}_{i=1,\ldots,N_p}\right) = \sum_{i=1}^{N_p}\sum_{j=1, j\ne i}^{N_p}\frac{1}{|\mathbf{r}_i - \mathbf{r}_j|} + \frac{1}{2}\sum_{i=1}^{N_p}\Lambda_i\left(\frac{x_i^2}{a^2} + \frac{y_i^2}{b^2} + \frac{z_i^2}{c^2} - 1\right) + \frac{1}{2}\sum_{i=1}^{N_p}\lambda_i\left[t_i^2\left(\frac{x_i^2}{a^4} + \frac{y_i^2}{b^4} + \frac{z_i^2}{c^4}\right) - \ell^2\right], \qquad (24)$

with

$|\mathbf{r}_i - \mathbf{r}_j| = \left\{\left[x_i\left(1 + \frac{t_i}{a^2}\right) - x_j\left(1 + \frac{t_j}{a^2}\right)\right]^2 + \left[y_i\left(1 + \frac{t_i}{b^2}\right) - y_j\left(1 + \frac{t_j}{b^2}\right)\right]^2 + \left[z_i\left(1 + \frac{t_i}{c^2}\right) - z_j\left(1 + \frac{t_j}{c^2}\right)\right]^2\right\}^{1/2}, \qquad (25)$

where $\Lambda_i$ and $\lambda_i$ are the Lagrange multipliers introduced to satisfy the kinematic constraints in Eqs. (18) and (22). To find the distribution of the beads locally minimizing the electrostatic energy $E$, defined in Eq.
(23), we differentiate the augmented energy functional, $\hat{E}$, with respect to the scalar quantities $x_i$, $y_i$, $z_i$, and $t_i$, where $i = 1, \ldots, N_p$, resulting in the $4N_p$ equilibrium equations

$\sum_{j=1, j\ne i}^{N_p}\frac{(\mathbf{r}_i - \mathbf{r}_j)\cdot\mathbf{i}\,(1 + t_i/a^2)}{|\mathbf{r}_i - \mathbf{r}_j|^3} - \Lambda_i\frac{x_i}{a^2} - \lambda_i t_i^2\frac{x_i}{a^4} = 0, \qquad (26)$

$\sum_{j=1, j\ne i}^{N_p}\frac{(\mathbf{r}_i - \mathbf{r}_j)\cdot\mathbf{j}\,(1 + t_i/b^2)}{|\mathbf{r}_i - \mathbf{r}_j|^3} - \Lambda_i\frac{y_i}{b^2} - \lambda_i t_i^2\frac{y_i}{b^4} = 0, \qquad (27)$

$\sum_{j=1, j\ne i}^{N_p}\frac{(\mathbf{r}_i - \mathbf{r}_j)\cdot\mathbf{k}\,(1 + t_i/c^2)}{|\mathbf{r}_i - \mathbf{r}_j|^3} - \Lambda_i\frac{z_i}{c^2} - \lambda_i t_i^2\frac{z_i}{c^4} = 0, \qquad (28)$

$\sum_{j=1, j\ne i}^{N_p}\frac{(\mathbf{r}_i - \mathbf{r}_j)\cdot\mathbf{n}_i}{|\mathbf{r}_i - \mathbf{r}_j|^3} - \lambda_i t_i\left(\frac{x_i^2}{a^4} + \frac{y_i^2}{b^4} + \frac{z_i^2}{c^4}\right) = 0, \qquad (29)$

respectively, where $\mathbf{n}_i$ and $\mathbf{r}_i$ are defined by Eqs. (19) and (20). In total, we solve $6N_p$ equations simultaneously, the $4N_p$ equilibrium equations of Eqs. (26)–(29) and the $2N_p$ constraints of Eqs. (18) and (22), to determine the $6N_p$ unknowns $x_i$, $y_i$, $z_i$, $t_i$, $\lambda_i$, and $\Lambda_i$, where $i = 1, \ldots, N_p$. We use the Levenberg–Marquardt algorithm of the MATLAB function fsolve to solve this system of equations with an error tolerance of $10^{-16}$. General rigid bead-rod theory can be used either structure-by-structure (see TABLES V–XIII of Ref. 11) or analytically (see TABLE XV of Ref. 11). For large values of $N_p$, such as $N_p = 74$, Eq. (23) is applied numerically. Thus, the bead positions $\mathbf{r}_i$ are not derived in closed form; rather, we arrive at their floating-point approximations. Our exploration of pleomorphism is thus numerical. Our model, which has an ellipsoidal core, amounts to a modified version of the Thomson problem,^{24} wherein one seeks the state that distributes $N_p$ electrons over a unit sphere as evenly as possible, with minimum electrostatic energy. Wales et al.^{25,26} solved this problem, providing solutions for a large set of values of $N_p$. Our energy minimization accurately recovers the results of Wales et al.^{25,26} for the Thomson solution (energy minimization over a spherical surface), as it should (see Sec. VII of Ref. 12).
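The constrained minimization just described can be illustrated with a far cruder sketch than the paper's MATLAB fsolve (Levenberg–Marquardt) solution of the $6N_p$ equations. The hypothetical Python below parameterizes each charge by a pair of angles, so the ellipsoid constraint of Eq. (18) holds exactly and no Lagrange multipliers are needed, and it simply descends on the Coulomb sum of Eq. (23) with the charge prefactor dropped. It also places the charges on the capsid surface itself (spike length $\ell = 0$), so it illustrates only the energy-minimization idea, not the full spike kinematics.

```python
import numpy as np

def points(th, ph, a, b, c):
    """Cartesian bead positions on the ellipsoid x^2/a^2 + y^2/b^2 + z^2/c^2 = 1."""
    return np.stack([a * np.sin(th) * np.cos(ph),
                     b * np.sin(th) * np.sin(ph),
                     c * np.cos(th)], axis=1)

def energy(x, Np, a, b, c):
    """Coulomb sum of Eq. (23), prefactor dropped, each pair counted once."""
    p = points(x[:Np], x[Np:], a, b, c)
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=2)
    iu = np.triu_indices(Np, k=1)
    return float(np.sum(1.0 / d[iu]))

def spread_charges(Np, a, b, c, steps=300, h=1e-6, seed=1):
    """Descend on the Coulomb energy; the angles keep Eq. (18) satisfied exactly."""
    rng = np.random.default_rng(seed)
    x = np.concatenate([rng.uniform(0.3, np.pi - 0.3, Np),  # polar angles
                        rng.uniform(0.0, 2 * np.pi, Np)])   # azimuthal angles
    for _ in range(steps):
        e_now = energy(x, Np, a, b, c)
        g = np.zeros_like(x)
        for i in range(x.size):                  # finite-difference gradient
            dx = np.zeros_like(x)
            dx[i] = h
            g[i] = (energy(x + dx, Np, a, b, c)
                    - energy(x - dx, Np, a, b, c)) / (2 * h)
        step = 1e-2
        while step > 1e-12 and energy(x - step * g, Np, a, b, c) >= e_now:
            step *= 0.5                          # backtrack until energy drops
        if step <= 1e-12:
            break                                # no further descent found
        x = x - step * g
    return x
```

For an axisymmetric ellipsoid ($a = b$), the converged beads spread as evenly as their repulsions allow, an ellipsoidal analog of the Thomson problem recovered above for the sphere.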
As far as we know, we are the first to perform such an energy minimization over the surface of an ellipsoid. Table III summarizes our results for the characteristics of ellipsoidal coronavirus particles, be they oblate or prolate, arrived at from general rigid bead-rod theory for $N_c = 256$ and $N_p = 74$. Figure 7 maps columns 2 and 3 of Table III onto the $I_3$ versus $I_1$ plane, showing the balance of moments of each pleomorphic coronavirus bead-rod model. The oblate ones lie above the diagonal, and the prolate, below. The spherically symmetric coronavirus lies on the diagonal, near the origin. Figure 8 shows that pleomorphism, be it oblate or prolate, causes the real part of the complex viscosity, $\eta'(\omega)$, to descend with frequency. This is not seen for the spherical capsid (black horizontal line). In other words, pleomorphism introduces non-Newtonian behavior. Further, when compared with Fig. 10 of Ref. 12, Fig. 9 shows that pleomorphism, be it oblate or prolate, provokes an imaginary part of the complex viscosity, $\eta''(\omega)$. In other words, pleomorphism introduces elasticity into coronavirus suspensions. From Fig. 8, we also learn that when prolate coronavirus is compared with its oblate counterpart (where one ellipticity is the reciprocal of the other), the prolate decreases $\eta'(\omega)$ more than the oblate. Further, from Fig. 9, we learn that, in the same comparison, the oblate increases $\eta''(\omega)$ less than the prolate. Finally, Fig. 8 shows the order of descent for $\eta'(\omega)$ to be

$\frac{c}{a} = 1, \frac{4}{5}, \frac{5}{4}, \frac{2}{3}, \frac{3}{2}, \qquad (30)$

and Fig. 9 shows the order of ascent for $\eta''(\omega)$ to be

$\frac{c}{a} = \frac{3}{2}, \frac{2}{3}, \frac{5}{4}, \frac{4}{5}, 1, \qquad (31)$

which is the reverse of Eq. (30).

TABLE III.

$c/a$ | $I_1/mL^2$ | $I_2/mL^2$ | $I_3/mL^2$ | $a$ | $b$ | $\nu$ | $2b/a\nu$ | $(\eta_0 - \eta_s)/nkT\lambda$ | $\lambda/\lambda_0$ | $\lambda_0 D_r$ | $\Psi_{1,0}/\lambda(\eta_0 - \eta_s)$
2/3 | 1.06×10^6 | 1.06×10^6 | 1.46×10^6 | 5.66×10^5 | 8.70×10^-2 | 5.66×10^-6 | 5.43×10^-2 | 1.69 | 2.12×10^6 | 7.86×10^-8 | 2.10×10^-1
4/5 | 0.86×10^6 | 0.86×10^6 | 1.03×10^6 | 4.45×10^5 | 2.79×10^-2 | 7.10×10^-6 | 1.77×10^-2 | 1.61 | 1.69×10^6 | 9.86×10^-8 | 6.94×10^-2
3/2 | 4.98×10^5 | 4.98×10^5 | 3.15×10^5 | 2.05×10^5 | 8.11×10^-2 | 1.20×10^-5 | 6.56×10^-2 | 1.32 | 9.97×10^5 | 1.67×10^-7 | 2.46×10^-1
5/4 | 5.58×10^5 | 5.58×10^5 | 4.41×10^5 | 2.55×10^5 | 2.67×10^-2 | 1.07×10^-5 | 1.95×10^-2 | 1.39 | 1.12×10^6 | 1.49×10^-7 | 7.67×10^-2
1 | 2.48×10^2 | 2.48×10^2 | 2.48×10^2 | 1.24×10^2 | 1.19×10^-8 | 2.24×10^-2 | 7.96×10^-9 | 1.5 | 4.96×10^2 | 3.36×10^-4 | 1.60×10^-8

Figure 11 shows that pleomorphism, be it oblate or prolate, causes the dimensionless rotational diffusivity to decrease. Further, from Fig. 11, we learn that when prolate coronavirus is compared with its oblate counterpart (where one ellipticity is the reciprocal of the other), the oblate decreases the dimensionless rotational diffusivity more than the prolate. Figure 10 recalls the canonical dimensionless rotational diffusivity behavior of spherical coronavirus particles (Fig. 12 of Ref. 12 and Fig. 5 of Ref. 13). Comparing Fig. 10 with Fig. 11, we discover that with the reported pleomorphisms [Eq.
(15)], $\lambda_0 D_r$ lands about three orders of magnitude below the canonical rotational diffusivity of spherical coronavirus. Finally, mindful of Fig. 11, Fig. 10 shows the order of descent for $\lambda_0 D_r$ to be

$\frac{c}{a} = 1, \frac{3}{2}, \frac{5}{4}, \frac{4}{5}, \frac{2}{3}, \qquad (32)$

which differs from both Eqs. (30) and (31), and is not monotonic. From the zero-shear viscosity column of Table III and Fig. 12, we find that (i) oblate coronavirus pleomorphism increases the dimensionless zero-shear viscosity, $\eta_0$, and (ii) prolate coronavirus pleomorphism decreases it. We also find, from the $\Psi_{1,0}$ column of Table III and Fig. 12, that coronavirus pleomorphism, be it oblate or prolate, increases the zero-shear first normal stress coefficient, $\Psi_{1,0}$. Through the lens of general rigid bead-rod theory, we have explored the role of coronavirus cross-sectional ellipticity in its rotational diffusivity, the transport property around which its cell attachment revolves. We find that coronavirus pleomorphism, be it oblate or prolate, injects into the complex viscosity (i) non-Newtonian behavior and (ii) elasticity (Figs. 8 and 9, respectively). We also learn that when prolate coronavirus is compared with its oblate counterpart, (i) the prolate decreases $\eta'(\omega)$ more than the oblate, and (ii) the oblate increases $\eta''(\omega)$ less than the prolate. We find that (i) oblate coronavirus pleomorphism increases the dimensionless zero-shear viscosity, $\eta_0$, and (ii) prolate coronavirus pleomorphism decreases it (zero-shear viscosity column of Table III). We also find that coronavirus pleomorphism, be it oblate or prolate, increases the zero-shear first normal stress coefficient, $\Psi_{1,0}$ ($\Psi_{1,0}$ column of Table III). We further find that rotational diffusivity decreases with coronavirus pleomorphism, be it oblate or prolate, and specifically that this descent is monotonic with the ratio of major to minor axis lengths, $c/a$ (see Sec. IV A).
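One arithmetic cross-check on Table III: its tabulated zero-shear column, $(\eta_0 - \eta_s)/nkT\lambda$, should equal $a\nu/2 + b$ row by row. The short sketch below transcribes the four ellipsoidal rows of the table; the residual differences are at the level of the table's rounded significant figures.

```python
# (c/a, a, b, nu, tabulated (eta_0 - eta_s)/(n k T lambda)) from Table III
rows = [
    ("2/3", 5.66e5, 8.70e-2, 5.66e-6, 1.69),
    ("4/5", 4.45e5, 2.79e-2, 7.10e-6, 1.61),
    ("3/2", 2.05e5, 8.11e-2, 1.20e-5, 1.32),
    ("5/4", 2.55e5, 2.67e-2, 1.07e-5, 1.39),
]

# zero-shear viscosity predicted from the coefficients a, b, and nu
predicted = {label: a * nu / 2.0 + b for label, a, b, nu, _ in rows}
```

For the $c/a = 2/3$ row, for example, $a\nu/2 + b = 1.689$, which rounds to the tabulated 1.69.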
In this paper, we explored coronavirus pleomorphism using single beads to approximate the hydrodynamic resistance contributed by its spikes. However, from previous work, we know that the bulbous, triangular, equidimensional shape of the coronavirus spikes reduces rotational diffusivity (see Figs. 2, 4, and 5 of Ref. 13). We leave the important combination of coronavirus pleomorphism with spike triangularity for another day. Our previous work on the rotational diffusivity of coronavirus suspensions excluded interferences of Stokes flow velocity fields between nearby spikes.^{12,13} We did so because no theory for incorporating hydrodynamic interactions into general rigid bead-rod theory had been advanced. However, such a method for incorporating hydrodynamic interactions analytically has now been developed (Sec. III of Ref. 27) and used (Sec. V of Refs. 27 and 28). For the spherical capsid, this problem has also recently been attacked by means of molecular dynamics simulations.^{29} We leave the exploration of how spike hydrodynamic interactions affect coronavirus rotational diffusivity for future work. General rigid bead-rod theory can only be applied to suspensions for which the suspended particle structure is known. Thus, general rigid bead-rod theory cannot be more accurate than our knowledge of this structure. Of coronaviruses, Bárcena et al. wisely observed: "As a result of their pleomorphic nature, our structural insight into the coronavirion is still rudimentary, and it is based mainly on 2D electron microscopy." Table X of Ref. 12 summarizes what we know about the average values of the physical characteristics of coronavirus particles, and specifically, how we arrived at the value $N_p = 74$ relied upon in this study. From this study, we can see that a distribution of particle ellipticities will produce a corresponding distribution of particle rotational diffusivities. The likelihood of the virus attaching, we expect, will be determined by this distribution.
To handle such mixtures of different species, we can rewrite Eqs. (9) and (10) for a dispersed system (see Problem 14C.2 of Ref. 9; see also Sec. 26 of Ref. 30). We leave this for another day. Though coronavirus pleomorphism is well known, little is understood about its causes. Peplomer repulsions, for instance, might themselves deform the capsid. So might flow. Coronavirus pleomorphism may also reflect different asymmetrical arrangements of its capsid contents.^{31} Our work is silent on these important considerations. Pleomorphism causes the peplomer spacing to differ over the oblate or prolate coronavirus surfaces. Inasmuch as attachment requires the alignment of an adjacent peplomer pair with its nominally rectangular dimeric receptor, this pleomorphic peplomer spacing will complicate Eq. (2) of Ref. 12 for the attachment probability. In other words, the probability of finding a peplomer both (i) aligned with said receptor and (ii) matching the dimeric spacing of said receptor differs from place to place over the surface of a pleomorphic coronavirus. The uninitiated may wonder why our bead-rod model employs an empty capsid. After all, the capsid contents are what cause disease. However, only the parts of the suspended particle resisted hydrodynamically by the surrounding Newtonian fluid can contribute to the resisted rotation, and thus to the complex viscosity of the suspension. Thus, the rotational diffusivity of an empty capsid does not differ from that of a packed one. In general rigid bead-rod theory, only beads dragged through solvent can contribute to the rheology. Pieces of the macromolecule that are shielded from the surrounding solvent, such as capsid contents, are not to be counted in the general rigid bead-rod model. In this paper, we considered only axisymmetric pleomorphism, both oblate and prolate. We know of no characterization, microscopic or otherwise, detailed enough to distinguish oblate from prolate, or for that matter, asymmetric from axisymmetric.
Moreover, we still know of no rheological characterization of the coronavirus suspension. We consider these experimental measurements to be crucial next steps for those equipped to handle live coronavirus. Our general rigid bead-rod theory relies entirely on orientation to explain the polymer contribution to the viscosity and elasticity of polymeric liquids. Our theory is not to be confused with the competing theory for suspensions of ellipsoids of revolution, which yields, for the real and minus the imaginary parts of the complex viscosity [after Eq. (11) of Ref. 34],

$\frac{\eta' - \eta_s}{\eta_0 - \eta_s} = \frac{\frac{5}{2} + \frac{26}{147}\varepsilon^2 + \frac{3}{5}\frac{\varepsilon^2}{1 + \lambda^2\omega^2}}{\frac{5}{2} + \frac{571}{735}\varepsilon^2}, \qquad (33)$

$\frac{\eta''}{\eta_0 - \eta_s} = \frac{\frac{3}{5}\varepsilon^2\lambda\omega}{\left(1 + \lambda^2\omega^2\right)\left(\frac{5}{2} + \frac{571}{735}\varepsilon^2\right)}, \qquad (34)$

$\eta_0 - \eta_s = \eta_s\Phi\left(\frac{5}{2} + \frac{571}{735}\varepsilon^2\right), \qquad (35)$

where $\varepsilon \equiv c/a - 1$ and $c/a$ is the capsid aspect ratio. Equations (33)–(35) are thus for suspensions of otherwise featureless (spikeless) ellipsoids, be they prolate or oblate, arrived at elegantly through the competing ellipsoid suspension orientation theory. Unlike general rigid bead-rod theory, however, the ellipsoid suspension orientation theory is silent on the relation between $\lambda$ and the shapes and sizes of the suspended objects. We have restricted this work to small amplitude, namely, to where Eq. (7) obtains. For large-amplitude oscillatory shear flow, where Eq. (7) does not obtain, we bridge to the corotational Jeffreys fluid, following the method of Sec. X of Ref. 11. We leave this intriguing problem to future work. Whereas much prior work on fluid physics related to the virus has attacked transmission,^{32,33} this paper focuses on the ab initio calculation of coronavirus transport properties. Specifically, we have determined the rotational diffusivity, the property governing particle alignment for cell attachment (see Sec. I of Ref. 12). Although our work is mainly curiosity driven, it may deepen our understanding of drug, vaccine, and cellular infection mechanisms.
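As a numerical illustration of the competing ellipsoid suspension theory (Ref. 34), the sketch below evaluates the real and minus-imaginary viscosity ratios with the coefficients 26/147, 3/5, and 571/735 that appear in the expressions above, taking $\varepsilon \equiv c/a - 1$. A sphere ($\varepsilon = 0$) stays Newtonian at all frequencies, while any ellipticity produces the weak descent and elasticity discussed in the text.

```python
def ellipsoid_suspension(De, eps):
    """Ellipsoid-suspension complex viscosity ratios versus Deborah number De;
    eps = c/a - 1 is the (small) ellipticity."""
    denom = 2.5 + (571.0 / 735.0) * eps**2
    real = (2.5 + (26.0 / 147.0) * eps**2 + 0.6 * eps**2 / (1.0 + De**2)) / denom
    imag = 0.6 * eps**2 * De / ((1.0 + De**2) * denom)
    return real, imag  # (eta'-eta_s)/(eta_0-eta_s) and eta''/(eta_0-eta_s)
```

Note that 26/147 + 3/5 = 571/735, which is why the real part is normalized to exactly 1 in the zero-shear limit.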
One might prefer to add other comparisons with experiment, and specifically with relevant complex viscosity vs coronavirus observations. However, we know of no experimental measurements on coronavirus suspensions. In other words, to our knowledge, the transport properties explored in this paper have yet to be explored in the laboratory. Under the microscope, we see agglomeration of coronavirus particles, mechanically interlocked by interdigitation of the bulbous spikes [see Fig. 1(d) of Ref. 35]. The charge repulsion of the interdigitating spikes will, of course, rearrange the spikes. By rearrange, we mean spreading near the interdigitation, and crowding elsewhere. Figure 13 (Multimedia view) shows our general rigid bead-rod model of an interdigitated cluster of coronaviruses. We leave the energy minimization for interdigitated clusters, and the calculation of the coronavirus cluster rotational diffusivity for another day. This research was undertaken, in part, thanks to support from the Canada Research Chairs program of the Government of Canada for the Natural Sciences and Engineering Research Council of Canada (NSERC) Tier 1 Canada Research Chair in Rheology. This research was also undertaken, in part, thanks to support from the Discovery Grant program of the Natural Sciences and Engineering Research Council of Canada (NSERC) (A. J. Giacomin), Vanier Canada Graduate Scholarship (M. A. Kanso), and the Mitacs Research Training Award (A. J. Giacomin and M. A. Kanso). A. J. Giacomin is indebted to the Faculty of Applied Science and Engineering of Queen's University at Kingston for its support through a Research Initiation Grant (RIG). V. Chaurasia and E. Fried gratefully acknowledge support from the Okinawa Institute of Science and Technology Graduate University with subsidy funding from the Cabinet Office, Government of Japan. Conflict of Interest The authors have no conflicts to disclose. 
Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

References

1. B. W., B. D., J. D., R. A., and M. J., "Supramolecular architecture of severe acute respiratory syndrome coronavirus revealed by electron cryomicroscopy," J. Virol.
2. N. Zhu et al., "A novel coronavirus from patients with pneumonia in China, 2019," N. Engl. J. Med.
3. M. A., "Polymeric liquid behavior in oscillatory shear flow," M.S. thesis (Queen's University, Kingston, Canada).
4. M. A. and A. J., "van Gurp-Palmen relations for long-chain branching from general rigid bead-rod theory," Phys. Fluids; Erratum: in Eq. (44), "$I_3 > I_1$" should be "$I_3 < I_1$"; Editor's pick.
5. M. A., A. J., et al., "Large-amplitude oscillatory shear flow loops for long-chain branching from general rigid bead-rod theory," Phys. Fluids.
6. M. A., A. J., and J. H., "Diblock copolymer architecture and complex viscosity," Int. J. Mod. Phys. B.
7. M. A., S. J., and A. J., "Complex viscosity of poly[n]catenanes and olympiadanes," Phys. Fluids.
8. "Kinetic theory and rheology of bead-rod models for macromolecular solutions. II. Linear unsteady flow properties," J. Chem. Phys.; Erratum: in Eq. (2), "$1/2$" should be "$-1/2$."
9. R. B. Bird, C. F., R. C., et al., Dynamics of Polymeric Liquids, 2nd ed. (John Wiley & Sons, Inc., New York), Vol. 2; Errata: on p. 409 of the first printing, the $(n+m)!$ in the denominator should be $(n-m)!$; in Table 16.4-1, the entry under "length of rod" should be "bead center to center length of a rigid dumbbell."
10. R. B. Bird, R. C., and C. F., Dynamics of Polymeric Liquids, 1st ed. (John Wiley and Sons, Inc., New York), Vol. 2.
11. M. A., A. J., and J. H., "Macromolecular architecture and complex viscosity," Phys. Fluids; Editor's pick.
12. M. A., J. H., J. A., and A. J., "Coronavirus rotational diffusivity," Phys. Fluids; feature article and cover article.
13. M. A. and A. J., "Peplomer bulb shape and coronavirus rotational diffusivity," Phys. Fluids.
14. A. J. and M. A., "General rigid bead-rod macromolecular theory," in Recent Advances in Rheology: Theory, Biorheology, Suspension and Interfacial Rheology, edited by De Kee (AIP Publishing), Chap. II, pp. 2-1–2-32.
15. W. E. and J. P., "Hydrodynamic interaction effects in rigid dumbbell suspensions. II. Computations for steady shear flow," Trans. Soc. Rheol.
16. J. H., L. M., and A. J., "Exact coefficients for rigid dumbbell suspensions for steady shear flow material function expansions," Phys. Fluids; Erratum: above Eq. (83), "one other" should be "one other use."
17. R. B. and A. J., "Who conceived the complex viscosity?," Rheol. Acta.
18. A. J. and R. B., "Erratum: Official nomenclature of The Society of Rheology," J. Rheol.
19. J. D. Ferry, Viscoelastic Properties of Polymers, 3rd ed. (New York).
20. R. B. Bird, R. C., et al., Dynamics of Polymeric Liquids, 1st ed. (New York), Vol. 1.
21. V. Chaurasia et al., "Interacting charged elastic loops on a sphere," J. Mech. Phys. Solids.
22. V. Chaurasia, "Variational formulation of charged curves confined to a sphere," Ph.D. thesis (University of Houston).
23. M. L., "Projected Hessian updating algorithms for nonlinearly constrained optimization," SIAM J. Numer. Anal.
24. J. J. Thomson, "XXIV. On the structure of the atom: An investigation of the stability and periods of oscillation of a number of corpuscles arranged at equal intervals around the circumference of a circle; with application of the results to the theory of atomic structure," London, Edinburgh, Dublin Philos. Mag. J. Sci.
25. D. J. Wales, "Structure and dynamics of spherical crystals characterized for the Thomson problem," Phys. Rev. B.
26. D. J. Wales and E. L., "Defect motifs for spherical topologies," Phys. Rev. B.
27. M. C., M. A., and A. J., "General rigid bead-rod theory with hydrodynamic interaction for polymer viscoelasticity," Phys. Fluids.
28. M. A., M. C., S. J., and A. J., "Hydrodynamic interaction and complex viscosity of multi-bead rods," Phys. Fluids; Editor's pick.
29. F. B. et al., "Hydrodynamics of spike proteins dictate a transport-affinity competition for SARS-CoV-2 and other enveloped viruses," bioRxiv.
30. R. B., H. R., and D. C., "Kinetic theory and rheology of dumbbell suspensions with Brownian motion," Fortschr. Hochpolym.-Forsch.
31. P. J. de Pablo and I. A. T., "Atomic force microscopy of viruses," in Physical Virology, edited by U. F., Chap. VIII.
32. "On coughing and airborne droplet transmission to humans," Phys. Fluids.
33. "The motion of respiratory droplets produced by coughing," Phys. Fluids.
34. L. G. Leal and E. J. Hinch, "The rheology of a suspension of nearly spherical particles subject to Brownian rotations," J. Fluid Mech.
35. B. W., G. Kiss, A. H., M. F., J. P., S. G., and S. G., "A structural analysis of M protein in coronavirus assembly and morphology," J. Struct. Biol.

© 2022 Author(s). Published under an exclusive license by AIP Publishing.
Fibonacci series program in Java using iteration

In this tutorial, we are going to learn how to print the Fibonacci series in Java using an iterative method. The number of elements printed depends upon the user's input: the program prints the first n elements of the series, where n is given by the user.

How does this Java program work?

The program takes an integer as input: the number of terms of the series to print. For example, if the input is 5, the output is 0, 1, 1, 2, 3.

Program 1: Fibonacci series program in Java using an iterative method

import java.util.*;

class Main {
    public static void main(String... a) {
        int first = 0, second = 1, result, i;
        Scanner sc = new Scanner(System.in);
        System.out.print("Enter number- ");
        int n = sc.nextInt();
        System.out.println("fibonacci series is: ");
        for (i = 0; i < n; i++) {
            if (i <= 1) {
                // the first two terms of the series are 0 and 1
                result = i;
            } else {
                // every later term is the sum of the two preceding terms
                result = first + second;
                first = second;
                second = result;
            }
            System.out.print(result + " ");
        }
    }
}

Output:

Enter number- 5
fibonacci series is:
0 1 1 2 3

In this tutorial, we have explored writing a Fibonacci series program using an iterative method in Java. The Fibonacci series is a sequence of numbers where each number is the sum of the two preceding ones, usually starting with 0 and 1.

Key Points:

1. User Input:
□ The program lets the user input an integer n to choose how many elements of the Fibonacci series to print.

2. Iterative Method:
□ The program uses an iterative approach to generate the Fibonacci series.
□ It initializes the first two elements (first and second) to 0 and 1, respectively.
□ A for loop then calculates and prints each subsequent element of the series.

3.
Logic in Loop:
□ Within the loop, the logic distinguishes between the first two elements and calculates each subsequent element as the sum of the previous two.
□ The loop runs for n iterations, printing each element of the Fibonacci series.

4. Output Illustration:
□ The output, illustrated with an example where n is set to 5, demonstrates the generated Fibonacci series: 0, 1, 1, 2, 3.

Happy coding! 🚀
{"url":"https://quescol.com/interview-preparation/java-fibonacci-series-using-iterative-method","timestamp":"2024-11-10T17:52:02Z","content_type":"text/html","content_length":"87582","record_id":"<urn:uuid:745566c1-86d5-4016-82f0-676ef3694014>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00840.warc.gz"}
Alladi Ramakrishnan Hall

Chern-Simons matter theories, Bose-Fermi duality and anyonic statistics

Naveen S. Prabhakar (ICTS, Bangalore)

Physics in two spatial dimensions is strange due to the possibility of the existence of anyons -- particle-like excitations which possess arbitrary real spin, and arbitrary exchange statistics. In this talk, we describe the coupling of scalar and fermionic degrees of freedom to non-abelian Chern-Simons gauge theories and the signatures of anyonic physics in the many quantities -- partition functions, S-matrices, occupation number distributions and so on -- that have been exactly and explicitly computed in the 't Hooft large N limit (N being the rank of the Chern-Simons gauge group). Perhaps the most physically intriguing property of these theories is Bose-Fermi duality, which states that Chern-Simons coupled bosonic and fermionic matter theories are equivalent to each other. Another (related) stark consequence of the Chern-Simons coupling is an 'exclusion principle' for the bosonic matter theory -- analogous to the Pauli exclusion principle -- that was observed in the occupation number distribution. Though the observation was initially made in the large N limit of these theories, we offer a deeper structural explanation that holds for all finite values of N, albeit in a convenient non-relativistic limit of the theories.
{"url":"https://www.imsc.res.in/cgi-bin/CalciumShyam/Calcium40.pl?CalendarName=InstituteEvents&ID=4936&Source=AlladiRamakrishnanHall&Date=2023%2F12%2F5&DoneURL=Calcium40.pl%3FCalendarName%3DInstituteEvents%26Op%3DShowIt%26Amount%3DWeek%26NavType%3DNeither%26Type%3DBlock%26Date%3D2023%252F12%252F5&Op=PopupWindow&Amount=Week&NavType=Neither&Type=Block","timestamp":"2024-11-05T16:23:21Z","content_type":"application/xhtml+xml","content_length":"4236","record_id":"<urn:uuid:5af07f89-b92b-4fef-94a0-c574059f4a61>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00587.warc.gz"}
Glossary page D

Data
A term with several meanings. Data can mean a collection of facts, numbers, or information, the individual values of which are often the results of an experiment or observations. If the data are in the form of a table with the columns consisting of variables and the rows consisting of values of each variable for different individuals or values of each variable at different times, then data has the same meaning as data set. Data can also mean the values of one or more variables from a data set. Data can also mean a variable or some variables from a data set.
Properly, data is the plural of datum, where a datum is any result. In everyday usage, the term data is often used in the singular.
See: data set
Curriculum achievement objectives references
Statistical investigation: All levels
Statistical literacy: Levels 2, (3), (4), 5, (6), (7), (8)

Data display
A representation, usually as a table or graph, used to explore, summarise, and communicate features of data.
Data displays listed in this glossary are: bar graph, box and whisker plot, dot plot, frequency table, histogram, line graph, one-way table, picture graph, pie graph, scatter plot, stem-and-leaf plot, strip graph, tally chart, two-way table.
Curriculum achievement objectives references
Statistical investigation: Levels 1, 2, 3, 4, 5, 6, (7), (8)
Statistical literacy: Levels 2, 3, (4), (5), 6

Data set
A table of numbers, words or symbols, the values of which are often the results of an experiment or observations. Data sets almost always have several variables. Usually the columns of the table consist of variables and the rows consist of values of each variable for individuals or values of each variable at different times.
Example 1 (Values for individuals)
The table below shows part of a data set resulting from answers to an online questionnaire from 727 students enrolled in an introductory statistics course at the University of Auckland.
Online questionnaire answers data set.
Individual | Gender | Birth month | Birth year | Ethnicity | Number of years living in NZ | Number of countries visited | Actual weight (kg) | Ideal weight (kg)
1 | female | Jan | 1984 | Other European | 2 | 3 | 55 | 50
2 | female | Nov | 1990 | Chinese | 15 | 11 | 53 | 49
3 | male | Jan | 1990 | NZ European | 18 | 2 | 68 | 60
... | ... | ... | ... | ... | ... | ... | ... | ...

Example 2 (Values at different times)
The table below shows part of a data set resulting from observations at a weather station in Rolleston, Canterbury, for each day in November 2008.
Time series weather observations data set.

Day | Max temp (°C) | Rainfall (mm) | Max pressure (hPa) | Max wind gust (km/h)
1 | 26.8 | 0.5 | 1015.1 | 70.3
2 | 19.7 | 0.0 | 1015.6 | 38.9
3 | 19.5 | 0.0 | 1011.1 | 29.6
... | ... | ... | ... | ...

Alternative: dataset
Curriculum achievement objectives references
Statistical investigation: Levels 3, (4), 5, (6), 7, 8

Dependent variable
A common alternative term for the response variable in bivariate data.
Alternatives: outcome variable, output variable, response variable
Curriculum achievement objectives reference
Statistical investigation: (Level 8)

Descriptive statistics
Numbers calculated from a data set to summarise the data set and to aid comparisons within and among variables in the data set.
Alternatives: numerical summary, summary statistics
Curriculum achievement objectives references
Statistical investigation: Levels (5), (6), (7), (8)

Desk review
A review of a questionnaire for the purpose of finding likely problems with it before it is used in a survey. Ideally, a desk review should be carried out by at least two people, including someone who did not design the questions. It should be carried out before a pilot survey and done at several stages throughout a survey, especially after any changes have been made.
A desk review should check the questionnaire:
• is consistent with the survey objectives
• uses consistent terms and language
• uses language appropriate for the intended respondents
• uses questions that are reasonably simple, unambiguous and unbiased
• is designed to be easy to follow.
Alternative: desk evaluation
Curriculum achievement objectives reference
Statistical investigation: (Level 7)

Deterministic model
A model that will always produce the same result for a given set of input values. A deterministic model does not include elements of randomness.
A model, being an idealised description of a situation, is developed by making some assumptions about that situation. A deterministic model will often be written in the form of a mathematical function.
Example
A model for calculating the amount of money in a term deposit account after a given time will always produce the same answer for a given initial deposit, interest rate and method of calculating the interest. If the initial deposit is P dollars and the interest rate is r% per annum but the interest is calculated daily, then the amount in the account, in dollars, after n days can be calculated by A = P(1 + r/36500)^n. For any given values of P, r and n, the result of the calculation is always the same.
See: probabilistic model
Curriculum achievement objectives reference
Probability: (Level 8)

Discrete distribution
The variation in the values of a variable that can only take on distinct values, usually whole numbers. A discrete distribution could be an experimental distribution, a sample distribution, a population distribution, or a theoretical probability distribution.
Example 1
At Level 8, the binomial distribution is an example of a discrete theoretical probability distribution.
Example 2
Consider a random sample of households in New Zealand. The distribution of household sizes from this sample is an example of a discrete sample distribution.
See: distribution
Curriculum achievement objectives references
Statistical investigation: Levels (5), (6), (7), (8)
Probability: Levels 5, 6, 7, (8)

Discrete random variable
A random variable that can take only distinct values, usually whole numbers.
Example
The number of left-handed people in a random selection of 10 individuals from a population is a discrete random variable. The distinct values of the random variable are 0, 1, 2, … , 10.
Curriculum achievement objectives reference
Probability: Level 8

Discrete situations
Situations involving elements of chance in which the outcomes can take only distinct values. If the outcomes are categories, then this is a discrete situation. If the outcomes are numerical, then the distinct values are often whole numbers.
Curriculum achievement objectives reference
Probability: Level 6

Disjoint events
Alternative: mutually exclusive events
Curriculum achievement objectives reference
Probability: (Level 8)

Distribution
The variation in the values of a variable. The collection of values forms an entity in itself: a distribution. This entity (or distribution) has its own features or properties.
The type of distribution can be described in several different ways, including:
• the type of variable (for example, continuous distribution, discrete distribution),
• the way the values were obtained (for example, experimental distribution, population distribution, sample distribution), or
• the way the occurrence of the values is summarised (for example, frequency distribution, probability distribution).
Other types of distributions described in this glossary are bootstrap distribution, re-randomisation distribution, sampling distribution and theoretical probability distribution.
See: bootstrap distribution, continuous distribution, discrete distribution, experimental distribution, features (of distributions), frequency distribution, population distribution, probability distribution, re-randomisation distribution, sample distribution, sampling distribution, theoretical probability distribution
Curriculum achievement objectives references
Statistical investigation: Levels 4, 5, 6, (7), (8)
Probability: Levels 4, 5, 6, 7, 8

Dot plot
A graph for displaying the distribution of a numerical variable in which each dot represents a value of the variable. For a whole-number variable, if a value occurs more than once, the dots are placed one above the other so that the height of the column of dots represents the frequency for that value.
Dot plots are particularly useful for comparing the distribution of a numerical variable for two or more categories of a category variable; this is shown by displaying side-by-side dot plots on the same scale. Dot plots are particularly useful when the number of values to be plotted is relatively small.
Dot plots are usually drawn horizontally, but may be drawn vertically.
Example
The actual weights of random samples of 50 male and 50 female students enrolled in an introductory statistics course at the University of Auckland are displayed on the dot plot below.
Alternatives: dot graph, dotplot
Curriculum achievement objectives references
Statistical investigation: Levels (3), (4), (5), (6), (7), (8)

Last updated October 9, 2013
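The deterministic term-deposit model in the entry above can be illustrated in code: the same inputs always produce the same output. This is a minimal sketch (the class and method names are our own), assuming the interest is compounded daily at r% per annum, so the amount after n days is P(1 + r/36500)^n:

```java
class TermDeposit {
    // P: initial deposit in dollars, r: interest rate in % per annum,
    // n: number of days; interest is calculated (compounded) daily.
    static double amountAfterDays(double P, double r, int n) {
        return P * Math.pow(1 + r / 36500.0, n);
    }

    public static void main(String[] args) {
        // $1000 at 5% p.a. for one year; repeated calls give the identical
        // result because the model contains no element of randomness.
        System.out.printf("%.2f%n", amountAfterDays(1000, 5, 365));
    }
}
```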
{"url":"https://seniorsecondary.tki.org.nz/Mathematics-and-statistics/Glossary/Glossary-page-D","timestamp":"2024-11-13T20:58:56Z","content_type":"application/xhtml+xml","content_length":"246688","record_id":"<urn:uuid:6184a275-920e-416c-b944-627abea65d0b>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00408.warc.gz"}
about aaaz.com

Phishing is the use of email, websites, or other means to illegally collect, store, and distribute personal information. If you think you have been phished, be sure to observe the following don'ts:
1. Do not follow links;
2. Do not download any files from the site;
3. Do not open attached files;
4. Do not install any software on your device;
5. Do not share any personal details or information.
Report phishing to your domain name registrar.
{"url":"https://report-abuse.com/aaaz","timestamp":"2024-11-12T02:36:47Z","content_type":"text/html","content_length":"711848","record_id":"<urn:uuid:e9680ef8-b2a5-4f00-84ac-faa84ebade53>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00096.warc.gz"}
Jean-Pierre Bourguignon - Wikiwand

Jean-Pierre Bourguignon (born 21 July 1947) is a French mathematician working in the field of differential geometry.

Born in Lyon, France, he studied at the École Polytechnique in Palaiseau, graduating in 1969. For his graduate studies he went to Paris Diderot University, where he obtained his PhD in 1974 under the direction of Marcel Berger.^[1]

He was president of the Société Mathématique de France from 1990 to 1992. From 1995 to 1998, he was president of the European Mathematical Society. He was director of the Institut des Hautes Études Scientifiques near Paris from 1994 to 2013. Between 1 January 2014 and 31 December 2019 he was the President of the European Research Council.

Bourguignon received the Prix Paul Langevin in 1987 and the Prix du Rayonnement Français in Mathematical Sciences and Physics from the Académie des Sciences de Paris in 1997. He is a foreign member of the Royal Spanish Academy of Sciences. In 2005, he was elected an honorary member of the London Mathematical Society, and he has been the secretary of the mathematics section of the Academia Europaea.^[2]

• Bourguignon, Jean-Pierre (2007). Calcul variationnel (in French). Palaiseau: Éditions de l'École Polytechnique. 328 pages. ISBN 978-2-7302-1415-5. MR 2490159.
• with Oussama Hijazi, Jean-Louis Milhorat, Andrei Moroianu and Sergiu Moroianu: A Spinorial Approach to Riemannian and Conformal Geometry. European Mathematical Society. 2015. ISBN
• as editor with Rolf Jeltsch, Alberto Adrego Pinto, and Marcelo Viana: Dynamics, Games and Science: International Conference and Advanced School Planet Earth, DGS II, Portugal, August 28–September 6, 2013. Springer. 24 July 2015. ISBN 978-3-319-16118-1.
{"url":"https://www.wikiwand.com/en/articles/Jean-Pierre_Bourguignon","timestamp":"2024-11-14T15:19:24Z","content_type":"text/html","content_length":"218271","record_id":"<urn:uuid:4db86394-6f9d-402c-9dac-69a345944714>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00317.warc.gz"}
Enhanced Rupture Force in a Cut-Dispersed Double-Network Hydrogel

College of Physics, Taiyuan University of Technology, Taiyuan 030024, China
College of Materials Science & Engineering, Taiyuan University of Technology, Taiyuan 030024, China
Shanxi-Zheda Institute of Advanced Materials and Chemical Engineering, Taiyuan 030024, China
Institute for Chemical Reaction Design and Discovery, Hokkaido University, Sapporo 001-0021, Japan
Ministry of Education Key Laboratory of Macromolecular Synthesis and Functionalization, Department of Polymer Science and Engineering, Zhejiang University, Hangzhou 310027, China
Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Submission received: 8 January 2023 / Revised: 10 February 2023 / Accepted: 14 February 2023 / Published: 16 February 2023

The kirigami approach is an effective way to realize controllable deformation of intelligent materials by introducing cuts into bulk materials. Materials ranging from ordinary stiff materials such as glass, ceramics, and metals to soft materials, including ordinary hydrogels and elastomers, are all sensitive to the presence of cuts, which usually act as defects that deteriorate the mechanical properties. Herein, we study the influence of cuts on the mechanical properties by introducing "dispersed macro-scale cuts" into a model tough double-network (DN) hydrogel (named D-cut gel), which consists of a rigid and brittle first network and a ductile, stretchable second network. For comparison, DN gels with "continuous cuts" having the same number of interconnected cuts (named C-cut gel) were chosen. Fracture tests of D-cut gel and C-cut gel with different cut patterns were performed. The fracture observation revealed that crack blunting occurred at each cut tip, and a large wrinkle-like zone was formed in which the wrinkles were parallel to the propagation direction of the cut.
By utilizing homemade circular polarizing optical systems, we found that introducing dispersed cuts increases the rupture force by homogenizing the stress around the crack tip surrounding every cut, which reduces the stress concentration in any one cut. We believe this work reveals the fracture mechanism of tough soft materials with a kirigami cut structure, which should guide the design of advanced soft and tough materials along this line.

1. Introduction

Morphing materials, including elastomers and gels, notable for their softness, high reversible stretchability, and functionalities [ ], find a wide range of advanced technical applications, from biomedical adhesives [ ], scaffolds for cell culture [ ], and tissue engineering [ ], to soft robotics [ ] and stretchable electronics [ ]. Among these materials, hydrogels have received increasing attention owing to their similarity to soft bio-tissues and their multiresponsiveness to external stimuli. However, as a kind of morphing material, these bulk hydrogels show a relatively small amplitude of deformation due to their geometric continuity. The kirigami approach is an effective strategy that introduces cuts to disrupt this continuity and enhance the deformation ability [ ]. However, given the intrinsically brittle nature of conventional hydrogels, the fabrication of a gel with a kirigami structure that can sustain the force changes during deformation remains a challenge. Since the beginning of the 21st century, research on overcoming the intrinsic brittleness of hydrogels has progressed tremendously, and many great efforts have been made in fabricating strong and tough soft materials through chemical modification methods [ ] (e.g., introducing sacrificial bonds [ ]) or reinforcing fillers [ ].
Taking the double-network (DN) strategy as a typical example, DN hydrogels are strong and tough hydrogels consisting of two kinds of interpenetrating polymer networks with contrasting physical features: a stiff and brittle first network with dilute, densely crosslinked short chains, and a soft and ductile second network with concentrated, loosely crosslinked long chains [ ]. During deformation, the brittle first network breaks into clusters to dissipate a large amount of energy, while the stretchable second network keeps the integrity of the whole material without catastrophic failure [ ]. The resultant strong and tough DN hydrogels, despite containing high water contents (~90 wt%), exhibit relatively high stiffness (elastic modulus of 0.1–1.0 MPa), high mechanical strength (nominal tensile stress at break of 1–10 MPa, strain at break of 1000–2000%), and high toughness (fracture energy on the order of 10²–10³ J m⁻²) [ ]. These mechanical performances are greatly superior to those of the SN gels of their individual components and are even comparable to human load-bearing tissues and some tough industrial rubbers [ ]. The double-network strategy has recently been extended to multiple-network elastomers exhibiting extremely reinforced mechanical strength and toughness without sacrificing stretchability [ ].

On the basis of the aforementioned developments, tough hydrogels with kirigami structures have been created. For example, a shape memory organohydrogel sheet cut with complex configurations can be transformed between distinct configurations by multistep shape memorization complemented with an external force [ ]. Metal-coordinated tough hydrogels with complex geometries created by photolithographic polymerization can afford additional stretchability and better compliance to wrap on curved surfaces [ ]. In addition, tough composite gels incorporating a series of cutouts show a high degree of deformation freedom and a large deformation amplitude of the responsive gel strips [ ].
The above-mentioned tough hydrogels with kirigami structures show potential applications as flexible electronics, biomedical devices, and soft actuators. The cuts introduced into bulk gels generally act as defects, which strongly affect the physical and mechanical properties [ ]. However, very few studies have focused on the influence of the cuts introduced in the kirigami structure on the mechanical properties of tough gels.

In this work, the typical tough DN hydrogel is selected as a model system to study the effect of introduced cuts. The tough DN hydrogel is chosen for its unprecedented mechanical performance and easy synthesis among many hydrogel materials. It is known that DN hydrogels have rate-independent deformation behavior and negligible molecular interactions [ ]. More importantly, the near-tip yielding region of DN hydrogels, where the deformation is very large and energy dissipation significantly occurs, can be directly observed by a birefringence observation experimental setup [ ]. Dispersed cuts with a triangle shape were first introduced into the DN gels (named D-cut gels) to study the influence of cuts. For comparison, DN gels with continuous cuts (named C-cut gels) and with pure shear geometry were also prepared ( Figure 1 ). Combining the birefringence observation and the fracture tests, we found that the introduction of dispersed cuts can increase the rupture force by homogenizing the stress around the crack tip surrounding every cut, avoiding stress concentration in one certain cut. Around each cut, a large damage zone is formed to dissipate considerable energy and endow the material with toughness. We believe this work not only provides insights into the fracture of tough soft materials with a kirigami-cut structure but also guides the design of soft and tough materials with targeted mechanical properties through the introduction of kirigami cuts.

2. Results and Discussion

2.1.
Effect of Dispersed Cuts on the Fracture Behavior of DN Hydrogel Specimens

A series of DN hydrogel specimens with dispersed and continuous cuts were created by a laser cutter (named D-cut gels and C-cut gels, respectively). Although the surface of the cross-section was burnt due to the high intensity of the laser, the fracture curve of DN gels cut with the laser almost overlapped with that of gels cut with stainless steel blades, indicating that laser burns of the cuts have a negligible influence on the mechanical properties ( Figure S1 ). This is because DN gels can form a large damage zone around the cuts during deformation [ ], resulting in robust mechanical properties that are insensitive to the laser burns around the cuts. Figure 1 depicts the schematic illustration of the structural patterns containing an array of dispersed or continuous cuts. The length scale h[1] is defined as the horizontal size of the triangle-shaped cuts, while h[2] represents the horizontal spacing between adjacent triangle-shaped cuts. In this work, the size h[1] is kept constant at 1.5 mm for simplicity, and the spacing h[2] is tuned from 0.75 mm to 6.0 mm. According to the different spacing h[2], the D-cut gel specimens were prepared from structural patterns with dispersed cuts at different spacing ratios h[1]:h[2] (1:0.5, 1:1, 1:2, 1:3, and 1:4), together with their counterpart C-cut specimens with aligned continuous cuts, as shown in Figure 1 . Note that for comparison, the C-cut gel specimens were prepared to have the same number of aligned continuous cuts as the D-cut gel specimens, such that the overall lengths of the uncracked ligaments are kept the same between these specimens. The "continuous cuts" geometry in C-cut gels is more like a "pure shear" geometry with a long crack and a triangle-shaped crack front. We first test the tensile behavior of the original DN hydrogel.
As shown in Figure S2a , the original DN hydrogel (without cuts) shows the characteristic stress–stretch ratio curve with remarkable stress yielding (σ[y, tens] = 0.57 MPa), which corresponds to the onset of necking (a highly deformed region) in the tensile sample [ ]. As elucidated in a previous study, the brittle first network is considered to severely rupture into fragments, and the stretchable second network is highly deformed in the necking region [ ]. Such stress yielding occurs ahead of the crack tip, reducing the stress concentration at the crack tip, thereby blunting the crack tip and enhancing the fracture resistance of the material [ ].

We next investigated the fracture behavior of these patterned DN hydrogel specimens with dispersed and continuous cuts to demonstrate the effect of dispersed cuts ( Figure S3 ). The D-cut and C-cut gel specimens were loaded in the vertical direction using displacement control at a constant displacement rate of 50 mm/min (the original length between clamps was kept constant at around 10 mm, corresponding to a stretch rate of 0.08 s⁻¹). Compared with the samples without cuts shown in Figure S2 , the mechanical properties of the samples with cuts showed significant deterioration. Figure 2 a–e further show the force (F)–displacement (x) curves of D-cut and C-cut DN hydrogel specimens during loading under different structural patterns, containing dispersed and continuous cuts at the spacing ratios of 1:0.5, 1:1, 1:2, 1:3, and 1:4. Here, we need to emphasize that the force–displacement curves of the samples with C-cut and "pure shear" geometry almost overlap, indicating that the crack shape at the crack front has no obvious influence on the mechanical properties. Regardless of the spacing ratio h[1]:h[2], the force curves of the D-cut gel specimens exceed those of the C-cut specimens from the initial stage of loading.
Taking the spacing ratio h[1]:h[2] of 1:0.5 as an example: for the D-cut specimen, the force increases with increasing displacement between clamps until reaching a critical value (rupture force F[rupture] = 10.84 ± 0.11 N, occurring at x[rupture] = 9.83 ± 0.38 mm), at which point the specimen begins to rupture, as seen in the birefringence snapshots during loading shown in a later section. A drop in load occurs during specimen rupture, as some of the load-sustaining regions rupture and lose their load-bearing capability. By contrast, for the C-cut specimen, the force increases in a relatively slow manner with increasing displacement between clamps until reaching a critical value (rupture force F[rupture] = 9.97 ± 0.60 N, occurring at x[rupture] = 15.73 ± 2.54 mm), at which point the specimen begins to rupture. Figure 3 a summarizes all the representative force–displacement curves of DN hydrogel specimens with different structural patterns containing dispersed cuts and continuous cuts at the spacing ratios of 1:0.5, 1:1, 1:2, 1:3, and 1:4. For the D-cut specimens, increasing the horizontal spacing h[2] between adjacent triangle-shaped cuts increases both the critical rupture force and the critical displacement ( Figure 3 a). This is reasonable because the cross-sectional area of the uncracked ligament in the specimens that sustains the load increases with the spacing h[2] between adjacent cuts, giving rise to a larger load-bearing capability. We next analyzed the critical rupture force F[rupture] in Figure 3 b. It is found that regardless of the spacing ratio h[1]:h[2], the critical rupture forces of the D-cut specimens are higher than those of the C-cut specimens, and the deviation between the F[rupture] of the D-cut specimens and that of the C-cut specimens increases with h[2] ( Figure 3 b). Specifically, in the case of the spacing ratio of 1:0.5, the D-cut specimens show a critical rupture force of 10.84 ± 0.11 N, slightly higher than that of the C-cut specimens (F[rupture] = 9.97 ± 0.60 N).
Meanwhile, in the case of the spacing ratio of 1:4, the D-cut specimens demonstrate a critical rupture force of 28.69 ± 2.48 N, much higher than that of the C-cut specimens (F[rupture] = 21.79 ± 1.76 N). Note that at the same spacing ratio h[1]:h[2], the D-cut and C-cut specimens all have the same cross-sectional area of the uncracked ligament (the overall cross-sectional area of the whole specimen minus the area occupied by the cuts). Because the C-cut geometry is more like the conventional "pure shear" geometry with a precut, this result also indicates that the dispersed cuts can produce an enhanced rupture force compared with the pure shear specimen with a precut. The critical stretch ratio λ[rupture] at the rupture point is further analyzed in Figure 3 c. In the case of the spacing ratio of 1:0.5, the C-cut specimens show a much higher critical stretch ratio at the rupture point (λ[rupture] = 2.13 ± 0.11) than that of the D-cut specimens (λ[rupture] = 1.87 ± 0.04). With a further increase of the spacing until reaching the spacing ratio of 1:2, the D-cut specimens begin to exhibit λ[rupture] comparable with that of the C-cut specimens. In particular, at the spacing ratio of 1:4, the D-cut specimens exhibit λ[rupture] of 2.50 ± 0.08, while the C-cut specimens show λ[rupture] of 2.43 ± 0.13. We next compare the critical bulk stress at the rupture point (σ[r, bulk]) of the different specimens. We should note here that the critical bulk stress at the rupture point, σ[r, bulk], is defined as the rupture force divided by the cross-sectional area of the uncracked ligament (the overall cross-sectional area of the whole specimen minus the area occupied by the cuts). It is clearly shown in Figure 3 d that the σ[r, bulk] of all the samples lies in the narrow range of 0.40 MPa to 0.55 MPa, which is close to the yielding stress of the DN hydrogel (σ[y, tens] = 0.57 MPa). Additionally, it is found that regardless of the spacing ratio h[1]:h[2], the D-cut specimens show a higher σ[r, bulk] than the C-cut specimens.
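The force comparison above reduces to a quick calculation: dividing the mean rupture force of a D-cut specimen by that of its C-cut counterpart at the same spacing ratio gives the enhancement produced by dispersing the cuts (a minimal sketch; the class and method names are our own, and the inputs are the mean forces quoted above without their error bars):

```java
class RuptureEnhancement {
    // Enhancement ratio of the rupture force, F_D / F_C, at a given
    // spacing ratio h1:h2 (forces in newtons).
    static double enhancementRatio(double fDcut, double fCcut) {
        return fDcut / fCcut;
    }

    public static void main(String[] args) {
        // The two extreme spacing ratios reported above:
        System.out.printf("h1:h2 = 1:0.5 -> %.2f%n", enhancementRatio(10.84, 9.97));
        System.out.printf("h1:h2 = 1:4   -> %.2f%n", enhancementRatio(28.69, 21.79));
    }
}
```

The resulting ratios (~1.09 and ~1.32) agree, up to rounding of the reported means, with the 1.08–1.32 range quoted later in the text.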
To demonstrate the effect of dispersed cuts, we then plot the rupture force ratio (F[D]/F[C]) and the normalized rupture stretch ratio (λ[D]/λ[C]) as functions of the spacing ratio h[1]:h[2] in Figure 4 . Here, F[D] and F[C] represent the rupture forces in fracture specimens containing dispersed cuts and continuous cuts at the same spacing ratio h[1]:h[2], respectively. λ[D] and λ[C] denote the rupture stretch ratios in fracture experiments containing dispersed cuts and continuous cuts at the same spacing ratio h[1]:h[2], respectively. The rupture force ratio can also be seen as the enhancement ratio of the rupture force of the fracture specimens due to the dispersed cuts. As shown in Figure 4 , the enhancement ratio increases from 1.08 to 1.32 as the spacing ratio changes from 1:0.5 to 1:4. In the meantime, the normalized rupture stretch ratio increases from 0.8 to 1.03 as the spacing ratio changes from 1:0.5 to 1:4. This suggests that with increasing spacing h[2], the dispersed cuts can enhance the rupture force without sacrificing the stretchability of the bulk materials, thereby increasing the fracture resistance.

2.2. Birefringence Observation of the Rupture of Specimens Containing Dispersed Cuts and Continuous Cuts

We next try to understand why the dispersed cuts give rise to the enhanced rupture force. We performed birefringence observation of the rupture of specimens containing dispersed cuts and continuous cuts using a birefringence observation experimental setup, shown in Figure S1 : the specimen is placed between two crossed circular polarized films, and the two films are placed between a white lamp and a video camera [ ]. Real-time imaging of the birefringence of the specimens during loading allows us to investigate the crack tip behaviors at the rupture point for the D-cut and C-cut specimens. By this means, the highly deformed region of the specimens during rupture can be observed.
Figure 5 , Figure 6 and Figure 7 display representative snapshots of crack evolution in DN hydrogel specimens under different structural patterns containing dispersed cuts and continuous cuts at the spacing ratios of 1:0.5, 1:2, and 1:4, respectively. As seen in Figure 5 a, in the case of the spacing ratio of 1:0.5, for the D-cut specimens, owing to the small spacing between adjacent cuts, all the spacing regions are homogeneously deformed to a high level and exhibit a strong birefringence at the rupture point, just like the strong birefringence observed in the tensile process reported in the previous study by Gong et al. After reaching the critical value of the rupture force, some highly deformed spacing regions suddenly rupture, as seen in Figure 5 a. In the case of the C-cut specimens, a large bright triangle-shaped birefringence region occurs ahead of the crack tip ( Figure 5 b), like that observed in the "pure shear" geometry. As elucidated in the previous work by Gong et al., such a large bright birefringence area corresponds to the stress yielding occurring ahead of the crack tip, accompanied by the formation of a large yielding zone, in which the brittle first network is considered to severely rupture into fragments and the stretchable second network is highly deformed. The formation of a large yielding zone reduces the stress concentration at the crack tip, thereby blunting the crack tip and enhancing the fracture resistance of the material [ ].

With increasing spacing h[2] between adjacent cuts to reach the spacing ratio of 1:2, the spacing regions of the D-cut specimens are highly deformed and display nearly trapezoid-shaped birefringence areas, each of which is smaller than the triangle-shaped birefringence area in the C-cut specimens ( Figure 6 ).
With further increase of the spacing between adjacent cuts to reach a spacing ratio of 1:4, the spacing regions of the D-cut specimens are highly deformed to display nearly triangle-shaped birefringence areas, some of which are comparable to the triangle-shaped birefringence area in the C-cut specimens (Figure 7). Additionally, it should be mentioned that the C-cut specimens all exhibit directed crack propagation from left to right like the conventional “pure shear” specimen, while the D-cut specimens rupture randomly at the weakest points of the spacing regions. It is well known that the specimen ruptures only when the material point ahead of the crack tip (or ahead of the cuts) is highly stretched so that its stress exceeds the critical threshold value $σ_{threshold}$. Owing to the stress concentration, the crack tip stress $σ_{tip}$ is usually amplified relative to the bulk stress $σ_{bulk}$. If we denote by α the stress concentration factor, defined as the ratio of the crack tip stress to the bulk stress, $σ_{tip} = α σ_{bulk}$, a severe stress concentration at the crack tip means that the crack tip stress is amplified to α times the bulk stress. Because yielding dominates the yielding zone area (birefringence area) ahead of the crack tip, we can simply consider that the maximum stress of the material point ahead of the crack tip, which is also the critical threshold value, is related to the tensile yielding stress $σ_{y,tens}$ by a factor β; thus we have $σ_{threshold} = β σ_{y,tens}$. For the same material, β should be a constant. The rupture criterion is then $σ_{tip} ≥ σ_{threshold}$, i.e., $α σ_{bulk} ≥ β σ_{y,tens}$. With increasing bulk stress until the critical bulk stress $σ_{r,bulk}$ is reached at the rupture point, rupture occurs. So we have $α σ_{r,bulk} = β σ_{y,tens}$; thus, the normalized stress concentration factor is $α/β = σ_{y,tens}/σ_{r,bulk}$.
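The derivation above reduces the normalized stress concentration factor to a ratio of two measurable stresses. As a minimal numerical sketch (all stress values below are hypothetical placeholders, not the measured data of this work):

```python
# Normalized stress concentration factor: alpha/beta = sigma_y,tens / sigma_r,bulk.
# All stress values below are hypothetical placeholders (in MPa), not measured data.

def normalized_stress_concentration(sigma_y_tens, sigma_r_bulk):
    """Ratio of tensile yielding stress to critical bulk stress at rupture."""
    return sigma_y_tens / sigma_r_bulk

# A higher critical bulk stress at the rupture point (a stronger specimen) gives
# a lower ratio, i.e. a milder effective stress concentration at the crack tip.
c_cut = normalized_stress_concentration(0.30, 0.20)  # ≈ 1.5
d_cut = normalized_stress_concentration(0.30, 0.25)  # ≈ 1.2
assert d_cut < c_cut  # dispersed cuts: less severe stress concentration
```

A lower value of this ratio is what Figure 8 reports for the D-cut specimens relative to the C-cut specimens.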
We next plot $σ_{y,tens}/σ_{r,bulk}$ for the D-cut specimens and C-cut specimens in Figure 8. It can be seen that the values of $σ_{y,tens}/σ_{r,bulk}$ for the D-cut specimens are lower than those of the C-cut specimens, suggesting that the stress concentration due to the crack tip (or cuts) is less severe in the “dispersed cuts” cases. Note that for common DN hydrogels, even in the “continuous cuts” cases, the formation of a large yielding zone already remarkably reduces the stress concentration at the crack tip, thereby blunting the crack tip and enhancing the fracture resistance of the material. Here, we show that the introduction of a “dispersed cuts” pattern can further reduce the stress concentration at the crack tip, enhancing the rupture force. It can also be observed from the birefringence snapshots for the D-cut specimens in Figure 5, Figure 6 and Figure 7 that the spacing regions between every two adjacent cuts, which sustain the loads, are all deformed to a high level ahead of the crack tip, meaning that the dispersed cuts may homogenize the stress around the crack tip surrounding every cut, avoiding stress concentration at any one cut. Therefore, the “dispersed cuts” specimens can sustain an enhanced rupture force. 2.3. Characteristic Fracture Structure of the DN Gels with Dispersed and Continuous Cuts To clarify the fracture structure in the gels with cuts, we further observed the characteristic fracture structure of the gels by optical microscopy, and the results are presented in Figure 9. These images were obtained before and after stretching to λ = 2.1. Before stretching, a smooth surface was observed around the crack tip, and there was no characteristic structure at the microscopic scale (Figure 9a(i),b(i)). This is in agreement with the fact that, without deformation, the PAMPS network remains undamaged.
After stretching the gel to λ = 2.1, we found that the deformation can induce a wrinkle-like fracture structure around the crack tips. For the D-cut gel with dispersed cuts, a wrinkle-like fracture structure was observed around every cut. For the C-cut gel with continuous cuts, by contrast, only the innermost cut (marked in Figure 9b) shows a wrinkle-like fracture structure. The characteristic direction of the wrinkles is perpendicular to the direction of the applied force, suggesting a parallel and gradient fracture of the first network PAMPS along the stretching direction. Such a wrinkle structure has also been observed in ultrathin-film DN gels of 100 μm thickness. The wrinkle structure region around the crack tip is considered to be the damage zone dissipating a large amount of energy. The D-cut gels can form several damage zones while the C-cut gels can form only one damage zone; thus, the D-cut gels possibly dissipate more energy than the C-cut gels. 3. Conclusions We studied the fracture behaviors of tough materials with a kirigami-cut structure. Incorporating dispersed cuts into a tough hydrogel matrix can effectively improve the force at the rupture point. We demonstrate the effect of dispersed cuts by comparing the fracture behavior of D-cut specimens with dispersed cuts with that of C-cut specimens having the same number of continuous cuts. By combining fracture tests and fracture observation, we find that the introduction of dispersed cuts increases the rupture force by homogenizing the stress around the crack tip surrounding every cut, avoiding stress concentration at any one cut. The quantitative analysis of a normalized stress concentration factor $σ_{y,tens}/σ_{r,bulk}$ supports the finding that the introduction of a “dispersed cuts” pattern can reduce the stress concentration at the crack tip, enhancing the rupture force.
We believe this work not only provides some insights into the fracture of tough soft materials with a kirigami-cut structure, but also guides the design of soft and tough materials with targeted mechanical properties by the introduction of kirigami cuts. 4. Materials and Methods 4.1. Materials 2-Acrylamido-2-methylpropanesulfonic acid sodium salt (NaAMPS), acrylamide (AAm), N,N′-methylenebis(acrylamide) (MBAA), and α-ketoglutaric acid (α-keto) were purchased from Shanghai Aladdin Bio-Chem Technology Co., Ltd. (Shanghai, China) and used as received. Milli-Q water (resistivity: 18.3 MΩ·cm) was used in all experiments. 4.2. Synthesis of DN Hydrogels The poly(2-acrylamido-2-methylpropanesulfonic acid sodium salt)/polyacrylamide (PNaAMPS/PAAm) DN hydrogels were synthesized by a two-step sequential network formation technique following the literature. The first PNaAMPS network of the DN hydrogels was synthesized from an aqueous solution of 1.0 M NaAMPS containing 3 mol% crosslinking agent, MBAA, and 1 mol% initiator, α-keto. To perform the polymerization, the solution was purged with argon to remove dissolved oxygen and then poured into a reaction cell consisting of a pair of glass plates with 0.5 mm spacing. The reaction cell was irradiated with UV light (365 nm) for 8 h. These gels (first network) were then immersed in an aqueous solution of 2.0 M AAm containing 0.01 mol% MBAA and 0.01 mol% α-keto for one day until swelling equilibrium was reached. The polymerization was performed again by 365 nm UV irradiation for 8 h. The as-prepared DN gels were then immersed in pure water to reach equilibrium to obtain the DN gels for further experiments. 4.3. Preparation of Various Cut Patterns in DN Hydrogel Specimens To prepare various cut patterns in DN hydrogels for fracture experiments, as illustrated in Figure 1, the DN hydrogel samples were cut using a laser cutter machine (ULTRA R5000, Universal Laser Systems, Inc., Yokohama, Japan).
The length scale h[1] is defined as the horizontal size of the triangle-shaped cuts, while h[2] represents the horizontal spacing between adjacent cuts. The h[1] is kept constant at 1.5 mm in this work, and the spacing h[2] is controlled at various lengths of 0.75 mm, 1.5 mm, 3.0 mm, 4.5 mm, and 6.0 mm, respectively. Accordingly, the DN hydrogel specimens with structural cut patterns were prepared at different spacing ratios h[1]:h[2] (1:0.5, 1:1, 1:2, 1:3, and 1:4). 4.4. Real-Time Birefringence Observation on the Fracture Tests The experimental setup and sample size used for real-time birefringence observation of crack propagation in DN hydrogel specimens during the fracture tests are illustrated in Figure S1. The D-cut specimens and C-cut specimens were fixed with the pure shear clamps of a tensile tester machine (4466, Instron Instruments, Inc., Norwood, MA, USA), where the initial clamp distance was fixed at ~10 mm. The clamps were stretched by the tensile tester machine at a constant velocity of 50 mm/min, and the force–displacement curve was recorded. To perform real-time observation, the sample was placed between two crossed circular polarizing films. One polarizing film was set in front of a white lamp across the light path, and the other one was placed in front of the recording video camera. The in situ fracture process was recorded using an ordinary video camera (24 frames/s, 1920 × 1080 pixels, Sony α7S E-mount Camera, Sony Electronics Inc., San Diego, CA, USA). The entire procedure was performed in a dark room. 4.5. Tensile Test The tensile mechanical properties of the DN hydrogels were measured with a commercial test machine (4466, Instron Instruments, Inc., USA) in air. The samples were cut into dumbbell shapes standardized as the JIS K6251-7 size (gauge length 12 mm, width 2 mm) with a gel cutting machine (Dumbbell Co., Ltd., Kawagoe, Japan).
The nominal stress σ–stretch ratio λ curves were recorded while the sample gels were stretched at a constant velocity of 100 mm/min (strain rate of 0.14 s^−1). 4.6. Microscopic Observation of Cuts in the DN Gels Samples for the microscopic observation of tensile-induced fracture were shaped as shown in Figure 1. The samples were first subjected to a designated pre-strain of λ = 2.1 at a tensile velocity of 50 mm/min and then taken out for microscopic observation with a 5× objective lens in the free-standing state. For comparison, the gel without stretching was also observed. Supplementary Materials The following supporting information can be downloaded at: Figure S1: The cut surface and the pure shear fracture behaviors of DN hydrogels; Figure S2: The stress–strain curve of the bulk DN gels; Figure S3: Representative stress–strain curves of highly deformable DN hydrogels under different structural models with dispersed cuts and continuous cuts; Figure S4: Schematic diagram of the birefringence observation set-up for fracture tests. Author Contributions Conceptualization, Y.Y., Y.Z. and W.Y.; methodology, S.Z., D.Y., L.C., Y.W., F.Z., Y.Z. and Q.Z.; formal analysis, S.Z. and D.Y.; writing—original draft, S.Z. and Y.Z.; writing—review and editing, Y.Y. and Y.Z.; supervision, Y.Y., W.Y. and Q.Z. All authors have read and agreed to the published version of the manuscript. Funding This work was funded by the National Natural Science Foundation of China (32173026, 52103019), the Shanxi-Zheda Institute of Advanced Materials and Chemical Engineering (2021SX-TD001, 2021SX-TD010), the 2021 Major Science and Technology Program of Taiyuan, and the Fund for Shanxi “1331 Project”. Institutional Review Board Statement Not applicable. Informed Consent Statement Not applicable. Data Availability Statement The data presented in this study are available on request from the corresponding author. Conflicts of Interest The authors declare no conflict of interest.
Figure 1. Scheme of the structural models with dispersed cuts and continuous cuts patterns for fracture experiments. (a) Scheme of the structural model containing an array of dispersed cuts investigated in this work. The length scale h[1] is defined as the horizontal size of the triangle-shaped cuts, while h[2] represents the horizontal spacing between adjacent cuts. In this work, the h[1] is kept constant at 1.5 mm for simplicity. (b) The structural models with dispersed cuts at different spacing ratios h[1]:h[2] (1:0.5, 1:1, 1:2, 1:3, and 1:4) and their counterpart structural models with the same number of aligned continuous cuts. The pure shear model was used as the reference. Figure 2. Representative force curves of highly deformable DN hydrogels under different structural models with dispersed cuts (D-cut gels) and continuous cuts (C-cut gels). The representative force (F)–displacement (x) curves of DN hydrogels under different structural models containing dispersed cuts and continuous cuts at different spacing ratios h[1]:h[2] of 1:0.5 (a), 1:1 (b), 1:2 (c), 1:3 (d), and 1:4 (e). The force curves of samples with a pure shear geometry were also provided in (a,c,e) for comparison. Figure 3. Effect of spacing ratios h[1]:h[2] and dispersed/continuous cuts on the mechanical behaviors of DN hydrogel samples in fracture experiments. (a) Summarized force (F)–displacement (x) curves of DN hydrogels. (b) Rupture force (F[rupture]), (c) critical stretch ratio at rupture point (λ[rupture]), and (d) critical bulk stress at rupture point (σ[r,bulk]) for DN hydrogel samples containing dispersed cuts (D-cut gels) and continuous cuts (C-cut gels) at different spacing ratios h[1]:h[2] (1:0.5, 1:1, 1:2, 1:3, and 1:4). Figure 4. The rupture force ratio (F[dis]/F[con]) and normalized rupture stretch ratio (λ[dis]/λ[con]) as functions of spacing ratios h[1]:h[2].
The F[dis] and F[con] represent the rupture forces in fracture specimens containing dispersed cuts and continuous cuts, respectively. The λ[dis] and λ[con] denote the rupture stretch ratios in fracture experiments containing dispersed cuts (D-cut gels) and continuous cuts (C-cut gels), respectively. Figure 5. Birefringence observation during fracture of D-cut and C-cut gels at spacing ratio h[1]:h[2] of 1:0.5. (a,b) The representative snapshots of crack evolution in DN hydrogel samples under different structural models containing dispersed cuts ((a) D-cut gel) and continuous cuts ((b) C-cut gel) at spacing ratio h[1]:h[2] of 1:0.5. Figure 6. Birefringence observation during fracture of D-cut and C-cut gels at spacing ratio h[1]:h[2] of 1:2. (a,b) The representative snapshots of crack evolution in DN hydrogel samples under different structural models containing dispersed cuts ((a) D-cut gel) and continuous cuts ((b) C-cut gel) at spacing ratio h[1]:h[2] of 1:2. Figure 7. Birefringence observation during fracture of D-cut and C-cut gels at spacing ratio h[1]:h[2] of 1:4. (a,b) The representative snapshots of crack evolution in DN hydrogel samples under different structural models containing dispersed cuts ((a) D-cut gel) and continuous cuts ((b) C-cut gel) at spacing ratio h[1]:h[2] of 1:4. Figure 8. The normalized stress concentration ratio (the ratio between tensile yielding stress and critical bulk stress, σ[y,tens]/σ[r,bulk]) as functions of spacing ratios h[1]:h[2]. Figure 9. The crack tip structure of DN hydrogels was observed by optical microscopy. (a,b) The representative crack tip structure in DN hydrogel samples before and after loading to a stretch ratio λ of 2 under different structural models containing dispersed cuts (a) and continuous cuts (b) at spacing ratio h[1]:h[2] of 1:2. A wrinkle-like damaged structure can be observed ahead of the crack tip, corresponding to the damage zone observed by birefringence.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Zhu, S.; Yan, D.; Chen, L.; Wang, Y.; Zhu, F.; Ye, Y.; Zheng, Y.; Yu, W.; Zheng, Q. Enhanced Rupture Force in a Cut-Dispersed Double-Network Hydrogel. Gels 2023, 9, 158. https://doi.org/10.3390/gels9020158
Miles (US survey) to Earth's distance from sun Converter Enter Miles (US survey) Earth's distance from sun ⇅ Switch to Earth's distance from sun to Miles (US survey) Converter How to use this Miles (US survey) to Earth's distance from sun Converter 🤔 Follow these steps to convert the given length from the units of Miles (US survey) to the units of Earth's distance from sun. 1. Enter the input Miles (US survey) value in the text field. 2. The calculator converts the given Miles (US survey) into Earth's distance from sun in real time ⌚ using the conversion formula, and displays it under the Earth's distance from sun label. You do not need to click any button. If the input changes, the Earth's distance from sun value is re-calculated, just like that. 3. You may copy the resulting Earth's distance from sun value using the Copy button. 4. To view a detailed step-by-step calculation of the conversion, click on the View Calculation button. 5. You can also reset the input by clicking on the reset button present below the input field. What is the Formula to convert Miles (US survey) to Earth's distance from sun? The formula to convert a given length from Miles (US survey) to Earth's distance from sun is: Length[(Earth's distance from sun)] = Length[(Miles (US survey))] / 92956937.04847766 Substitute the given value of length in miles (US survey), i.e., Length[(Miles (US survey))], in the above formula and simplify the right-hand side. The resulting value is the length in Earth's distance from sun, i.e., Length[(Earth's distance from sun)]. Example 1: Consider that a cross-country road trip covers 2,000 miles (US survey). Convert this distance from miles (US survey) to Earth's distance from sun.
The length in miles (US survey) is: Length[(Miles (US survey))] = 2000 The formula to convert length from miles (US survey) to Earth's distance from sun is: Length[(Earth's distance from sun)] = Length[(Miles (US survey))] / 92956937.04847766 Substitute the given length Length[(Miles (US survey))] = 2000 in the above formula. Length[(Earth's distance from sun)] = 2000 / 92956937.04847766 Length[(Earth's distance from sun)] = 0.0000215153388602 Final Answer: Therefore, 2000 mi is equal to 0.0000215153388602 Earth's distance from sun. Example 2: Consider that a long-distance bike ride spans 100 miles (US survey). Convert this distance from miles (US survey) to Earth's distance from sun. The length in miles (US survey) is: Length[(Miles (US survey))] = 100 The formula to convert length from miles (US survey) to Earth's distance from sun is: Length[(Earth's distance from sun)] = Length[(Miles (US survey))] / 92956937.04847766 Substitute the given length Length[(Miles (US survey))] = 100 in the above formula. Length[(Earth's distance from sun)] = 100 / 92956937.04847766 Length[(Earth's distance from sun)] = 0.000001075766943 Final Answer: Therefore, 100 mi is equal to 0.000001075766943 Earth's distance from sun. Miles (US survey) to Earth's distance from sun Conversion Table The following table gives some of the most used conversions from Miles (US survey) to Earth's distance from sun.
Miles (US survey) (mi) → Earth's distance from sun
0 mi → 0 earth's distance from sun
1 mi → 1.076e-8 earth's distance from sun
2 mi → 2.152e-8 earth's distance from sun
3 mi → 3.227e-8 earth's distance from sun
4 mi → 4.303e-8 earth's distance from sun
5 mi → 5.379e-8 earth's distance from sun
6 mi → 6.455e-8 earth's distance from sun
7 mi → 7.53e-8 earth's distance from sun
8 mi → 8.606e-8 earth's distance from sun
9 mi → 9.682e-8 earth's distance from sun
10 mi → 1.0758e-7 earth's distance from sun
20 mi → 2.1515e-7 earth's distance from sun
50 mi → 5.3788e-7 earth's distance from sun
100 mi → 0.00000107577 earth's distance from sun
1000 mi → 0.00001075767 earth's distance from sun
10000 mi → 0.00010757669 earth's distance from sun
100000 mi → 0.00107576694 earth's distance from sun
Miles (US survey) A mile (US survey) is a unit of length used primarily in land surveying in the United States. One US survey mile is defined as exactly 5,280 US survey feet, or approximately 1,609.347 meters. The US survey mile is used in land measurement and mapping, differing slightly from the international mile, which is defined as exactly 1,609.344 meters. US survey miles are utilized primarily in the United States for property measurement, land surveying, and other applications requiring precise distance measurement. This unit ensures consistency and accuracy in surveying and mapping activities. Earth's distance from sun The Earth's average distance from the Sun, known as an astronomical unit (AU), is approximately 149.6 million kilometers or about 93 million miles. This distance represents the mean distance between the Earth and the Sun, which can vary slightly due to the elliptical shape of Earth's orbit. The Earth’s orbit is not a perfect circle but an ellipse, so the distance varies between about 147.1 million kilometers (91.4 million miles) at perihelion (closest approach) and about 152.1 million kilometers (94.5 million miles) at aphelion (farthest point).
The Earth's distance from the Sun is crucial for understanding solar radiation, climate patterns, and the length of seasons. It provides a baseline measurement used in astronomy and space science to define distances within our solar system and beyond. Frequently Asked Questions (FAQs) 1. What is the formula for converting Miles (US survey) to Earth's distance from sun in Length? The formula to convert Miles (US survey) to Earth's distance from sun in Length is: Miles (US survey) / 92956937.04847766 2. Is this tool free or paid? This Length conversion tool, which converts Miles (US survey) to Earth's distance from sun, is completely free to use. 3. How do I convert Length from Miles (US survey) to Earth's distance from sun? To convert Length from Miles (US survey) to Earth's distance from sun, you can use the following formula: Miles (US survey) / 92956937.04847766 For example, if you have a value in Miles (US survey), you substitute that value in place of Miles (US survey) in the above formula, and solve the mathematical expression to get the equivalent value in Earth's distance from sun.
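The formula in the FAQ above can be expressed directly in code. A minimal sketch (the function and constant names are illustrative; the divisor is the page's stated number of US survey miles per astronomical unit):

```python
# US survey miles per astronomical unit, as used by this converter page.
MILES_US_SURVEY_PER_AU = 92956937.04847766

def miles_us_survey_to_au(miles):
    """Convert a length in US survey miles to astronomical units (AU)."""
    return miles / MILES_US_SURVEY_PER_AU

# The page's worked examples:
print(miles_us_survey_to_au(2000))  # ≈ 0.0000215153388602 AU (the 2,000-mile road trip)
print(miles_us_survey_to_au(100))   # ≈ 0.000001075766943 AU (the 100-mile bike ride)
```

Dividing by the constant is equivalent to moving along the conversion table row by row: ten times the mileage gives ten times the AU value.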
At “Geometry, Quantum Fields, and Strings: Categorial Aspects” Posted by Urs Schreiber This week at Oberwolfach takes place a workshop titled Geometry, Quantum Fields, and Strings: Categorial Aspects, organized by Peter Bouwknegt, Dan Freed, Christoph Schweigert. The workshop’s webpage is here. Posted at June 8, 2010 12:48 PM UTC Orientifold differential cohomology Close to my heart today was the talk by Greg Moore on his work with Jacques Distler and Dan Freed (which we have talked about before here on various occasions) on the differential cohomology of orientifold backgrounds for string sigma-models. The slides he presented are available here: • Greg Moore, The RR-charge of an orientifold. Posted by: Urs Schreiber on June 8, 2010 1:21 PM | Permalink | Reply to this Differential T-duality Yesterday Alexander Kahle told us about his work with Alessandro Valentino on refining the aspect of T-duality that is known as topological T-duality from an operation acting on just K-theory cocycles to an action on differential K-classes – the RR-fields: • Alexander Kahle, Alessandro Valentino, T-duality and Differential K-Theory. Posted by: Urs Schreiber on June 8, 2010 3:49 PM | Permalink | Reply to this Tricategory of conformal nets Not that I necessarily have to travel to hear about this, but Arthur Bartels gave another nice summary of the main result of • Arthur Bartels, Chris Douglas, André Henriques, Conformal nets and local field theory on the symmetric monoidal tricategory of conformal nets. Posted by: Urs Schreiber on June 8, 2010 4:07 PM | Permalink | Reply to this A-oo-category valued FQFT from Lagrangian correspondence Chris Woodward talked about his joint work with Kathrin Wehrheim on Lagrangian correspondences and $A_\infty$-category valued TFTs.
Posted by: Urs Schreiber on June 8, 2010 9:43 PM | Permalink | Reply to this Re: At Geometry, Quantum Fields, and Strings: Categorial Aspects A bunch of us (Samson Abramsky, Peter Hines, Lou Kaufman, Sanjeevi Krishnan, Jimie Lawson, Gordon Plotkin, Jamie Vicary, …) are also in Germany, at Oberwolfach’s CS-counterpart Dagstuhl where The Semantics of Information organized by Mike Mislove and Keye Martin is taking place. Posted by: bob on June 9, 2010 1:22 PM | Permalink | Reply to this At The Semantics of Information A bunch of us […] are also in Germany, at Oberwolfach’s CS-counterpart Dagstuhl where The Semantics of Information organized by Mike Mislove and Keye Martin is taking place. Then we should have a dedicated entry on this. If you would like to make a guest post, just send me some content by email (preferably code that also compiles in a comment-window here) and I’ll post it for you. Posted by: Urs Schreiber on June 9, 2010 9:29 PM | Permalink | Reply to this Geometric and topological structures related to M-branes Hisham Sati gave a brief survey of aspects of Geometric and topological structures related to M-branes. Posted by: Urs Schreiber on June 10, 2010 2:25 AM | Permalink | Reply to this Motivic Donaldson-Thomas invariants Yan Soibelman talked about his work with Maxim Kontsevich on motivic Donaldson-Thomas invariants (see there for a review). Maybe I find time to post my notes later. But we were told that just a few minutes before the talk, Maxim Kontsevich submitted a preprint with more details to the arXiv. So I suppose tomorrow I can point to that, too. By the way, a message to my fellow $n$-lab assistants: I haven’t been able to open the $n$-Forum page for days (I think precisely for the days that I am here in Oberwolfach): when I point my browser to it, it keeps loading and loading and loading, and nothing ever appears. So that’s why I haven’t been logging my recent edits.
Posted by: Urs Schreiber on June 10, 2010 11:40 AM | Permalink | Reply to this Re: Motivic Donaldson-Thomas invariants That’s not a general problem with the n-Forum. It’s working for me and others. Posted by: David Corfield on June 10, 2010 11:57 AM | Permalink | Reply to this Re: Motivic Donaldson-Thomas invariants But we were told that just a few minutes before the talk, Maxim Kontsevich submitted a preprint with more details to the arXiv. So I suppose tomorrow I can point to that, too. As Zoran reminds me, it’s Cohomological Hall algebra, exponential Hodge structures and motivic Donaldson-Thomas invariants Posted by: Urs Schreiber on June 15, 2010 7:10 PM | Permalink | Reply to this 2d TQFT and Filtrations of the moduli space of surfaces Ezra Getzler again gave a talk on his work on the topology of the moduli space of punctured surfaces. By putting a filtration on this that reflects the $n$-categorical filtration of the modular operad, one can very efficiently read off various kinds of classification results of 2d TQFTs from the homotopy groups of the filtered subspaces. I put my notes from his talk at 2d TQFT into the section Filtrations of the moduli space of surfaces. Posted by: Urs Schreiber on June 10, 2010 1:41 PM | Permalink | Reply to this Witt classes of vertex operator algebras Alexei Davydov spoke about how two vertex operator algebras may appear as the left and right chiral parts of a full 2d CFT precisely if their modular tensor categories of representations have the same Witt class. Notes on the central statement I put into the section Full versus chiral CFT at conformal field theory. I missed the reference to be given here, will try to provide it later.
Posted by: Urs Schreiber on June 10, 2010 4:22 PM | Permalink | Reply to this Re: Witt classes of vertex operator algebras Posted by: Gytis on December 28, 2015 3:48 PM | Permalink | Reply to this T-duality from path-integral reasoning Kentaro Hori gave a review of the path-integral heuristic argument for seeing T-duality for the string on the circle. So I used the opportunity to add that part of the standard story to the $n$Lab entry, at Path integral heuristics deriving T-duality. If I understand correctly then Kentaro Hori’s point is to give a particularly elegant collection of auxiliary fields on the worldsheet such that T-duality becomes manifest simply by integrating these out in two different orders. He also did the superstring analog, but that I found too tedious to type out. Posted by: Urs Schreiber on June 11, 2010 11:28 AM | Permalink | Reply to this
Feature Detection
Canny
Finds edges in an image using the [Canny86] algorithm.
C++: void Canny(InputArray image, OutputArray edges, double threshold1, double threshold2, int apertureSize=3, bool L2gradient=false )
Python: cv2.Canny(image, threshold1, threshold2[, edges[, apertureSize[, L2gradient]]]) → edges
C: void cvCanny(const CvArr* image, CvArr* edges, double threshold1, double threshold2, int aperture_size=3 )
Parameters:
• image – 8-bit input image.
• edges – output edge map; single-channel 8-bit image, which has the same size as image .
• threshold1 – first threshold for the hysteresis procedure.
• threshold2 – second threshold for the hysteresis procedure.
• apertureSize – aperture size for the Sobel() operator.
• L2gradient – a flag indicating whether a more accurate L2 norm sqrt((dI/dx)^2 + (dI/dy)^2) should be used to calculate the image gradient magnitude (L2gradient=true), or whether the default L1 norm |dI/dx| + |dI/dy| is enough (L2gradient=false).
The function finds edges in the input image image and marks them in the output map edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking. The largest value is used to find initial segments of strong edges. See http://en.wikipedia.org/wiki/Canny_edge_detector
• An example on using the canny edge detector can be found at opencv_source_code/samples/cpp/edge.cpp
• (Python) An example on using the canny edge detector can be found at opencv_source_code/samples/python/edge.py
cornerEigenValsAndVecs
Calculates eigenvalues and eigenvectors of image blocks for corner detection.
C++: void cornerEigenValsAndVecs(InputArray src, OutputArray dst, int blockSize, int ksize, int borderType=BORDER_DEFAULT )
Python: cv2.cornerEigenValsAndVecs(src, blockSize, ksize[, dst[, borderType]]) → dst
C: void cvCornerEigenValsAndVecs(const CvArr* image, CvArr* eigenvv, int block_size, int aperture_size=3 )
Parameters:
• src – Input single-channel 8-bit or floating-point image.
• dst – Image to store the results. It has the same size as src and the type CV_32FC(6) .
• blockSize – Neighborhood size (see details below).
• ksize – Aperture parameter for the Sobel() operator.
• borderType – Pixel extrapolation method. See borderInterpolate().

For every pixel, cornerEigenValsAndVecs considers a blockSize × blockSize neighborhood where the derivatives are computed using the Sobel() operator. After that, it finds the eigenvalues and eigenvectors of the resulting 2×2 covariance matrix of derivatives.

The output of the function can be used for robust edge or corner detection.

• (Python) An example on how to use eigenvectors and eigenvalues to estimate image texture flow direction can be found at opencv_source_code/samples/python2/texture_flow.py

Harris corner detector.

C++: void cornerHarris(InputArray src, OutputArray dst, int blockSize, int ksize, double k, int borderType=BORDER_DEFAULT )
Python: cv2.cornerHarris(src, blockSize, ksize, k[, dst[, borderType]]) → dst
C: void cvCornerHarris(const CvArr* image, CvArr* harris_response, int block_size, int aperture_size=3, double k=0.04 )

Parameters:
• src – Input single-channel 8-bit or floating-point image.
• dst – Image to store the Harris detector responses. It has the type CV_32FC1 and the same size as src.
• blockSize – Neighborhood size (see the details on cornerEigenValsAndVecs()).
• ksize – Aperture parameter for the Sobel() operator.
• k – Harris detector free parameter. See the formula below.
• borderType – Pixel extrapolation method. See borderInterpolate().

The function runs the Harris corner detector on the image. Similarly to cornerMinEigenVal() and cornerEigenValsAndVecs(), for each pixel (x, y) it calculates the 2×2 gradient covariance matrix M over a blockSize × blockSize neighborhood and then computes the response

dst(x, y) = det M - k * (tr M)^2

Corners in the image can be found as the local maxima of this response map.

Calculates the minimal eigenvalue of gradient matrices for corner detection.
C++: void cornerMinEigenVal(InputArray src, OutputArray dst, int blockSize, int ksize=3, int borderType=BORDER_DEFAULT )
Python: cv2.cornerMinEigenVal(src, blockSize[, dst[, ksize[, borderType]]]) → dst
C: void cvCornerMinEigenVal(const CvArr* image, CvArr* eigenval, int block_size, int aperture_size=3 )

Parameters:
• src – Input single-channel 8-bit or floating-point image.
• dst – Image to store the minimal eigenvalues. It has the type CV_32FC1 and the same size as src.
• blockSize – Neighborhood size (see the details on cornerEigenValsAndVecs()).
• ksize – Aperture parameter for the Sobel() operator.
• borderType – Pixel extrapolation method. See borderInterpolate().

The function is similar to cornerEigenValsAndVecs() but it calculates and stores only the minimal eigenvalue of the covariance matrix of derivatives, that is, min(λ1, λ2) in terms of the formulae in the cornerEigenValsAndVecs() description.

Refines the corner locations.

C++: void cornerSubPix(InputArray image, InputOutputArray corners, Size winSize, Size zeroZone, TermCriteria criteria)
Python: cv2.cornerSubPix(image, corners, winSize, zeroZone, criteria) → corners
C: void cvFindCornerSubPix(const CvArr* image, CvPoint2D32f* corners, int count, CvSize win, CvSize zero_zone, CvTermCriteria criteria)

Parameters:
• image – Input image.
• corners – Initial coordinates of the input corners and refined coordinates provided for output.
• winSize – Half of the side length of the search window. For example, if winSize=Size(5,5), then a (5*2+1) × (5*2+1) = 11 × 11 search window is used.
• zeroZone – Half of the size of the dead region in the middle of the search zone over which the summation in the formula below is not done. It is used sometimes to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such a size.
• criteria – Criteria for termination of the iterative process of corner refinement.
That is, the process of corner position refinement stops either after criteria.maxCount iterations or when the corner position moves by less than criteria.epsilon on some iteration.

The function iterates to find the sub-pixel accurate location of corners or radial saddle points, as shown in the figure below. The sub-pixel accurate corner locator is based on the observation that every vector from the center q to a point p located within a neighborhood of q is orthogonal to the image gradient at p, subject to image and measurement noise. The resulting equations, in which the gradients are summed within a neighborhood ("search window") of q, are solved for a new center. The algorithm sets the center of the neighborhood window at this new center and iterates until the center moves by less than the given threshold.

Determines strong corners on an image.

C++: void goodFeaturesToTrack(InputArray image, OutputArray corners, int maxCorners, double qualityLevel, double minDistance, InputArray mask=noArray(), int blockSize=3, bool useHarrisDetector=false, double k=0.04 )
Python: cv2.goodFeaturesToTrack(image, maxCorners, qualityLevel, minDistance[, corners[, mask[, blockSize[, useHarrisDetector[, k]]]]]) → corners
C: void cvGoodFeaturesToTrack(const CvArr* image, CvArr* eig_image, CvArr* temp_image, CvPoint2D32f* corners, int* corner_count, double quality_level, double min_distance, const CvArr* mask=NULL, int block_size=3, int use_harris=0, double k=0.04 )

Parameters:
• image – Input 8-bit or floating-point 32-bit, single-channel image.
• eig_image – The parameter is ignored.
• temp_image – The parameter is ignored.
• corners – Output vector of detected corners.
• maxCorners – Maximum number of corners to return. If more corners are found, the strongest of them are returned.
• qualityLevel – Parameter characterizing the minimal accepted quality of image corners. The parameter value is multiplied by the best corner quality measure, which is the minimal eigenvalue (see cornerMinEigenVal()) or the Harris function response (see cornerHarris()). The corners with the quality measure less than the product are rejected.
For example, if the best corner has the quality measure = 1500, and the qualityLevel=0.01, then all the corners with the quality measure less than 15 are rejected.

• minDistance – Minimum possible Euclidean distance between the returned corners.
• mask – Optional region of interest. If the image is not empty (it needs to have the type CV_8UC1 and the same size as image), it specifies the region in which the corners are detected.
• blockSize – Size of an average block for computing a derivative covariation matrix over each pixel neighborhood. See cornerEigenValsAndVecs().
• useHarrisDetector – Parameter indicating whether to use a Harris detector (see cornerHarris()) or cornerMinEigenVal().
• k – Free parameter of the Harris detector.

The function finds the most prominent corners in the image or in the specified image region, as described in [Shi94]:

1. Function calculates the corner quality measure at every source image pixel using cornerMinEigenVal() or cornerHarris().
2. Function performs a non-maximum suppression (the local maximums in 3 x 3 neighborhood are retained).
3. The corners with the quality measure less than qualityLevel multiplied by the best quality measure found in the image are rejected.
4. The remaining corners are sorted by the quality measure in the descending order.
5. Function throws away each corner for which there is a stronger corner at a distance less than minDistance.

The function can be used to initialize a point-based tracker of an object.

If the function is called with different values A and B of the parameter qualityLevel, and A > B, the vector of returned corners with qualityLevel=A will be the prefix of the output vector with qualityLevel=B.

Finds circles in a grayscale image using the Hough transform.
C++: void HoughCircles(InputArray image, OutputArray circles, int method, double dp, double minDist, double param1=100, double param2=100, int minRadius=0, int maxRadius=0 )
C: CvSeq* cvHoughCircles(CvArr* image, void* circle_storage, int method, double dp, double min_dist, double param1=100, double param2=100, int min_radius=0, int max_radius=0 )
Python: cv2.HoughCircles(image, method, dp, minDist[, circles[, param1[, param2[, minRadius[, maxRadius]]]]]) → circles

Parameters:
• image – 8-bit, single-channel, grayscale input image.
• circles – Output vector of found circles. Each vector is encoded as a 3-element floating-point vector (x, y, radius).
• circle_storage – In the C function, this is a memory storage that will contain the output sequence of found circles.
• method – Detection method to use. Currently, the only implemented method is CV_HOUGH_GRADIENT, which is basically 21HT, described in [Yuen90].
• dp – Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1, the accumulator has the same resolution as the input image. If dp=2, the accumulator has half as big width and height.
• minDist – Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.
• param1 – First method-specific parameter. In case of CV_HOUGH_GRADIENT, it is the higher threshold of the two passed to the Canny() edge detector (the lower one is twice smaller).
• param2 – Second method-specific parameter. In case of CV_HOUGH_GRADIENT, it is the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles, corresponding to the larger accumulator values, will be returned first.
• minRadius – Minimum circle radius.
• maxRadius – Maximum circle radius.

The function finds circles in a grayscale image using a modification of the Hough transform.
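The voting idea behind the transform can be illustrated in a few lines of pure Python before the C++ sample below. This is a brute-force sketch for intuition only: OpenCV's CV_HOUGH_GRADIENT uses the far more efficient two-stage 21HT, and the function name `hough_circles`, the candidate grid, and the synthetic points here are our own assumptions, not part of OpenCV's API.

```python
import math
from collections import Counter

def hough_circles(points, center_range, r_max):
    """Brute-force Hough voting: every edge point votes for every
    (cx, cy, r) triple it is consistent with; the true circle's
    parameters collect the most votes."""
    acc = Counter()
    for (x, y) in points:
        for cx in center_range:
            for cy in center_range:
                r = round(math.hypot(x - cx, y - cy))
                if 1 <= r <= r_max:
                    acc[(cx, cy, r)] += 1
    return acc.most_common(1)[0]  # ((cx, cy, r), votes)

# synthetic "edge points" on a circle of radius 5 centred at (10, 10)
pts = [(10 + 5 * math.cos(2 * math.pi * k / 36),
        10 + 5 * math.sin(2 * math.pi * k / 36)) for k in range(36)]
best, votes = hough_circles(pts, range(21), r_max=8)
print(best, votes)  # (10, 10, 5) 36
```

The accumulator-threshold role of param2 corresponds here to requiring a minimum vote count before a triple is accepted as a circle.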
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <math.h>

using namespace cv;

int main(int argc, char** argv)
{
    Mat img, gray;
    if( argc != 2 || !(img=imread(argv[1], 1)).data)
        return -1;
    cvtColor(img, gray, COLOR_BGR2GRAY);
    // smooth it, otherwise a lot of false circles may be detected
    GaussianBlur( gray, gray, Size(9, 9), 2, 2 );
    vector<Vec3f> circles;
    HoughCircles(gray, circles, HOUGH_GRADIENT, 2, gray.rows/4, 200, 100 );
    for( size_t i = 0; i < circles.size(); i++ )
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        // draw the circle center
        circle( img, center, 3, Scalar(0,255,0), -1, 8, 0 );
        // draw the circle outline
        circle( img, center, radius, Scalar(0,0,255), 3, 8, 0 );
    }
    namedWindow( "circles", 1 );
    imshow( "circles", img );
    waitKey(0);
    return 0;
}

Usually the function detects the centers of circles well. However, it may fail to find correct radii. You can assist the function by specifying the radius range (minRadius and maxRadius) if you know it. Or, you may ignore the returned radius, use only the center, and find the correct radius using an additional procedure.

• An example using the Hough circle detector can be found at opencv_source_code/samples/cpp/houghcircles.cpp

Finds lines in a binary image using the standard Hough transform.

C++: void HoughLines(InputArray image, OutputArray lines, double rho, double theta, int threshold, double srn=0, double stn=0, double min_theta=0, double max_theta=CV_PI )
Python: cv2.HoughLines(image, rho, theta, threshold[, lines[, srn[, stn[, min_theta[, max_theta]]]]]) → lines
C: CvSeq* cvHoughLines2(CvArr* image, void* line_storage, int method, double rho, double theta, int threshold, double param1=0, double param2=0, double min_theta=0, double max_theta=CV_PI )

Parameters:
• image – 8-bit, single-channel binary source image. The image may be modified by the function.
• lines – Output vector of lines.
Each line is represented by a two-element vector (ρ, θ), where ρ is the distance from the coordinate origin (0,0) (top-left corner of the image) and θ is the line rotation angle in radians.

• rho – Distance resolution of the accumulator in pixels.
• theta – Angle resolution of the accumulator in radians.
• threshold – Accumulator threshold parameter. Only those lines are returned that get enough votes (> threshold).
• srn – For the multi-scale Hough transform, it is a divisor for the distance resolution rho. The coarse accumulator distance resolution is rho and the accurate accumulator resolution is rho/srn. If both srn=0 and stn=0, the classical Hough transform is used. Otherwise, both these parameters should be positive.
• stn – For the multi-scale Hough transform, it is a divisor for the distance resolution theta.
• min_theta – For standard and multi-scale Hough transform, minimum angle to check for lines. Must fall between 0 and max_theta.
• max_theta – For standard and multi-scale Hough transform, maximum angle to check for lines. Must fall between min_theta and CV_PI.
• method – One of the following Hough transform variants:
□ CV_HOUGH_STANDARD – classical or standard Hough transform. Every line is represented by two floating-point numbers (ρ, θ), so the matrix must be (the created sequence will be) of the CV_32FC2 type.
□ CV_HOUGH_PROBABILISTIC – probabilistic Hough transform (more efficient in case if the picture contains a few long linear segments). It returns line segments rather than the whole line. Each segment is represented by starting and ending points, and the matrix must be (the created sequence will be) of the CV_32SC4 type.
□ CV_HOUGH_MULTI_SCALE – multi-scale variant of the classical Hough transform. The lines are encoded the same way as in CV_HOUGH_STANDARD.
• param1 – First method-dependent parameter:
□ For the classical Hough transform, it is not used (0).
□ For the probabilistic Hough transform, it is the minimum line length.
□ For the multi-scale Hough transform, it is srn.
• param2 – Second method-dependent parameter:
□ For the classical Hough transform, it is not used (0).
□ For the probabilistic Hough transform, it is the maximum gap between line segments lying on the same line to treat them as a single line segment (that is, to join them). □ For the multi-scale Hough transform, it is stn. The function implements the standard or standard multi-scale Hough transform algorithm for line detection. See http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm for a good explanation of Hough transform. See also the example in HoughLinesP() description. • An example using the Hough line detector can be found at opencv_source_code/samples/cpp/houghlines.cpp Finds line segments in a binary image using the probabilistic Hough transform. C++: void HoughLinesP(InputArray image, OutputArray lines, double rho, double theta, int threshold, double minLineLength=0, double maxLineGap=0 ) Python: cv2.HoughLinesP(image, rho, theta, threshold[, lines[, minLineLength[, maxLineGap]]]) → lines • image – 8-bit, single-channel binary source image. The image may be modified by the function. • lines – Output vector of lines. Each line is represented by a 4-element vector • rho – Distance resolution of the accumulator in pixels. Parameters: • theta – Angle resolution of the accumulator in radians. • threshold – Accumulator threshold parameter. Only those lines are returned that get enough votes ( • minLineLength – Minimum line length. Line segments shorter than that are rejected. • maxLineGap – Maximum allowed gap between points on the same line to link them. The function implements the probabilistic Hough transform algorithm for line detection, described in [Matas00]. See the line detection example below: /* This is a standalone program. Pass an image name as the first parameter of the program. 
Switch between standard and probabilistic Hough transform by changing "#if 1" to "#if 0" and back */
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>

using namespace cv;

int main(int argc, char** argv)
{
    Mat src, dst, color_dst;
    if( argc != 2 || !(src=imread(argv[1], 0)).data)
        return -1;
    Canny( src, dst, 50, 200, 3 );
    cvtColor( dst, color_dst, COLOR_GRAY2BGR );
#if 0
    vector<Vec2f> lines;
    HoughLines( dst, lines, 1, CV_PI/180, 100 );
    for( size_t i = 0; i < lines.size(); i++ )
    {
        float rho = lines[i][0];
        float theta = lines[i][1];
        double a = cos(theta), b = sin(theta);
        double x0 = a*rho, y0 = b*rho;
        Point pt1(cvRound(x0 + 1000*(-b)), cvRound(y0 + 1000*(a)));
        Point pt2(cvRound(x0 - 1000*(-b)), cvRound(y0 - 1000*(a)));
        line( color_dst, pt1, pt2, Scalar(0,0,255), 3, 8 );
    }
#else
    vector<Vec4i> lines;
    HoughLinesP( dst, lines, 1, CV_PI/180, 80, 30, 10 );
    for( size_t i = 0; i < lines.size(); i++ )
        line( color_dst, Point(lines[i][0], lines[i][1]),
              Point(lines[i][2], lines[i][3]), Scalar(0,0,255), 3, 8 );
#endif
    namedWindow( "Source", 1 );
    imshow( "Source", src );
    namedWindow( "Detected Lines", 1 );
    imshow( "Detected Lines", color_dst );
    waitKey(0);
    return 0;
}

This is a sample picture the function parameters have been tuned for:

And this is the output of the above program in case of the probabilistic Hough transform:

Line segment detector class, following the algorithm described in [Rafael12].

class LineSegmentDetector : public Algorithm

Creates a smart pointer to a LineSegmentDetector object and initializes it.

C++: Ptr<LineSegmentDetector> createLineSegmentDetector(int _refine=LSD_REFINE_STD, double _scale=0.8, double _sigma_scale=0.6, double _quant=2.0, double _ang_th=22.5, double _log_eps=0, double _density_th=0.7, int _n_bins=1024)
Python: cv2.createLineSegmentDetector([_refine[, _scale[, _sigma_scale[, _quant[, _ang_th[, _log_eps[, _density_th[, _n_bins]]]]]]]]) → retval

The LineSegmentDetector algorithm is defined using the standard values.
Only advanced users may want to edit those, as to tailor it for their own application.

Finds lines in the input image. See the lsd_lines.cpp sample for possible usage.

C++: void LineSegmentDetector::detect(const InputArray _image, OutputArray _lines, OutputArray width=noArray(), OutputArray prec=noArray(), OutputArray nfa=noArray())
Python: cv2.createLineSegmentDetector.detect(_image[, _lines[, width[, prec[, nfa]]]]) → _lines, width, prec, nfa

Parameters:
• _image – A grayscale (CV_8UC1) input image. If only a ROI needs to be selected, use:
lsd_ptr->detect(image(roi), lines, ...);
lines += Scalar(roi.x, roi.y, roi.x, roi.y);
• lines – A vector of Vec4i elements specifying the beginning and ending point of a line, where Vec4i is (x1, y1, x2, y2): point 1 is the start, point 2 the end. Returned lines are strictly oriented depending on the gradient.
• width – Vector of widths of the regions, where the lines are found, e.g. the width of a line.
• prec – Vector of precisions with which the lines are found.
• nfa – Vector containing the number of false alarms in the line region, with precision of 10%. The bigger the value, logarithmically better the detection.
□ -1 corresponds to 10 mean false alarms
□ 0 corresponds to 1 mean false alarm
□ 1 corresponds to 0.1 mean false alarms
This vector will be calculated only when the object's type is LSD_REFINE_ADV.

This is the output of the default parameters of the algorithm on the above shown image.

• An example using the LineSegmentDetector can be found at opencv_source_code/samples/cpp/lsd_lines.cpp

Draws the line segments on a given image.

C++: void LineSegmentDetector::drawSegments(InputOutputArray _image, InputArray lines)
Python: cv2.createLineSegmentDetector.drawSegments(_image, lines) → _image

Parameters:
• image – The image, where the lines will be drawn. Should be bigger or equal to the image, where the lines were found.
• lines – A vector of the lines that need to be drawn.
Draws two groups of lines in blue and red, counting the non-overlapping (mismatching) pixels.

C++: int LineSegmentDetector::compareSegments(const Size& size, InputArray lines1, InputArray lines2, InputOutputArray _image=noArray())
Python: cv2.createLineSegmentDetector.compareSegments(size, lines1, lines2[, _image]) → retval, _image

Parameters:
• size – The size of the image, where lines1 and lines2 were found.
• lines1 – The first group of lines that needs to be drawn. It is visualized in blue color.
• lines2 – The second group of lines. They are visualized in red color.
• image – Optional image, where the lines will be drawn. The image should be color (3-channel) in order for lines1 and lines2 to be drawn in the above mentioned colors.

Calculates a feature map for corner detection.

C++: void preCornerDetect(InputArray src, OutputArray dst, int ksize, int borderType=BORDER_DEFAULT )
Python: cv2.preCornerDetect(src, ksize[, dst[, borderType]]) → dst
C: void cvPreCornerDetect(const CvArr* image, CvArr* corners, int aperture_size=3 )

Parameters:
• src – Source single-channel 8-bit or floating-point image.
• dst – Output image that has the type CV_32F and the same size as src.
• ksize – Aperture size of the Sobel() operator.
• borderType – Pixel extrapolation method. See borderInterpolate().

The function calculates the complex spatial derivative-based function of the source image

dst = Dx(src)^2 * Dyy(src) + Dy(src)^2 * Dxx(src) - 2 * Dx(src) * Dy(src) * Dxy(src)

where Dx, Dy are the first image derivatives, Dxx, Dyy are the second image derivatives, and Dxy is the mixed derivative.

The corners can be found as local maximums of the function, as shown below:

Mat corners, dilated_corners;
preCornerDetect(image, corners, 3);
// dilation with 3x3 rectangular structuring element
dilate(corners, dilated_corners, Mat(), 1);
Mat corner_mask = corners == dilated_corners;

[Canny86] J. Canny. A Computational Approach to Edge Detection, IEEE Trans. on Pattern Analysis and Machine Intelligence, 8(6), pp. 679-698 (1986).
[Matas00] Matas, J. and Galambos, C.
and Kittler, J.V., Robust Detection of Lines Using the Progressive Probabilistic Hough Transform. CVIU 78 1, pp 119-137 (2000)
[Shi94] J. Shi and C. Tomasi. Good Features to Track. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 593-600, June 1994.
[Yuen90] Yuen, H. K. and Princen, J. and Illingworth, J. and Kittler, J., Comparative study of Hough transform methods for circle finding. Image Vision Comput. 8 1, pp 71-77 (1990)
[Rafael12] Rafael Grompone von Gioi, Jérémie Jakubowicz, Jean-Michel Morel, and Gregory Randall, LSD: a Line Segment Detector, Image Processing On Line, vol. 2012. http://dx.doi.org/10.5201/
{"url":"https://docs.opencv.org/3.0-beta/modules/imgproc/doc/feature_detection.html","timestamp":"2024-11-11T00:04:18Z","content_type":"text/html","content_length":"103174","record_id":"<urn:uuid:8297b989-3eec-42ee-8541-a1318117cc02>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00388.warc.gz"}
Calculating the Length of a Triangle: An Essential Guide

In Mathematics education, understanding the concept of length in a triangle is fundamental. The length of a triangle refers to the measurement of its sides or any other line segment within the triangle. It plays a crucial role in various mathematical calculations and problem-solving techniques. By deciphering the length of a triangle, we can determine its perimeter, area, and even explore relationships between sides and angles. In this article, we will delve into the intricacies of measuring the length of a triangle, exploring different techniques and formulas that enable us to accurately determine this important attribute.

The Concept of Triangle Length

The concept of triangle length refers to the measurement of the sides of a triangle. In mathematics education, it is important to understand how to determine the length of each side in order to solve various geometric problems involving triangles. The length of a triangle's sides can be determined using various formulas and principles, such as the Pythagorean theorem or the Law of Sines.

Calculating Triangle Length using the Pythagorean Theorem

The Pythagorean theorem is a fundamental principle in mathematics that states that in a right triangle, the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the other two sides. This theorem can be used to calculate the length of a triangle's side when the lengths of the other two sides are known. By rearranging the equation, we can isolate the length of the unknown side and find its value.

Using Trigonometric Ratios to Find Triangle Length

Trigonometric ratios, such as sine, cosine, and tangent, can also be used to find the length of a triangle's sides. These ratios relate the angles of a right triangle to the ratios of the lengths of its sides.
By applying these ratios and knowing the values of certain angles and at least one side length, we can determine the lengths of the other sides of the triangle.

Applying the Law of Sines to Find Triangle Length

The Law of Sines is another mathematical principle that can be used to find the length of a triangle's sides. It states that the ratio of the length of a side of a triangle to the sine of its opposite angle is constant for all sides and angles of the triangle. By rearranging the equation, we can isolate the length of an unknown side and solve for its value using known angle measures and side lengths.

frequently asked questions

What is the length of a triangle and how is it determined?

The length of a triangle is determined by measuring the distance between its vertices or by using the Pythagorean theorem to calculate the lengths of its sides.

How can I calculate the length of a triangle if only the base and height are given?

The length of a triangle can be calculated by using the formula for the area of a triangle: Area = (base * height) / 2. To find the length, you would rearrange the formula: length = (2 * Area) / height.

Is there a specific formula to find the length of a triangle if the side lengths are known?

Yes, there is a specific formula to find the length of a triangle if the side lengths are known. It's called Heron's formula.

Can the length of a triangle be found if only the angles are given?

No, the length of a triangle cannot be found if only the angles are given.

Are there any specific measurements or criteria that need to be known in order to determine the length of a triangle accurately?

In order to determine the length of a triangle accurately, one needs to know the measurements of at least two sides or one side and the included angle.
These measurements can be used in various trigonometric formulas, such as the Law of Sines or the Law of Cosines, to calculate the missing side lengths.

In conclusion, understanding the concept of length in a triangle is crucial in Mathematics education. By applying the Pythagorean theorem and utilizing various formulas, students can accurately calculate the lengths of different sides and segments within a triangle. It is important for educators to emphasize the significance of these calculations and provide practical examples to enhance students' comprehension. Strengthening their knowledge in this area will not only enhance their problem-solving skills but also lay a solid foundation for more advanced mathematical concepts. So, let's empower our students with the tools they need to confidently tackle any triangle length problem that comes their way.
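A short script makes the two key formulas above concrete (a hedged sketch; the function name `third_side` is ours): the Law of Cosines computes the third side from two sides and their included angle, and reduces to the Pythagorean theorem when the angle is 90 degrees.

```python
import math

def third_side(a, b, gamma_deg):
    """Law of Cosines: c^2 = a^2 + b^2 - 2*a*b*cos(gamma)."""
    gamma = math.radians(gamma_deg)
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(gamma))

# With a 90-degree included angle this is exactly the Pythagorean theorem:
print(third_side(3, 4, 90))   # ~5.0 (the classic 3-4-5 right triangle)

# An equilateral check: two sides of 1 with a 60-degree angle give ~1.
print(third_side(1, 1, 60))
```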
{"url":"https://warreninstitute.org/what-is-the-length-of-a-triangle/","timestamp":"2024-11-05T22:45:27Z","content_type":"text/html","content_length":"101638","record_id":"<urn:uuid:19605173-74d4-4d02-a739-2fc5940bb88e>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00227.warc.gz"}
How to determine percentage of T-cell subsets by flow. Basic math help. - (Apr/17/2009 )

Hello Everyone,

I am starting a new project. Though I understand the concept, I have never worked with flow cytometry before and I am very confused about how to calculate the final percentages of cells. I have been given several flow cytometry reports and have been asked to determine the percentages of:

1. gamma-delta/CD8+
2. alpha-beta/CD8+
3. gamma-delta/CD8-
4. CD4-/CD8-/gamma-delta
5. CD3-/CD4-/CD8-

For some reason, when I start subtracting one population from another in a way that sounds logical to me, I get results that are not so logical. The acquisition was done years ago and repeating it with other Ab pairings is not an option. These were the Ab pairings used:

a. CD4/CD8
b. Gamma-delta/alpha-beta
c. CD8/CD3
d. CD4/CD3
e. CD4/CD8
f. CD16/56/CD3
g. CD19/CD3
h. CD19/CD16/56
i. CD3/CD103

If anyone can help me with the basic math I would appreciate it.

What do you mean by your results are "not so logical"? You should be able to use the analysis software to gate the cells based on a threshold and calculate the percentages in each population automatically. What program are you using for analysis?

gfischer on Apr 17 2009, 01:27 PM said:

What do you mean by your results are "not so logical"? You should be able to use the analysis software to gate the cells based on a threshold and calculate the percentages in each population automatically. What program are you using for analysis?

Thank you for your reply, gfischer! I do not have any software. Just a bunch of printed papers with dot plots. My PI gave me an example of how to calculate CD8-/gamma-delta. He suggested subtracting the gamma-delta % from the CD4-/CD8- %.
The percentages reported were:

CD4+: 6
CD8+: 9
alpha-beta (ab): 15
gamma-delta (gd): 6
No TCR: 78
CD4-/CD8-: 82
CD3-: 69
CD3+/CD8-: 10
CD3-/CD8-: 80
CD3-/CD4-: 78

In the following case, if I do that it will give me:

Gamma-delta/CD8- = 82% - 6% = 76%

But how can there be 76% of gamma-delta/CD8- cells if 78% have no TCR? This is what I do not find logical.

Thanks again if you can help me calculate all those subsets!
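For later readers, the inconsistency can be made precise with one elementary fact: assuming all these percentages are of the same parent population, a joint gate (gamma-delta AND CD8-) can never be more frequent than either parent gate alone. A tiny pure-Python check using only the percentages quoted above (the variable names are ours):

```python
# Percentages quoted in the post above (of the same parent population)
pct = {"gd": 6.0, "CD4-/CD8-": 82.0, "no_TCR": 78.0}

# P(A and B) <= min(P(A), P(B)) for any two gates A and B
upper_bound = min(pct["gd"], pct["CD4-/CD8-"])

naive = pct["CD4-/CD8-"] - pct["gd"]  # the suggested 82 - 6 calculation
print(upper_bound, naive)  # 6.0 76.0 -> 76% exceeds the 6% bound
```

So whatever the correct formula turns out to be, gamma-delta/CD8- must come out at 6% or below; subtracting whole-population percentages mixes incompatible denominators.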
{"url":"http://www.protocol-online.org/biology-forums-2/posts/7584.html","timestamp":"2024-11-10T22:01:53Z","content_type":"text/html","content_length":"12755","record_id":"<urn:uuid:967a584a-cf71-4b8c-934a-633734d6e903>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00704.warc.gz"}
CondTotalVarIC-class: Conditionally centered influence curve of total variation type in ROptRegTS: Optimally Robust Estimation for Regression-Type Models

Description Objects from the Class Slots Extends Methods Author(s) References See Also Examples

Class of conditionally centered (partial) influence curves of total variation type for average conditional total variation with lower clipping function c, upper clipping function b and standardizing matrix A. Lambda_f stands for the L2 derivative of the corresponding error distribution.

Objects can be created by calls of the form new("CondTotalVarIC", ...). More frequently they are created via the generating function CondTotalVarIC, respectively via the method generateIC.

object of class "call": creates an object of the underlying L2-differentiable regression type family.
object of class "matrix" with two columns named method and message: additional information.
object of class "OptionalNumeric": optional constant for lower case solution.
object of class "numeric": radius of the corresponding conditional contamination neighborhood.
object of class "numeric": radius curve of the corresponding conditional contamination neighborhood.

Class "CondIC", directly. Class "IC", by class "CondIC". Class "InfluenceCurve", by class "CondIC".

signature(neighbor = "CondTotalVarNeighborhood", L2Fam = "L2RegTypeFamily"): generate an object of class "CondTotalVarIC". Rarely called directly.

Kohl, M. (2005) Numerical Contributions to the Asymptotic Theory of Robustness. Bayreuth: Dissertation.
{"url":"https://rdrr.io/cran/ROptRegTS/man/CondTotalVarIC-class.html","timestamp":"2024-11-05T06:45:54Z","content_type":"text/html","content_length":"39195","record_id":"<urn:uuid:131e4419-e16f-4111-b5a8-16b044712034>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00655.warc.gz"}
Question #5ce0c | Socratic

1 Answer

To find the inverse function you can do the following:

1. Write the formula of $y = f \left(x\right)$.
2. Transform it to calculate $x$ as $f \left(y\right)$

The function you get in 2. is the inverse function.

Here we have:

$y = {2}^{x} - 3$

${2}^{x} = y + 3$

$x = {\log}_{2} \left(y + 3\right)$

Now we can change the letters $x$ and $y$ to get the function in the usual form with $x$ as independent variable:

$y = {\log}_{2} \left(x + 3\right)$
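The algebra can be double-checked numerically (a minimal sketch; the function names `f` and `f_inv` are ours):

```python
import math

def f(x):
    return 2 ** x - 3          # the original function y = 2^x - 3

def f_inv(x):
    return math.log2(x + 3)    # the inverse derived above

# composing the two in either order should give back the input
for x in (-2, 0, 1.5, 4):
    assert abs(f_inv(f(x)) - x) < 1e-12
    assert abs(f(f_inv(x)) - x) < 1e-12
print("inverse verified")
```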
{"url":"https://socratic.org/questions/592051ab7c01495083b5ce0c#426770","timestamp":"2024-11-01T22:26:21Z","content_type":"text/html","content_length":"32244","record_id":"<urn:uuid:bc15996f-d313-493c-a310-97e18c07ab0d>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00061.warc.gz"}
SVR - SVM Tutorial

Today we are going to talk about SVMs in general. I recently received an email from a reader of my series of articles about the math behind SVM: I felt I got deviated a lot on the Math part and its derivations and assumptions and finally got confused what exactly SVM is? And when to use ...
Outlier Detection Using PyOD in Python

In today's data-driven world, identifying anomalies or outliers in datasets is crucial for various applications, from fraud detection in finance to identifying rare diseases in healthcare. Outliers are data points that deviate significantly from the majority of the data, and they can indicate data entry errors, rare events, or novel patterns. Detecting these outliers effectively requires specialized tools and methods.

PyOD (Python Outlier Detection) is a dedicated Python package designed to streamline this task. With its rich set of algorithms and ease of integration with other data analysis tools, PyOD has become a go-to solution for outlier detection in multivariate datasets. In this article, we'll explore the capabilities of PyOD, demonstrate how to implement some of its algorithms, and discuss the advantages and challenges associated with using this package.

Why Use PyOD for Outlier Detection?

1. Versatility

PyOD offers a broad spectrum of algorithms tailored for various anomaly detection needs. Whether you're dealing with high-dimensional data, time series, or simple univariate data, PyOD has a method suited for your task. Some of the most popular algorithms available in PyOD include:

• K-Nearest Neighbors (KNN): A simple yet effective method for detecting outliers based on the distance to the k nearest neighbors.
• Isolation Forest: An ensemble-based method that isolates observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature.
• AutoEncoder: A deep learning model that learns a compressed representation of the data, useful for detecting anomalies in high-dimensional spaces.
• One-Class SVM: A variant of the Support Vector Machine that finds a boundary in the data beyond which any point is considered an outlier.

2. Ease of Use

PyOD's API is consistent and straightforward, making it easy to test and compare different outlier detection methods. Whether you're a beginner or an experienced data scientist, PyOD's user-friendly interface lets you implement complex algorithms with minimal code.

3. Seamless Integration

PyOD works seamlessly with other popular Python data analysis libraries like NumPy, Pandas, and Scikit-learn. This integration lets you include outlier detection as part of a larger data analysis pipeline without any hassle: you can fit a PyOD model within your existing workflow, leveraging its compatibility with these libraries.

How to Use PyOD: A Step-by-Step Guide

1. Installation

To get started with PyOD, install the package using pip:

pip install pyod

2. Basic Workflow

Let's walk through a basic workflow to detect outliers in a dataset using PyOD. We'll use a synthetic dataset for simplicity, but the same principles apply to real-world data.

# Importing necessary libraries
import numpy as np
import pandas as pd
from pyod.models.knn import KNN
from pyod.models.iforest import IForest
from pyod.models.auto_encoder import AutoEncoder
from pyod.utils.data import generate_data
from pyod.utils.data import evaluate_print

# Generating a synthetic dataset
X_train, X_test, y_train, y_test = generate_data(
    n_train=200, n_test=100, n_features=2,
    contamination=0.1, random_state=42)

# Checking the shape of the dataset
print("Training data shape:", X_train.shape)
print("Test data shape:", X_test.shape)

# Initializing a KNN model
knn = KNN()

# Fitting the model
knn.fit(X_train)

# Predicting the test data
y_test_pred = knn.predict(X_test)

# Evaluating the model
evaluate_print('KNN', y_test, y_test_pred)

3. Choosing the Right Algorithm

The KNN algorithm is a good starting point for many outlier detection tasks, but PyOD offers several other algorithms depending on the nature of your data:

• Isolation Forest (IForest): Ideal for high-dimensional datasets.
• AutoEncoder: Best suited for datasets with complex, nonlinear relationships.
• One-Class SVM: Useful when the data distribution is skewed or when the dataset is small.

Here's how you can implement Isolation Forest and AutoEncoder using PyOD:

# Initializing an Isolation Forest model
iforest = IForest()

# Fitting the model
iforest.fit(X_train)

# Predicting the test data
y_test_pred_iforest = iforest.predict(X_test)

# Evaluating the model
evaluate_print('IForest', y_test, y_test_pred_iforest)

# Initializing an AutoEncoder model
auto_encoder = AutoEncoder(epochs=30, batch_size=32, contamination=0.1)

# Fitting the model
auto_encoder.fit(X_train)

# Predicting the test data
y_test_pred_ae = auto_encoder.predict(X_test)

# Evaluating the model
evaluate_print('AutoEncoder', y_test, y_test_pred_ae)

4. Interpreting the Results

After fitting the models and predicting the test data, the evaluate_print function provides a summary of the model's performance. Metrics like ROC AUC and precision at rank n help you understand how well the model is identifying outliers. For example, in the case of the KNN model, you might see output similar to this:

KNN ROC: 0.95, precision @ rank n: 0.85

This indicates that the model has a high area under the ROC curve (0.95), meaning it's effective at distinguishing between inliers and outliers.

5. Visualizing the Results

Visualization is key to understanding how the model is performing and where the outliers are located. Here's an example that plots the KNN predictions using matplotlib:

import matplotlib.pyplot as plt

# Plotting the KNN predictions
plt.figure(figsize=(10, 6))
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test_pred, cmap='coolwarm')
plt.title('KNN Outlier Detection')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.show()

This scatter plot visualizes the outliers detected by the KNN model, where each point is colored based on whether it's classified as an outlier or not.

Challenges of Using PyOD

1. Algorithm Complexity

While PyOD offers advanced algorithms like AutoEncoder and Isolation Forest, these methods can be computationally expensive and may require fine-tuning. AutoEncoders, for example, involve training deep neural networks, which can be challenging without sufficient computational resources or expertise in deep learning.

2. Documentation Gaps

Although PyOD's documentation is comprehensive, some advanced features are less documented or harder to implement without additional examples. Users may need to experiment with the code or seek community support to fully leverage these advanced features.

PyOD vs. Other Popular Packages

PyOD vs. Scikit-learn

Scikit-learn is a versatile library that includes some outlier detection methods, but it's more general-purpose and lacks the specialized focus of PyOD. PyOD's rich set of algorithms and dedicated tools for outlier detection make it a better choice for this task.

PyOD vs. TensorFlow

TensorFlow is a powerful deep learning library but is generally more suited for complex models and deep learning tasks. While TensorFlow can be used for anomaly detection, it requires a steeper learning curve. PyOD, on the other hand, offers a user-friendly experience tailored for anomaly detection.

PyOD is a powerful and versatile tool for outlier detection in Python. Its extensive range of algorithms, ease of use, and seamless integration with other data analysis libraries make it an excellent choice for detecting anomalies in multivariate datasets. Despite some challenges, such as the complexity of advanced algorithms and occasional documentation gaps, PyOD stands out as a specialized package that excels in its domain. Whether you're working in finance, healthcare, cybersecurity, or any other field where anomaly detection is crucial, PyOD provides the tools you need to identify outliers with confidence.
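As a side note on how predict produces its 0/1 labels: PyOD detectors threshold their raw outlier scores at the contamination quantile of the training scores. The following is a minimal sketch of that thresholding mechanism only (the function name is ours, not PyOD's implementation):

```python
import numpy as np

def scores_to_labels(scores, contamination=0.1):
    """Turn raw outlier scores into 0/1 labels the way contamination-based
    detectors do: flag the top `contamination` fraction of scores as
    outliers (label 1). A sketch of the mechanism, not PyOD's code."""
    threshold = np.percentile(scores, 100 * (1 - contamination))
    return (scores > threshold).astype(int)

# Eight small "inlier" scores and two large "outlier" scores.
scores = np.array([0.1, 0.2, 0.15, 0.3, 5.0, 0.25, 0.18, 0.22, 0.12, 4.2])
labels = scores_to_labels(scores, contamination=0.2)
print(labels.sum())  # 2 of the 10 points are flagged
```

The same idea explains why the `contamination` parameter matters so much: it directly sets how many points end up labeled as outliers, regardless of the underlying scoring algorithm.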
With PyOD, you can enhance your data analysis workflows and ensure that you catch those rare, but critical, outliers in your datasets.
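To make the nearest-neighbor idea behind the KNN detector concrete without depending on PyOD, here is a self-contained NumPy sketch of the same scoring rule: each point is scored by its distance to its k-th nearest neighbor. This is a simplification of what pyod.models.knn.KNN computes, and the function name is ours:

```python
import numpy as np

def knn_outlier_scores(X, k=5):
    """Score each row of X by its distance to its k-th nearest neighbor
    (larger score = more isolated = more outlier-like). A NumPy sketch
    of the idea behind PyOD's KNN detector, not its implementation."""
    # Pairwise Euclidean distances, shape (n, n).
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # After sorting each row, column 0 is the distance to the point
    # itself (0), so column k is the k-th nearest neighbor distance.
    dist.sort(axis=1)
    return dist[:, k]

rng = np.random.default_rng(42)
X = rng.normal(0, 1, size=(100, 2))  # dense inlier cluster
X[0] = [8.0, 8.0]                    # one planted, obvious outlier
scores = knn_outlier_scores(X, k=5)
assert scores[0] == scores.max()     # the planted outlier scores highest
```

Combining this scorer with a contamination-quantile threshold reproduces, in miniature, the fit/predict behavior of the library's KNN model.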
Search results for: POLYNOMIAL ALGORITHM - MOST Wiedzy

• Publication Dominating sets find application in a variety of networks. A subset of nodes D is a (1,2)-dominating set in a graph G=(V,E) if every node not in D is adjacent to a node in D and is also at a distance of at most 2 from another node in D. In networks, (1,2)-dominating sets have a higher fault tolerance and provide a higher reliability of services in case of failure. However, finding the smallest such set is NP-hard. In this paper, we...

• Publication The paper describes a polynomial algorithm that determines optimal T-colorings of generalized cacti.

• Publication In the note we consider vertex coloring of a graph in which each color has an associated cost which is incurred each time the color is assigned to a vertex. The cost of coloring is the sum of costs incurred at each vertex. We show that the minimum cost coloring problem for an n-vertex bipartite graph of degree ∆≤4 can be solved in O(n^2) time. This extends Jansen's result [K. Jansen, The optimum cost chromatic partition problem, in:...

• Publication We consider a distributed model of computation in which the structure of the system is represented by a graph of direct communication links. In this model we present a new self-stabilizing algorithm for finding a spanning tree. To the best of our knowledge, it is the first algorithm for this problem with a guaranteed polynomial number of moves.

• Publication The goal of this paper is to explore and to provide tools for the investigation of the problems of unit-length scheduling of incompatible jobs on uniform machines. We present two new algorithms that are a significant improvement over the known algorithms. The first one is Algorithm 2, which is 2-approximate for the problem Qm|pj = 1, G = bisubquartic|Cmax. The second one is Algorithm 3, which is 4-approximate for the problem Qm|p...
• Publication The article describes a problem of splitting data for k-fold cross-validation, where class proportions must be preserved, with the additional constraint that data is divided into groups that cannot be split across different cross-validation sets. This problem often occurs in e.g. medical data processing, where data samples from one patient must be included in the same cross-validation set. As this problem is NP-complete, a heuristic...

• Publication Convolution is one of the main algorithms performed in digital signal processing. The algorithm is similar to polynomial multiplication and computationally very intensive. This paper presents a new convolution algorithm based on the Polynomial Residue Number System (PRNS). The use of the PRNS makes it possible to decompose the computation problem and thereby reduce the number of multiplications. The algorithm has been implemented in Xilinx...

• Publication This paper presents a study of fast discrete convolution calculation with use of the Polynomial Residue Number System (PRNS). Convolution can be based on an algorithm similar to polynomial multiplication. The residue arithmetic allows for fast realization of multiplication and addition, which are the most important arithmetic operations in the implementation of convolution. The practical aspects of hardware realization of PRNS...

• Publication We consider the CHROMATIC SUM PROBLEM on bipartite graphs, which appears to be much harder than the classical CHROMATIC NUMBER PROBLEM. We prove that the CHROMATIC SUM PROBLEM is NP-complete on planar bipartite graphs with ∆≤5, but polynomial on bipartite graphs with ∆≤3, for which we construct an O(n^2)-time algorithm. Hence, we tighten the borderline of intractability for this...

• Publication The problem of scheduling jobs on parallel machines under an incompatibility relation is considered in this paper.
In this model, a binary relation between jobs is given and no two jobs that are in the relation can be scheduled on the same machine. We consider job scheduling under the incompatibility relation modeled by a bipartite graph, under the makespan optimality criterion, on uniform and unrelated machines. Unrelated machines...

• Publication Inverse problems of contact heat conduction with an interfacial heat source are common in various fields of science, engineering and technology. In this study, an algorithm for their solution is developed based on an inverse parametric optimisation method with an impulse response function describing the heat partition and contact heat transfer. A first-order thermocouple model with a time constant parameter is embedded in the impulse...

• Publication We study the shared processor scheduling problem with a single shared processor to maximize total weighted overlap, where the overlap for a job is the amount of time it is processed on its private and shared processor in parallel. A polynomial-time optimization algorithm has been given for the problem with equal weights in the literature. This paper extends that result by showing an (log)-time optimization algorithm for a class...

• Publication We present some characterizations of characteristic graphs of row and/or column symmetric channels. We also give a polynomial-time algorithm that decides whether there exists a discrete symmetric channel whose characteristic graph is equal to a given input graph. In addition, we show several applications of our results.

• Publication The problem of scheduling n identical jobs on 4 uniform machines with speeds s1≥s2≥s3≥s4 is considered. The aim is to find a schedule with minimum possible length. We assume that jobs are subject to mutual exclusion constraints modeled by a bipartite incompatibility graph of degree delta. We show that the general problem is NP-hard even if s1=s2=s3.
If, however, delta<5 and s1>12s2, s2=s3=s4, then the problem can be solved to...

• Publication In this paper we consider a problem of graph P-coloring consisting in partitioning the vertex set of a graph such that each of the resulting sets induces a graph in a given additive, hereditary class of graphs P. We focus on partitions generated by the greedy algorithm. In particular, we show that given a graph G and an integer k, deciding if the greedy algorithm outputs a P-coloring with at least k colors is NP-complete for an infinite...

• Publication A hypergraph is equitably k-colorable if its vertices can be partitioned into k sets/color classes in such a way that monochromatic edges are avoided and the number of vertices in any two color classes differs by at most one. We prove that the problem of equitable 2-coloring of hypergraphs is NP-complete even for 3-uniform hyperstars. Finally, we apply the method of dynamic programming for designing a polynomial-time algorithm to...

• Publication The following paper presents a new model for analysis (in polynomial time) of the effectiveness of guard systems. It also presents practical applications of the model in problems such as searching for the weakest points of the system, planning guards' paths or camera deployment, switching images from multiple cameras onto several monitors, or interception of an intruder. This model is based on describing the guarded...

• Publication In this paper we consider a variant of graph partitioning consisting in partitioning the vertex set of a graph into the minimum number of sets such that each of them induces a graph in a hereditary class of graphs P (the problem is also known as P-coloring). We focus on the computational complexity of several problems related to greedy partitioning. In particular, we show that given a graph G and an integer k, deciding if the greedy...
• Publication In the letter, an improved algorithm for electromagnetic (EM)-driven size reduction of wideband antennas is proposed. Our methodology utilizes variable-fidelity EM simulation models, auxiliary polynomial regression surrogates, as well as multi-point response correction. The constraint handling is implicit, using penalty functions. The core optimization algorithm is a trust-region gradient search with a repair step added in order...

• Publication In this study, the authors propose a robust and computationally efficient algorithm for simulation-driven design optimisation of microwave structures. Our technique exploits variable-fidelity electromagnetic models of the structure under consideration. The low-fidelity model is optimised using its local response surface approximation surrogates. The high-fidelity model is refined by space mapping with polynomial interpolation of...

• Publication We analyse the stability of the four-dimensional Kitaev model, a candidate for scalable quantum memory, at finite temperature within the weak-coupling Markovian limit. It is shown that, below a critical temperature, certain topological qubit observables X and Z possess relaxation times exponentially long in the size of the system. Their construction involves a polynomial-in-system-size algorithm which uses as input the results of...

• Publication Given a graph G and a sequence of color costs C, the Cost Coloring optimization problem consists in finding a coloring of G with the smallest total cost with respect to C. We present an analysis of this problem with respect to weighted bipartite graphs. We specify for which finite sequences of color costs the problem is NP-hard and we present an exact polynomial algorithm for the other finite sequences. These results are then extended...

• Publication In this paper, the problem of scheduling on parallel machines in the presence of incompatibilities between jobs is considered.
The incompatibility relation can be modeled as a complete multipartite graph in which each edge denotes a pair of jobs that cannot be scheduled on the same machine. The paper provides several results concerning schedules, optimal or approximate with respect to the two most popular criteria of optimality:...

• Publication A graph is equitably k-colorable if its vertices can be partitioned into k independent sets in such a way that the numbers of vertices in any two sets differ by at most one. The smallest k for which such a coloring exists is known as the equitable chromatic number of G. In this paper the problem of determining the equitable coloring number for coronas of cubic graphs is studied. Although the problem of ordinary coloring of coronas...

• Publication In the paper we study the computational complexity of the backbone coloring problem for planar graphs with connected backbones. For every possible value of the integer parameters λ≥2 and k≥1 we show that the following problem: Instance: A simple planar graph G, its connected spanning subgraph (backbone) H. Question: Is there a λ-backbone coloring c of G with backbone H such that max c(V(G)) ≤ k? is either NP-complete or polynomially...

• Publication A set S ⊂ V is called a secure set iff for every X ⊂ S, |N[X] ∩ S| ≥ |N(X) \ S| [3]. That means that every subset of a secure set has at least as many friends (neighbour vertices in S) as enemies (neighbour vertices outside S) and will be defended in case of attack. The problem of determining if a given set is secure is co-NP-complete; there is no efficient algorithm solving it [3]. Property testers are algorithms that distinguish inputs...

• Publication Dynamics is an inherent feature of many real-life systems, so it is natural to define and investigate the properties of models that reflect their dynamic nature. Dynamic graph colorings can be naturally applied in system modeling, e.g.
for scheduling threads of parallel programs, time sharing in wireless networks, session scheduling in high-speed LANs, channel assignment in WDM optical networks, as well as traffic scheduling. In...

• Publication We consider a bi-criteria generalization of the pathwidth problem, where, for given integers k, l and a graph G, we ask whether there exists a path decomposition P of G such that the width of P is at most k and the number of bags in P, i.e., the length of P, is at most l. We provide a complete complexity classification of the problem in terms of k and l for general graphs. Contrary to the original pathwidth problem, which is fixed-parameter...

• Publication A problem of graph F-free coloring consists in partitioning the vertex set of a graph such that none of the resulting sets induces a graph containing a fixed graph F as an induced subgraph. In this paper we consider dynamic F-free coloring in which, similarly as in online coloring, the graph to be colored is not known in advance; it is gradually revealed to the coloring algorithm, which has to color each vertex upon request as well...

• Publication The objective of this paper is to present a robust optimization algorithm for computationally efficient airfoil design under mixed (inherent and epistemic) uncertainty using a multi-fidelity approach. This algorithm exploits stochastic expansions derived from the Non-Intrusive Polynomial Chaos (NIPC) technique to create surrogate models utilized in the optimization process. A combined NIPC expansion approach is used, where both...

• Publication The tree-depth problem can be seen as finding an elimination tree of minimum height for a given input graph G. We introduce a bi-criteria generalization in which additionally the width of the elimination tree needs to be bounded by some input integer b. We are interested in the case when G is the line graph of a tree, proving that the problem is NP-hard and obtaining a polynomial-time additive 2b-approximation algorithm.
This particular...

• Publication We study a shared multi-processor scheduling problem where each job can be executed on its private processor and simultaneously on one of many processors shared by all jobs in order to reduce the job's completion time due to processing time overlap. The total weighted overlap of all jobs is to be maximized. The problem models subcontracting scheduling in supply chains and divisible load scheduling in computing. We show that synchronized...

• Publication This paper presents a mathematical model for the hysteresis phenomenon in a ferromagnetic tape wound core. The feedback scalar Preisach model of hysteresis is used to simulate the magnetic behavior of the grain-oriented silicon strip of ET114-27 type. Determination of the B-H hysteretic curve is based on measurement of the initial magnetization curve and the main hysteresis loop. The Preisach distribution function (PDF) of the ET114-27 material...

• Publication Savitzky–Golay (SG) filtering is a classical signal smoothing technique based on the local least squares approximation of the analyzed signal by a linear combination of known functions of time (originally powers of time, which corresponds to polynomial approximation). It is shown that the regularized version of the SG algorithm can be successfully applied to identification of time-varying finite impulse response (FIR) systems....

• Publication We study several problems of clearing subgraphs by mobile agents in digraphs. The agents can move only along directed walks of a digraph and, depending on the variant, their initial positions may be pre-specified. In general, for a given subset S of vertices of a digraph D and a positive integer k, the objective is to determine whether there is a subgraph H=(V,A) of D such that (a) S is a subset of V, (b) H is the union of k directed...

• Publication We consider the problem of finding edge search strategies of minimum cost.
The cost of a search strategy is the sum of searchers used in the clearing steps of the search. One of the natural questions is whether it is possible to find a search strategy that minimizes both the cost and the number of searchers used to clear a given graph G. We call such a strategy ideal. We prove, by an example, that ideal search strategies do not...

• Publication One of the ways to improve the fuel economy of heavy-duty trucks is to operate the combustion engine in its most efficient operating points. To do that, a mathematical model of the engine is required, which shows the relations between engine speed, torque and fuel consumption in transient states. In this paper, easily accessible exploitation data collected via the CAN bus of a heavy-duty truck were used to obtain a model of a diesel...

• Publication We study the problem of rendezvous of two mobile agents starting at distinct locations in an unknown graph. The agents have distinct labels and walk in synchronous steps. However, the graph is unlabelled and the agents have no means of marking the nodes of the graph and cannot communicate with or see each other until they meet at a node. When the graph is very large we want the time to rendezvous to be independent of the graph size...

• Publication We study a problem of shared processor scheduling of multiprocessor weighted jobs. Each job can be executed on its private processor and simultaneously on possibly many processors shared by all jobs. This simultaneous execution reduces their completion times due to the processing time overlap. Each of the m shared processors may charge a different fee, but otherwise the processors are identical. The goal is to maximize the total...

• Publication The dissertation discusses the problems of scheduling compatible jobs on parallel machines. Some jobs are incompatible, which is modeled as a binary relation on the set of jobs; the relation is often modeled by an incompatibility graph. We consider two models of machines.
The first model, more emphasized in the thesis, is a classical model of scheduling, where each machine does one job at a time. The second one is a model of p-batching...

• Publication In this paper, novel methods for the evaluation of digital-filter stability are investigated. The methods are based on phase analysis of a complex function in the characteristic equation of a digital filter. This allows for evaluating stability when a characteristic equation is not based on a polynomial. The operation of these methods relies on sampling the unit circle on the complex plane and extracting the phase quadrant of a function...

• Publication Given a graph G, a spanning subgraph H of G and an integer λ≥2, a λ-backbone coloring of G with backbone H is a vertex coloring of G using colors 1, 2, ..., in which the color difference between vertices adjacent in H is at least λ. The backbone coloring problem is to find such a coloring with maximum color not exceeding a given limit k. In this paper, we study the backbone coloring problem for bounded-degree...

• Publication A team of mobile entities, which we refer to as agents or searchers interchangeably, starting from homebases, needs to complete a given task in a graph. The goal is to build a strategy which allows the agents to accomplish their task. We analyze strategies for their effectiveness (e.g., the number of agents used, the total number of moves performed by the agents, or the completion time). Currently, the fields of on-line (i.e., agents...

• Publication Generalized adaptive notch filters are used for identification/tracking of quasi-periodically varying dynamic systems and can be considered an extension, to the system case, of classical adaptive notch filters. For general patterns of frequency variation the generalized adaptive notch filtering algorithms yield biased frequency estimates. We show that when system frequencies change slowly in a smooth way, the estimation bias can...
• Publication Given a graph G = (V(G), E(G)), the sizes of a minimum dominating set, a minimum paired dominating set, and a minimum total dominating set of G are denoted by γ(G), γpr(G), and γt(G), respectively. For a positive integer k, a k-packing in G is a set S ⊆ V(G) such that for every pair of distinct vertices u and v in S, the distance between u and v is at least k + 1. The k-packing number is the order of a largest k-packing and...

• Publication In open shop scheduling with bioperational jobs, each job consists of two unit operations with a delay between the end of the first operation and the beginning of the second one. The no-wait requirement enforces that the delay between operations is equal to 0. No-idle means that there is no idle time on any machine. We model this problem by the interval incidentor (1, 1)-coloring (IIR(1, 1)-coloring) of a graph with the minimum...

• Publication ALS point cloud filtering involves the separation of observations representing the physical terrain surface from those representing terrain details. A digital terrain model (DTM) is created from a subset of points representing the ground surface. The accuracy of the generated DTM is influenced by several factors, including the survey method used, the accuracy of the source data, the applied DTM generation algorithm, and the survey...
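Several of the search results above concern greedy coloring and greedy partitioning. As a minimal, self-contained illustration of the scheme those abstracts refer to, here is classic first-fit greedy vertex coloring in Python (our own sketch, not code from any of the listed publications):

```python
def greedy_coloring(adj):
    """First-fit greedy coloring: visit vertices in order and give each
    the smallest color not already used by a colored neighbor.
    `adj` maps each vertex to its set of neighbors."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# A 5-cycle has chromatic number 3; greedy in this vertex order finds it.
c5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
coloring = greedy_coloring(c5)
assert all(coloring[u] != coloring[v] for u in c5 for v in c5[u])
print(max(coloring.values()) + 1)  # 3 colors used
```

As some of the abstracts note, deciding whether greedy can be forced to use many colors (or to produce a P-coloring with at least k classes) is NP-complete in general, even though each greedy run itself is polynomial.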
Markov Decision Processes with Uncertain Parameters

Markov Decision Processes (MDPs) are a commonly used model type to describe decision making in stochastic models. However, the parameters of MDPs usually result from data or from expert estimates. In both cases the parameters are uncertain. This uncertainty has to be taken into account when computing policies that describe the decisions to be made over time. The parameters of the MDP are then not exactly known; instead, a set of MDPs is defined which captures the parameter uncertainty. The goal is to find a policy that behaves well for all possible realizations of the uncertainty. "Good behavior" is not uniquely defined; common interpretations are an optimal policy for the worst possible realization of the uncertainty (robust optimization), an optimal policy for the expected behavior (risk-neutral stochastic optimization), an optimal policy for the expected behavior plus some guarantee for the worst case (stochastic optimization with a baseline guarantee), and so on. In general, computing optimal policies for MDPs with uncertain parameters is much more complex than computing optimal policies for MDPs with known parameters; many problems are NP-hard. However, good approximation algorithms can often be found that compute acceptable policies, sometimes even with an error bound.

Queuing Network Example

MDPs with uncertain parameters appear in various application areas such as the control of queuing networks, maintenance problems, routing in computer networks, computer security, and computational finance. The figure above shows a small control problem for queuing networks. Upon arrival of a new task it has to be decided into which queue the entity is routed. The routing algorithm knows the number of customers in each queue but not the detailed service times.
In an MDP setting the rates are known; internal uncertainty is introduced by different realizations of the (exponentially distributed) service and inter-arrival times. With uncertain parameters the rates are not exactly known, introducing another source of external uncertainty.

Rectangular Parameter Uncertainty

The basic assumption of rectangular parameter uncertainty is that uncertainty occurs independently for every state-action pair (s,a). A typical example is the class of Bounded-Parameter MDPs (BMDPs), where transition probabilities and/or rewards are defined by intervals rather than scalars. For rectangular uncertainty, the robust policy, which maximizes the reward under the worst possible realization of uncertainty, can be computed with polynomial effort. However, the computational effort is still much higher than that required to compute optimal policies for MDPs with known parameters. In particular, solution approaches based on linear programming cannot be applied to the analysis of BMDPs or related models.

BMDP Example

In our research on MDPs with rectangular uncertainty, mainly BMDPs have been analyzed. The specification of BMDPs with a class of Stochastic Petri Nets has been published in [1]; [6] describes the application of BMDPs to maintenance problems. In [4] numerical methods to compute robust policies for BMDPs are compared, and [3] describes the relation between the aggregation of large MDPs and the use of BMDPs. Approximate and exact methods for two NP-hard problems are considered in [2] and [7]. In [2] the Pareto frontier of policies that are not dominated with respect to their worst-, average- and best-case behavior is approximated. [7] introduces several algorithms to determine policies that are optimal according to one criterion, like the average or worst case, and guarantee a minimal reward for the other criterion.

Scenario-Based Parameter Uncertainty

In the scenario-based approach, uncertainty is represented by finitely many scenarios.
This allows one to consider correlations between different parameters of the model or to sample from a set of parameter distributions. The goal is then to find a policy which maximizes the weighted sum of rewards over all scenarios. The resulting optimization problem has been shown to be NP-hard [1].

Example of Concurrent MDPs with two Scenarios

The class of Concurrent MDPs (CMDPs) has been defined in [2,8]. Paper [8] also presents several methods to compute or approximate optimal policies. The approach is extended to a new class of policies and decision-dependent rewards in [9].

1. Marco Beccuti, Elvio Gilberto Amparore, Susanna Donatelli, Dimitri Scheftelowitsch, Peter Buchholz, Giuliana Franceschinis: Markov Decision Petri Nets with Uncertainty. EPEW 2015: 177-192
2. Dimitri Scheftelowitsch, Peter Buchholz, Vahid Hashemi, Holger Hermanns: Multi-Objective Approaches to Markov Decision Processes with Uncertain Transition Parameters. VALUETOOLS 2017: 44-51
3. Peter Buchholz, Iryna Dohndorf, Alexander Frank, Dimitri Scheftelowitsch: Bounded Aggregation for Continuous Time Markov Decision Processes. EPEW 2017: 19-32
4. Peter Buchholz, Iryna Dohndorf, Dimitri Scheftelowitsch: Analysis of Markov Decision Processes Under Parameter Uncertainty. EPEW 2017: 3-18
5. Dimitri Scheftelowitsch: Markov decision processes with uncertain parameters. Dissertation, Technical University of Dortmund, Germany, 2018
6. Peter Buchholz, Iryna Dohndorf, Dimitri Scheftelowitsch: Time-Based Maintenance Models Under Uncertainty. MMB 2018: 3-18
7. Peter Buchholz, Dimitri Scheftelowitsch: Light robustness in the optimization of Markov decision processes with uncertain parameters. Computers & OR 108: 69-81 (2019)
8. Peter Buchholz, Dimitri Scheftelowitsch: Computation of weighted sums of rewards for concurrent MDPs. Math. Meth. of OR 89(1): 1-42 (2019)
9. Peter Buchholz, Dimitri Scheftelowitsch: Concurrent MDPs with Finite Markovian Policies.
Accepted for MMB 2020.

A collection of Matlab functions which implements the algorithms from [8] and [9] can be downloaded (please cite the above references if you use the software).
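To make the rectangular (interval) case concrete, here is a small sketch of robust value iteration for a BMDP. It is an illustration with made-up numbers, not code from the cited papers or the downloadable Matlab package. The inner step picks, within the given probability intervals, the admissible distribution that is worst for the current value estimate:

```python
def worst_case_dist(lo, up, values):
    # Choose the admissible distribution (within [lo[i], up[i]] per successor,
    # summing to one) that minimizes the expected value: fill upward from the
    # lowest-valued successors. This greedy step is the standard inner
    # problem for rectangular (interval) uncertainty.
    p = list(lo)
    remaining = 1.0 - sum(lo)
    for i in sorted(range(len(lo)), key=lambda i: values[i]):
        extra = min(up[i] - lo[i], remaining)
        p[i] += extra
        remaining -= extra
    return p

def robust_value_iteration(P_lo, P_up, R, gamma=0.9, sweeps=500):
    # P_lo[s][a][t] and P_up[s][a][t] bound the probability of moving
    # from state s to state t under action a; R[s][a] is the reward.
    n = len(R)
    V = [0.0] * n
    for _ in range(sweeps):
        V = [max(R[s][a] + gamma * sum(
                     p * V[t]
                     for t, p in enumerate(
                         worst_case_dist(P_lo[s][a], P_up[s][a], V)))
                 for a in range(len(R[s])))
             for s in range(n)]
    return V

# Toy 2-state example (illustrative numbers): action 1 in state 0 pays more
# but has uncertain transitions into the absorbing, zero-reward state 1.
P_lo = [[[0.9, 0.1], [0.2, 0.5]],
        [[0.0, 1.0], [0.0, 1.0]]]
P_up = [[[0.9, 0.1], [0.5, 0.8]],
        [[0.0, 1.0], [0.0, 1.0]]]
R = [[1.0, 2.0], [0.0, 0.0]]
V = robust_value_iteration(P_lo, P_up, R)  # robust value per state
```

The robust policy here prefers the safe action 0 in state 0: under the worst admissible distribution, action 1's extra reward is eaten by the chance of falling into the zero-reward state.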
Introduction to FPGA – The Difference Between FPGA and Processors

In the last part of this article, we went over the ins and outs of FPGAs and how they allow for the design of various logic circuits using basic digital resources. But what really sets FPGAs apart from processors? In this part, I'll go over the distinctions between FPGAs and processors and break down the pros and cons of each.

The Difference Between FPGA and Processors

FPGAs have a key advantage over processors in terms of speed when performing processing and control tasks. They can usually implement a given algorithm much faster than a processor. There are two main reasons why FPGAs can perform high-speed logic and mathematical operations. First, FPGAs can perform many different operations in parallel, limited only by the available hardware resources. For instance, if various algorithms, or even different parts of a particular algorithm, do not depend on each other's results, they can be performed simultaneously. This is not possible in processors. Processors have a CPU with specific commands, and only one command can be executed at a time. To perform an algorithm, it must be programmed with predetermined instructions of the CPU, which will then execute them one by one. The figure below provides a conceptual illustration of this point. In the figure, the C1 command is meant to be executed on the D1 data, while the C2 command is meant to be executed on the D2 data. In a processor, only one of these commands can be executed on the desired data at any given moment. But in an FPGA, the circuit can be designed in parallel to perform both commands simultaneously on the desired data. Another reason FPGAs can achieve faster processing speeds is the ability to customize the design for each algorithm. As previously mentioned, a processor developer must implement each algorithm using a limited number of predefined commands for the processor.
Thus, they have little flexibility in optimizing the algorithm's execution time, except by redefining different parts of the algorithm with fewer commands. However, an FPGA designer can create an optimized circuit for each algorithm from the ground up. This means that the digital designer can design and implement a specific and optimized CPU to run each part of the algorithm. A Few Disadvantages of FPGAs One drawback of FPGAs is that implementing complex mathematical algorithms on them may consume a lot of hardware resources. Therefore, sometimes, it may not be practical to implement all processing parts of a system on an FPGA. As a result, algorithms with low to medium complexity that require high execution speed are typically implemented on FPGAs, while more sophisticated algorithms are implemented on DSP processors. Another disadvantage of FPGAs compared to processors is that designing and debugging them can be a lengthy process. Debugging FPGA designs is time-consuming and can ultimately increase the overall project's time to market and cost. Therefore, it's essential to carefully divide the processing and control tasks between FPGA and processors in a complex project to avoid any additional time or cost on the project. When to Use FPGAs? When implementing a system, a digital designer must determine which technology to use for each part of the system. Since FPGAs are typically much more expensive than various types of processors, it is not cost-effective to use FPGAs when a processor such as ARM or DSP can be used instead. Additionally, as previously noted, designing and debugging with FPGAs takes longer than with processors, leading to increased costs. Therefore, it's essential to have a proper standard for selecting implementation technology to save time and money. Time Required to Run a Processing Algorithm To decide whether to use FPGAs or processors, it's important to carefully examine the expected specifications of each system. 
For instance, FPGAs are often used in real-time systems, which must run processing algorithms and control functions quickly to mimic natural phenomena. For example, in an active radar system that transmits signals and detects targets using the time it takes for echoes of the transmitted signal to return to the radar, the system must process received signals very rapidly. To implement this system, digital designers must consider the time and amount of hardware resources (computation complexity) required to perform the processing associated with target detection. This question can have different answers, even in similar systems. In the active radar example, if the signals are transmitted every millisecond, the radar has one millisecond to run the processing and control algorithms required for detection after receiving the echo of any transmission. Therefore, designers must determine whether a processor like AVR or DSP can run the processing algorithm within one millisecond. If not, an FPGA may be necessary. To analyze larger processing systems, designers use a standard unit called FLOP, or Floating-Point Operation, which indicates how many standard floating-point operations a processing algorithm needs to perform on one set of inputs. For example, computing an N-point FFT algorithm requires Nlog2N floating-point operations. Thus, a 128-point FFT requires 896 floating-point operations. By calculating the FLOP for different parts of the processing algorithm of a large system, a digital designer can calculate the total number of FLOPs needed. Depending on the requirements of a particular project, it's crucial to determine in what amount of time the processing algorithm should run. Dividing the number of processing operations needed by the time required to run it yields another critical value called FLOPS, or Floating-Point Operation per Second. This value can directly determine which processor is necessary for implementation. 
For example, if the 128-point FFT algorithm must run in 100 microseconds, the required FLOPS would be 8.96 million floating-point operations per second. Some applications may require the simultaneous use of multiple DSP processors and FPGA chips to achieve the needed FLOPS.
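The FLOP and FLOPS arithmetic above can be written down directly. A small sketch (the function names are mine, not from any particular toolchain):

```python
import math

def fft_flop_count(n):
    # The article's rough N*log2(N) operation count for an N-point FFT.
    return n * math.log2(n)

def required_flops(operation_count, deadline_seconds):
    # Operations that must complete per second to meet the deadline.
    return operation_count / deadline_seconds

ops = fft_flop_count(128)            # 896 floating-point operations
rate = required_flops(ops, 100e-6)   # 8.96 million FLOPS for a 100 us deadline
```

Comparing `rate` against a candidate device's sustained throughput is the selection criterion the article describes.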
Bridges Studies Break-up of Atmospheric Chemicals

You Break it, You Understand It

PSC, XSEDE Help Quantum Chemists Understand Break-up of Atmospheric Chemicals

March 6, 2018

Why It’s Important: Hydrazine and samarium are two chemicals with little in common, other than that humans sometimes introduce them into the atmosphere. Hydrazine is a rocket fuel. Samarium is an element that, when it reacts with oxygen atoms in the upper atmosphere, can damp down plasma fluctuations caused by solar activity. The U.S. Air Force is interested in both of them. Hydrazine, because they’d like to remotely identify its combustion products when the Bad Guys launch a rocket. Samarium, because if a national crisis ever coincided with sunspot activity, it would be nice to release a small amount into the ionosphere to maintain vital communications. To better understand the chemistry of these very different substances and how they will act once in the atmosphere, the Air Force Office of Scientific Research engaged quantum chemist Peter Armentrout and his colleagues at the University of Utah.

“For [the hydrazine] work, we were engaged by ENSCO Corp, which contracts with the Air Force. ENSCO was interested in modeling hydrazine, its compounds and IR signatures, so any information they could obtain that might help benchmark their modeling was useful. That way they could model other species potentially important in the launches so that all species formed might be identified. If all you have is the recorded signature, then you don’t necessarily know what species you are looking at. And if the bad guys use a different fuel mix, then the species formed could be very different.

“The Air Force has done three trial releases of samarium into the ionosphere. The idea is that samarium atoms react with oxygen atoms to form the SmO molecule with enough energy to lose an electron, forming a samarium oxide cation, and both charged particles mediate plasma fluctuations.
There aren’t many molecules [that can do this].” —P. B. Armentrout, University of Utah

Armentrout and his team used a simple method to better understand both hydrazine and samarium compounds: they broke them. By studying the energy required to split the chemical bonds holding them together in the lab, and the way they interacted with components in the atmosphere, the Utah scientists could compare the results to what chemists thought they knew about both. But they needed intensive computing resources to simulate the exact results predicted by the theory. They turned to the XSEDE-allocated supercomputers Bridges at PSC and Comet at the San Diego Supercomputer Center.

How PSC and XSEDE Helped: Bridges and Comet brought to the table a computational capacity that even a large university “cluster” supercomputer couldn’t match. Both studies employed the regular-memory and large-memory (LM) nodes of Bridges. The complex, multi-faceted “matrix” calculations needed for the work would not have been possible on other supercomputers. The 3-terabyte LM nodes offered a unique ability to provide “spare room” for the computations, avoiding the data traffic jam that would have occurred with more conventional 386-gigabyte (0.386-terabyte) nodes.

The bond between the two nitrogen atoms in hydrazine rocket fuel (left) broke to produce fragments that weren’t at the lowest possible energy level. While the result was a little surprising, a comparison with the theory via Bridges helped explain it.

In the case of hydrazine, the bond between the two nitrogen atoms in the rocket fuel broke to produce fragments that weren’t at the lowest possible energy level. That was a little surprising, because generally molecules find the lowest level that they can. But the simulations on Bridges and Comet explained the result. In not going to the lowest level possible, the fragments had maintained the orbitals of the electrons within the fragments.
This conserved the character of the “binding” electrons that had kept the nitrogens together. “We turned to [Bridges] because … the electron-correlation calculations … [were] fundamentally just too big to do on a traditional supercomputer … We wanted to do a really high-level calculation, as good … as one could conceivably do. PSC and XSEDE allowed us to do that pretty routinely.”—P. B. Armentrout, University of Utah In the case of samarium, the picture wasn’t as clear. Working with computational chemistry specialist Kirk Peterson of Washington State University, the team studied a compound similar to samarium oxide, samarium sulfide (SmS). They started with positively charged SmS^+ cations, because generation of the oxygen-containing equivalent, SmO^+, in the ionosphere is an important part of samarium’s ability to reduce plasma fluctuations. Also, the cations can be easily accelerated and detected in the lab (unlike the neutral SmS molecules). Finally, chemists understand a lot about the relationship between SmS and SmS^+, so the results with the cation will tell them much about neutral SmS as well. By hitting SmS^+ with various components of the atmosphere and comparing the lab results with simulations on Bridges and Comet, the scientists found a discrepancy between measurement and theory. While initially a setback, the result will allow scientists to adjust their computational approaches to better predict the properties of heavy elements like Sm. This will improve their ability to use such elements in maintaining critical communications. The scientists published their results in two papers. They reported the hydrazine results in the Journal of Chemical Physics in September 2017, and the samarium results in the same journal the following December.
Pi Day (2022/03/14)

Piaxid Monochromatic 1/4

Pi is my official and favorite constant, and today, 3/14, is the day when the Pi constant is celebrated.

Let \(C\) and \(d\) respectively be the circumference and diameter of a circle. Then, \(\pi\) is the ratio of the circle’s circumference to its diameter, that is:

\[\pi = \frac{C}{d}\]

Next, I will dedicate some of my best words to Pi:

Pi is an abstraction that is always found in nature, and so are circles. The Pi abstraction is defined by, but not limited to, circles, just like the open balls of the usual Euclidean topological basis that populate a whole topological space. The circle represents perfection as it encloses all the Pi decimals. Axioms, atomic ideal models, celestial bodies being shaped into spheres due to the gravitational implication, even fruits and beauty are all representations of this perfection. Smooth curves represent beauty as they consist of continuous functions with continuous derivatives. This leads to functionally creating a cohesive and properly articulated whole. I have been working under this philosophy set for 5 years, and it is what I am. I’ll keep writing about Pi from now on.

— Pi Day (2022/03/14)

Tobias Briones
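A quick numerical illustration of the constant defined above (a sketch, not from the original post): \(\pi\) can be estimated by sampling the unit square, since the quarter of the unit disc inside it covers \(\pi/4\) of its area.

```python
import math
import random

def estimate_pi(samples, seed=0):
    # Monte Carlo: the fraction of random points (x, y) in [0,1)^2 that fall
    # inside the unit circle approaches pi/4 as the sample count grows.
    rng = random.Random(seed)
    inside = sum(1 for _ in range(samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / samples

approx = estimate_pi(200_000)  # close to math.pi, improving with more samples
```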
In my earlier post ‘measuring it out‘, I wondered how fractal the Welsh coast is. Shortly after writing ‘measuring it out’, I found a GPX file for the Welsh Coastal path at the Long Distance Walkers Association (LDWA) site, and realised, as well as being useful for many purposes, it would enable me to precisely measure the fractal dimension of the Welsh coast (or at least the coast path).

Mandelbrot’s original work on fractals^1 was motivated by the question “How long is the coast of Britain?”. The problem, known as the “Coastline Paradox“, is that as you look at smaller and smaller scales, you see smaller and smaller features and hence get a larger value for the length. He suggests, based on earlier data of the cartographer Richardson^2, that coastlines are self-similar – each magnified part has similar properties to the whole. In Mandelbrot’s own words:

“Geographical curves are so involved in their detail that their lengths are often infinite or, more accurately, undefinable. However, many are statistically ‘‘self-similar,’’ meaning that each portion can be considered a reduced-scale image of the whole. In that case, the degree of complication can be described by a quantity D that has many properties of a ‘‘dimension,’’ though it is fractional. In particular, it exceeds the value unity associated with ordinary curves”

The characteristic feature of a true fractal is that the measured length increases inversely as a power of the length of the ruler used to do the measuring. A smaller ruler means a bigger length, but with the precise rule:

L = K × R^(1−D)

The constant D is called the fractal dimension of the curve. In the case of a smooth line or curve it is precisely 1, in the case of a space-filling curve such as the Hilbert curve it is 2, and for the Koch ‘snowflake’ it is approximately 1.262^3. If you then plot fractal curve length vs. ruler length on a log-log plot, you see a straight line with slope 1−D.
Mandelbrot, using Richardson’s data, and others have found that the fractal dimension of the coastlines of countries varies between about 1 and 1.3, with the fractal dimension of the west coast of Britain (a quite intricate coast) about 1.25.

The GPX file on the Long Distance Walkers Association (LDWA) site has 14500 points marked over its 870 mile length; that is, on average, a point approximately every 100 yards (or metres). This is almost certainly at a finer scale than Richardson considered in his work, which, according to Shelberg et al.^4, used ruler lengths of between 10 km and 1000 km for measurements. In 1961 Richardson had to set dividers to a scaled length and then walk the length of detailed maps. In 1982, Shelberg and colleagues had to hand-digitise coastlines and then analysed them using computers one million times slower than those today. Doing the same analysis now, the slowest thing is parsing the XML of the GPX file to get the long-lat coordinates of each point; it is easy to get rapid results at a far finer scale.

So, is the Welsh coast fractal, and if so what is its dimension? The figure below shows a log-log plot of the measured length against the virtual ‘ruler’ length from 1 mile (log = 0) to 32 miles (log ~ 1.5). The red line has a slope of −0.2, suggesting a fractal dimension of 1.2, a little smoother than the whole British west coast (which includes the incredibly intricate west coast of Scotland), but toward the upper range of typical coastline fractal dimensions.

So, looking at this, yes, the Welsh coast path is fractal with a dimension of 1.2 … except …

… the average distance between points in the GPX trail is about 100m, with the smallest features around 25m. The graph above only shows the upper end of distances above 1 mile; if the fractal analysis is extended as fine as possible, the curve starts to look very different. Here is the full graph including the ‘ruler’ lengths below one mile.
This now looks very different, with the points curving down to the left (smaller rulers), well below the straight line fitted to the larger ruler distances. Now, the first point or two represent features below the average point distance on the digitised GPX trail. These are probably places where the path is particularly curvy, so are probably reasonable to include. However, to be on the safe side, ignore the first two or three points. Still, the graph is far from a straight line; that is, it appears not to be truly fractal at all, smoother at fine scales than a true fractal would be.

Given the iconic nature of coastlines in the early fractal literature, this seems a little surprising, and indeed one can start to think of reasons for this. Paths by their nature probably smooth out features. However, this is the coast path, and following it on the official maps, it clearly does hug the coast as closely as possible; smoothing may happen at distances of a hundred yards or so (the bottom two or three points on the graph), but not at distances between a few hundred yards and a mile, which are already dipping well below the straight line. Of course, the GPX file may not be accurate and may ‘smooth’ the data; however, the inaccuracies needed would be several hundred yards, a long way off the path for a walker.

With this discrepancy in mind I looked in more detail at the Shelberg (1982) paper. While Mandelbrot’s Science article simply quotes Richardson’s data, Shelberg’s shows graphs of their results using the example of Kodiak Island in Alaska (a particularly intricate coastline!) with a 1653-point digitisation. The actual points are (in Shelberg et al’s words) “rainbow” shaped, and do not perfectly fit the line.
Following a slightly handwaving argument, the authors decide to effectively ignore the smaller-scale data points, leading to a reduced range (recomputed, but effectively the right-hand side of the above graph), which, like the Welsh coast data, looks closer to a straight line … but of course any smooth curve, if looked at closely enough, ends up nearly straight.

I did wonder whether perhaps this deviation of coastlines from ‘pure’ fractal is commonly accepted, but on the Wolfram page for the Coastline Paradox, it still says:

“a coastline is an example of a fractal, and plotting the length of the ruler versus the measured length of the coastline on a log-log plot gives a straight line, the slope of which is the fractal dimension of the coastline”

It is clear that the early authors were not intending to ‘massage’ their data, just that they came with a preconceived notion that the coastline will be fractal and assumed that deviations are in some way errors, deviations, or noise. Having removed the problematic data, the assumption is ‘verified’, deepening the folklore of the discipline.

Similar self-confirming academic folklore is common. Indeed the same kind of graph with a ‘fitted’ line is common throughout the web self-similarity literature and, in HCI, Fitts’ Law leads to very similar misconceptions. As with all folklore, these academic myths are based on truth: in this case coasts do get more wiggly as one looks in more detail, fractal (or at least fractal-like) shapes do occur in nature, and power laws do arise in many circumstances – just not everywhere and not perfectly. Power laws are a good meme, but the world does not always comply!
I'm not much of a math wiz but the Algebrator helps me out with fractions and other stuff I need to know for my math appreciation class in college. Layla Richards, TX I got 95% on my college Algebra midterm which boosted my grade back up to an A. I was down to a C and worried when I found your software. I credit your program for most of what I learned. Thanks for the quick reply. Jessica Flores, FL Step-by-step, Algebrator has made algebra as easy as memorizing the multiplication tables! It is just impossible that I would be doing so well academically, and feel so confident about myself, if not for this program! It changed my life! Jenny Lane, AL I was just fascinated to see human-like steps to all the problems I entered. Remarkable! Carl J. Oldham, FL This is great, finishing homework much faster! Clara Johnson, ND
2005 Sprinter T1N 413cdi no communication and ABS issues 1 04-17-2024, 04:08 PM (This post was last modified: 04-17-2024, 09:44 PM by UrbanAuto.) Hi guys, I tried searching using the search function but I could not find anything of any resemblance. I hope this post complies with the rules. We have a 2005 413cdi Sprinter here. There is no communication using Launch/Topdon/Snapon diagnostic machines. The problem on hand is related to the ABS system. Braking is compromised as the ABS kicks in after a few meters of driving. The brakes are weak and pedal vibrates when braking (ABS engages all the time). We know the ABS pumps fail on these but we had it tested and it is good. Any ideas on where to go with it would be appreciated. Thanks given by: Disconnect the hydraulic group unit (ABS/ESP...) and check if the brake behaves as it should, that is: like a traditional brake. Thanks given by: (04-17-2024, 04:28 PM)gcs190 Wrote: Disconnect the hydraulic group unit (ABS/ESP...) and check if the brake behaves as it should, that is: like a traditional brake. They do, this is what we did for the customer so he could use it for a while. We want to get this fixed now. Thanks given by: Any chance anyone can point me towards a wiring diagram for this model? Thanks given by: (04-17-2024, 04:08 PM)UrbanAuto Wrote: Hi guys, I tried searching using the search function but I could not find anything of any resemblance. I hope this post complies with the rules. We have a 2005 413cdi Sprinter here. There is no communication using Launch/Topdon/Snapon diagnostic machines. The problem on hand is related to the ABS system. Braking is compromised as the ABS kicks in after a few meters of driving. The brakes are weak and pedal vibrates when braking (ABS engages all the time). We know the ABS pumps fail on these but we had it tested and it is good. Any ideas on where to go with it would be appreciated. 
You have a fault with an ABS sensor / reluctor ring or air gap. Scope each of the sensors while spinning the wheels or check the live data while driving in a straight line.
Thanks given by:

This is what we think, yes. As there is no comms we cannot look at the live data and as we have no wiring diagram we cannot scope the sensors. I am not sure if this makes a difference at all but this is an old, ex-police van.
Thanks given by:

(04-17-2024, 11:13 PM)UrbanAuto Wrote: This is what we think, yes. As there is no comms we cannot look at the live data and as we have no wiring diagram we cannot scope the sensors. I am not sure if this makes a difference at all but this is an old, ex-police van.

In this instance, you're better off getting Star and being sure that it's not your equipment to begin with. Then check the K line to the plug or just pin it out; this will make diagnosis of live data and fault codes easier. Where are you?
Thanks given by:

We're based in Essex. Although we cannot rule it out we never had any issues with Sprinters or Vitos using our diagnostics. We need the wiring diagram to carry out the tests.
Thanks given by:

(04-18-2024, 04:54 PM)UrbanAuto Wrote: We're based in Essex. Although we cannot rule it out we never had any issues with Sprinters or Vitos using our diagnostics. We need the wiring diagram to carry out the tests.

pm me
Thanks given by:
Using add and plus (Timed) - Addition Maths Games for Year 1 (age 5-6) by URBrainy.com

Using add and plus (Timed)
Recognising add and plus to add two small numbers.
HVAC sensible and latent cooling numbers, calculator results versus equipment specs

Looking to size a new heat pump for my home, I've used two online calculators. They give pretty similar results. For cooling, the former gives "total" load and "latent" load, while the latter gives "total" load and "sensible" load. I understand that "sensible" cooling removes heat from the air without changing the moisture content, and "latent" cooling dehumidifies the air by causing condensation. The two calculators simply choose to display these numbers in different ways. But I'm confused about evaluating a system's specs to determine if it can satisfy the requirements specified by the calculator. I've attached a page from the tech document for a system I'm considering.

Look at the box containing the yellow-highlighted numbers. Note that for the constant indoor dry-bulb temperature of 75 degrees, as the air gets dryer (so the wet-bulb temperature drops from 71 to 59 degrees as the humidity drops from 87% to 58%), the total number (labeled "capacity") stays roughly the same, but the "S/T" number (sensible-to-total ratio) increases from 0.5 to 1.0. So for the 71-degree wet-bulb column, the sensible number (capacity x S/T) is WAY too small (compared to the "sensible" result from the calculators). Similarly, if I look at the 59-degree wet-bulb column, the latent number (capacity x (1-S/T)) is way too small (in fact, it's zero).

Should I take all this to mean that I can use the 71-degree column for determining if the system's latent capacity is sufficient and the 59-degree column for determining if the system's sensible capacity is sufficient? Put another way, if initially the air in the house is humid, the system dehumidifies the air, and then once the RH% has dropped into the 50s, it concentrates on actually cooling the air?
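If the table works the way described, the sensible/latent split is just arithmetic on the S/T ratio. A sketch with made-up numbers (not taken from the attached spec sheet):

```python
def split_capacity(total_btuh, s_t_ratio):
    # Sensible capacity is total * S/T; latent capacity is the remainder.
    sensible = total_btuh * s_t_ratio
    latent = total_btuh * (1.0 - s_t_ratio)
    return sensible, latent

# Hypothetical 36,000 BTU/h unit: at 71 F wet bulb half the capacity goes
# to dehumidification; at 59 F wet bulb all of it is sensible cooling.
humid = split_capacity(36000, 0.5)   # (18000.0, 18000.0)
dry = split_capacity(36000, 1.0)     # (36000.0, 0.0)
```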
Matrix Transformations - Documentation - Unigine Developer Matrix Transformations A lot of calculations in Unigine are performed using matrices; in fact, matrix transformations are one of the main concepts of 3D engines. This article explains matrix transformations and gives usage examples. See Also • An article about mat4 and dmat4 Unigine data types. • The description of Math Matrix Functions. In simple terms, a matrix in 3D graphics is an array of numbers arranged in rows and columns. Usually, 4x4 matrices are used; this size is needed to represent translation in 3D space. When you put a new node into the world, it has a 4x4 world transform matrix that defines the position of the node in the world. In Unigine, matrices are column-major (column-oriented). Hence, the first column of the transform matrix represents the X vector v1, the second the Y vector v2, the third the Z vector v3, and the fourth the translation vector t. The first three columns give the directions of the axes and the scale of the origin. The last column contains the translation of the local origin relative to the world origin. Identity Matrix The world origin has the identity matrix: a matrix with ones on the main diagonal and zeros elsewhere. Multiplying a matrix by the identity matrix changes nothing: the resulting matrix is the same as it was before the multiplication. If the local origin has the identity matrix, the local origin and the world origin coincide. To change the orientation of the local origin, the first three columns of the matrix should be changed. To rotate the origin around different axes, you should use the corresponding rotation matrices. In these matrices, α is the rotation angle around the axis.
The next matrix shows the rotation of the local origin around the Y axis by 45 degrees. The last column of the transform matrix shows the position of the local origin in the world relative to the world origin. The next matrix shows the translation of the origin; the translation vector t is (3, 0, 2). The length of each axis vector gives the scale coefficient along that axis. To calculate the vector length (also known as magnitude), take the square root of the sum of the squares of the vector's components: |vector length| = √(x² + y² + z²) The following matrix scales the local origin by a factor of 2 along all axes. Cumulating Transformations The order of matrix transformations (scaling, rotation and translation) is very important. The order of cumulating transformations is the following: 1. Translation 2. Rotation 3. Scaling This is the formula of the transformation order: TransformedVector = TranslationMatrix * RotationMatrix * ScaleMatrix * Vector Here is an example which demonstrates different positions of the local origin relative to the world origin. Translation * Rotation order Rotation * Translation order On the left picture, the local origin was translated first and then rotated; on the right picture, the local origin was rotated first and then translated. All values (rotation angle, translation vector) are the same, but the result is different. This example shows what happens if you choose another order of matrix transformations. The code example below gets the material ball object. In the first case we use the translation * rotation order; in the second case we use the rotation * translation order. In the AppWorldLogic.h file, define the material_ball node smart pointer.

// AppWorldLogic.h
/* ... */
class AppWorldLogic : public Unigine::WorldLogic {
	/* ... */
	Unigine::NodePtr material_ball;
};

In the AppWorldLogic.cpp file, perform the following: • Include the UnigineEditor.h, UnigineVisualizer.h, UnigineConsole.h headers.
• Use the using namespace Unigine and using namespace Unigine::Math directives: the names of the Unigine and Unigine::Math namespaces will be injected into the global namespace. • Enable the visualizer by passing the show_visualizer 1 command to the run() function of the Console class. • Get the material ball from the Editor. • Create new rotation and translation matrices. • Calculate the new transformation matrix and apply it to the material ball. • Render the world origin by using the renderVector() method of the Visualizer class.

// AppWorldLogic.cpp file
#include "AppWorldLogic.h"
#include "UnigineEditor.h"
#include "UnigineVisualizer.h"
#include "UnigineConsole.h"

// inject Unigine and Unigine::Math namespace names into the global namespace
using namespace Unigine;
using namespace Unigine::Math;

/* ... */

int AppWorldLogic::init() {
	// enable the visualizer for world origin rendering
	Console::get()->run("show_visualizer 1");

	// get the material ball from the editor
	material_ball = Editor::get()->getNodeByName("material_ball");

	// create rotation and translation matrices
	Mat4 rotation_matrix = (Mat4)rotateZ(-90.0f);
	Mat4 translation_matrix = (Mat4)translate(vec3(0.0f, 3.0f, 0.0f));

	// create a new transformation matrix for the material ball
	// by multiplying the current matrix by the rotation and translation matrices
	Mat4 transform = material_ball->getTransform() * translation_matrix * rotation_matrix;

	// set the transformation matrix to the material ball
	material_ball->setTransform(transform);

	return 1;
}

int AppWorldLogic::update() {
	// render the world origin
	Visualizer::get()->renderVector(Vec3(0.0f, 0.0f, 0.1f), Vec3(1.0f, 0.0f, 0.1f), vec4(1.0f, 0.0f, 0.0f, 1.0f));
	Visualizer::get()->renderVector(Vec3(0.0f, 0.0f, 0.1f), Vec3(0.0f, 1.0f, 0.1f), vec4(0.0f, 1.0f, 0.0f, 1.0f));
	Visualizer::get()->renderVector(Vec3(0.0f, 0.0f, 0.1f), Vec3(0.0f, 0.0f, 1.1f), vec4(0.0f, 0.0f, 1.0f, 1.0f));

	return 1;
}

To change the order, just change the line of cumulating transformations:

	Mat4 transform = material_ball->getTransform() * rotation_matrix * translation_matrix;

The
result will be different. The pictures below show the difference (the camera is located at the same place). Translation * Rotation order Rotation * Translation order The pictures above show the position of the meshes relative to the world origin. Matrix Hierarchy Another important concept is the matrix hierarchy. When a node is added into the world as a child of another node, it has a transform matrix that is relative to the parent node. That is why the Node class has different functions: getTransform(), setTransform() and getWorldTransform(), setWorldTransform(), which access the local and the world transformation matrices respectively. If the added node has no parent, this node uses the world transformation matrix. What is the reason for using a matrix hierarchy? To move a node relative to another node: when you move a parent node, its child nodes are moved with it. Parent origin is the same as the world origin Parent origin has been moved and the child origin has also been moved The pictures above show the main point of the matrix hierarchy. When the parent origin (node) is moved, the child origin is also moved: the local transformation matrix of the child does not change, but the world transformation matrix of the child does. If you need the world transformation matrix of the child relative to the world origin, use the getWorldTransform(), setWorldTransform() functions; when you need the local transformation matrix of the child relative to the parent, use the getTransform(), setTransform() functions. The following example shows how important the matrix hierarchy is. In this example, we get a node from the editor and clone it. Then we change the transformation matrices of these nodes, examining two cases: 1. when there are two independent nodes 2. when one node is the child of the other In the AppWorldLogic.h file, define the material_ball child and parent node smart pointers.
// AppWorldLogic.h
/* ... */
class AppWorldLogic : public Unigine::WorldLogic {
	/* ... */
	Unigine::NodePtr material_ball_child;
	Unigine::NodePtr material_ball_parent;
};

In the AppWorldLogic.cpp file, implement the following code:

// AppWorldLogic.cpp
#include "AppWorldLogic.h"
#include "UnigineEditor.h"
#include "UnigineVisualizer.h"
#include "UnigineConsole.h"
#include "UnigineLog.h"

using namespace Unigine;
using namespace Unigine::Math;

int AppWorldLogic::init() {
	// enable the visualizer for world origin rendering
	Console::get()->run("show_visualizer 1");

	// get the material ball from the editor and clone it
	material_ball_child = Editor::get()->getNodeByName("material_ball");
	material_ball_parent = material_ball_child->clone();

	// make the one node the child of another
	material_ball_parent->addChild(material_ball_child);

	// create rotation and translation matrices for the first material ball
	Mat4 rotation_matrix = (Mat4)rotateZ(-90.0f);
	Mat4 translation_matrix = (Mat4)translate(vec3(3.0f, 0.0f, 0.0f));

	// create a translation matrix for the second (parent) material ball
	Mat4 translation_matrix_clone = (Mat4)translate(vec3(0.5f, 0.0f, 1.0f));

	// create new transformation matrices for the material balls
	Mat4 transform = material_ball_child->getTransform() * translation_matrix * rotation_matrix;
	Mat4 transform_clone = material_ball_parent->getTransform() * translation_matrix_clone;

	// set the transformation matrices to the material balls
	material_ball_child->setTransform(transform);
	material_ball_parent->setTransform(transform_clone);

	return 1;
}

int AppWorldLogic::update() {
	// render the world origin
	Visualizer::get()->renderVector(Vec3(0.0f, 0.0f, 0.1f), Vec3(1.0f, 0.0f, 0.1f), vec4(1.0f, 0.0f, 0.0f, 1.0f));
	Visualizer::get()->renderVector(Vec3(0.0f, 0.0f, 0.1f), Vec3(0.0f, 1.0f, 0.1f), vec4(0.0f, 1.0f, 0.0f, 1.0f));
	Visualizer::get()->renderVector(Vec3(0.0f, 0.0f, 0.1f), Vec3(0.0f, 0.0f, 1.1f), vec4(0.0f, 0.0f, 1.0f, 1.0f));

	return 1;
}

int AppWorldLogic::shutdown() {
	// clear smart pointers
	material_ball_child.clear();
	material_ball_parent.clear();

	return 1;
}

If you comment the following line

	// make the one node the child of another
	material_ball_parent->addChild(material_ball_child);

you'll get another result: Parent-child nodes Nodes are independent When
nodes are independent, they have different local and world transformation matrices. In the case of parent-child nodes, the child's local transformation matrix remains the same after the parent is moved, but its world transformation matrix changes (you can check this by using the debug profiler). Last update: 2018-04-26
dice = 1:6 using Statistics μ = Statistics.mean(dice) sum(dice) / length(dice) Statistics.var(dice, corrected=false) sum((dice .- μ).^2 / length(dice)) sum((dice .- μ).^2 / (length(dice) - 1)) [1:6; 1:6] Statistics.cor([1:6; 1:6]) [1:6; 6:-1:1] Statistics.cor([1:6; 6:-1:1]) [1:6 6:-1:1] Statistics.cor([1:6 6:-1:1]) rand produces independent output, so we should see zero correlation. For small sample sizes we still see a bit of correlation rand(6, 2) Statistics.cor(rand(6, 2)) But as we increase the sample size, the correlation goes to zero Statistics.cor(rand(100, 2)) Statistics.cor(rand(100000, 2)) using Plots μ = 0 σ = 1 f(x) = exp(-1/2*((x - μ) / σ)^2) / (σ * √(2π)) x = range(-4.0, stop=4.0, length=1000) plot(x, f.(x)) z = randn(100) scatter!(z, f.(z), markersize=2) See Example 7.1 of Boyd & Vandenberghe "Convex Optimization" book. Suppose we observe $y_i = a_i^\top x + v_i$ where $v_i$ has a normal distribution $\mathcal{N}(\mu_i, \sigma_i^2)$. We want to recover $x$ from the observation $y_i$ such that the $v_i$ are likely to occur. That is, we want to maximize $f(v_i)$. This is multi-objective. Instead, we can maximize the likelihood that they all occur. Since they are independent, it's the product. $\prod_{i=1}^n f(v_i) = \prod_{i=1}^n \frac{1}{\sigma_i \sqrt{2\pi}}\exp(-\frac{(v_i - \mu_i)^2}{2\sigma_i^2})$ Since $\log$ is an increasing function, that is equivalent to maximizing the logarithm: $\sum_{i=1}^n \log(\frac{1}{\sigma_i \sqrt{2\pi}}\exp(-\frac{(v_i - \mu_i)^2}{2\sigma_i^2}))$ which is equal to $-\sum_{i=1}^n \log(\sigma_i) - \frac{n\log(2\pi)}{2} - \frac{1}{2}\sum_{i=1}^n \frac{(v_i - \mu_i)^2}{\sigma_i^2}$ The first two terms do not depend on $v_i$ so we can drop them. 
$-\frac{1}{2} \sum_{i=1}^n \frac{(v_i - \mu_i)^2}{\sigma_i^2}$ $-\frac{1}{2}$ is a negative constant so maximizing this expression is equivalent to minimizing $\sum_{i=1}^n \frac{(v_i - \mu_i)^2}{\sigma_i^2}$ In terms of $x$, this is $\min_x \sum_{i=1}^n \frac{(y_i - a_i^\top x - \mu_i)^2}{\sigma_i^2}$ If $\mu_i = 0$ and $\sigma_i$ does not depend on $i$ (same for all samples; we say they are independent and identically distributed (i.i.d.)), this gives $\min_x \lVert y - Ax \rVert_2$ where the $i$th row of $A$ is $a_i$. So the classical linear regression we saw during the first week assumes i.i.d. normal noise of zero mean. How should we interpret the scaling by $\sigma_i^2$ in terms of the influence of very noisy samples on $x$? n = 101 σ = [1000; ones(n - 1)] μ = 100rand(n) v = randn(n) .* σ .+ μ m = 10 x = rand(m) A = rand(n, m) y = A * x + v Without taking the noise into account, we get large errors: x - A \ y Taking $\mu$ into account can be done as follows: x - A \ (y - μ) How do we take $\sigma$ into account? We know that \ solves the least squares problem $\min_x \lVert y - Ax \rVert_2^2 = \min_x \sum_{i = 1}^n (y_i - a_i^\top x)^2$ but we want it to solve $\min_x \sum_{i = 1}^n (y_i - a_i^\top x - \mu_i)^2 / \sigma_i^2$ instead. How do we do this? The expression is equal to $\sum_{i = 1}^n (\frac{y_i - \mu_i}{\sigma_i} - \frac{a_i^\top}{\sigma_i} x)^2$ so we can just scale $y$ and $A$: x - (A ./ σ) \ ((y - μ) ./ σ) Now that's much better. This page was generated using Literate.jl.
Supported 61914 - math word problem (61914) A person lifts a stone weighing 285 kg with the help of a lever that is supported at a distance of 30 cm from the stone. A person acts on the lever at a distance of 90 cm from the support. How much force does a person have to use to lift a stone? Correct answer:
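The answer itself did not survive extraction, but the physics behind the problem is the standard lever balance: force times lever arm must be equal on both sides of the fulcrum. A short Python sketch (assuming g = 9.81 m/s² for the stone's weight):

```python
# Lever balance: weight * d_stone = force * d_person, so the person's
# force is the stone's weight scaled by the ratio of lever arms.
m = 285          # stone mass, kg
g = 9.81         # gravitational acceleration, m/s^2 (assumed value)
d_stone = 0.30   # distance from support to stone, m
d_person = 0.90  # distance from support to person, m

weight = m * g                       # force of the stone on the lever, N
force = weight * d_stone / d_person  # force the person must apply, N
print(round(force, 2))  # 931.95
```

Because the person's arm is three times longer than the stone's, the required force is one third of the stone's weight.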
British computer scientist’s new “nullity” idea provokes reaction from mathematicians Monday, December 11, 2006 On December 7, BBC News reported a story about Dr James Anderson, a teacher in the Computer Science department at the University of Reading in the United Kingdom. In the report it was stated that Anderson had “solved a very important problem” that was 1200 years old, the problem of division by zero. According to the BBC, Anderson had created a new number that he had named “nullity”, which lay outside of the real number line. Anderson terms this number a “transreal number”, and denotes it with the Greek letter Φ. He had taught this number to pupils at Highdown School, in Emmer Green, Reading. The BBC report provoked many reactions from mathematicians and others. In reaction to the story, Mark C. Chu-Carroll, a computer scientist and researcher, posted a web log entry describing Anderson as an “idiot math teacher”, and describing the BBC’s story as “absolutely infuriating” and a story that “does an excellent job of demonstrating what total innumerate idiots reporters are”. Chu-Carroll stated that there was, in fact, no actual problem to be solved in the first place. “There is no number that meaningfully expresses the concept of what it means to divide by zero.”, he wrote, stating that all that Anderson had done was “assign a name to the concept of ‘not a number'”, something which was “not new” in that the IEEE floating-point standard, which describes how computers represent floating-point numbers, had included a concept of “not a number”, termed “NaN“, since 1985. Chu-Carroll further continued: “Basically, he’s defined a non-solution to a non-problem. And by teaching it to his students, he’s doing them a great disservice. They’re going to leave his class believing that he’s a great genius who’s solved a supposed fundamental problem of math, and believing in this silly nullity thing as a valid mathematical concept. 
“It’s not like there isn’t already enough stuff in basic math for kids to learn; there’s no excuse for taking advantage of a passive audience to shove this nonsense down their throats as an exercise in self-aggrandizement. “To make matters worse, this idiot is a computer science professor! No one who’s studied CS should be able to get away with believing that re-inventing the concept of NaN is something noteworthy or profound; and no one who’s studied CS should think that defining meaningless values can somehow magically make invalid computations produce meaningful results. I’m ashamed for my field.” There has been a wide range of other reactions to the BBC news story. Comments range from the humorous and the ironic, such as the B1FF-style observation that “DIVIDION BY ZERO IS IMPOSSIBLE BECAUSE MY CALCULATOR SAYS SO AND IT IS THE TRUTH” and the Chuck Norris Fact that “Only Chuck Norris can divide by zero.” (to which another reader replied “Chuck Norris just looks at zero, and it divides itself.”); through vigorous defences of Dr Anderson, with several people quoting the lyrics to Ira Gershwin‘s song “They All Laughed (At Christopher Columbus)”; to detailed mathematical discussions of Anderson’s proposed axioms of transreal arithmetic. 
Lecturers at Reading should stick to folk dancing and knitting, leaving academic subjects to grown ups.” wrote a fourth. Steve Kramarsky lamented that Dr Anderson is not from the “University of ‘Rithmetic“. Several readers criticised the journalists at the BBC who ran the story for not apparently contacting any mathematicians about Dr Anderson’s idea. “Journalists are meant to check facts, not just accept whatever they are told by a self-interested third party and publish it without question.” wrote one reader on the BBC’s web site. However, on Slashdot another reader countered “The report is from Berkshire local news. Berkshire! Do you really expect a local news team to have a maths specialist? Finding a newsworthy story in Berkshire probably isn’t that easy, so local journalists have to cover any piece of fluff that comes up. Your attitude to the journalist should be sympathy, not scorn.” Ben Goldacre, author of the Bad Science column in The Guardian, wrote on his web log that “what is odd is a reporter, editor, producer, newsroom, team, cameraman, soundman, TV channel, web editor, web copy writer, and so on, all thinking it’s a good idea to cover a brilliant new scientific breakthrough whilst clearly knowing nothing about the context. Maths isn’t that hard, you could even make a call to a mathematician about it.”, continuing that “it’s all very well for the BBC to think they’re being balanced and clever getting Dr Anderson back in to answer queries about his theory on Tuesday, but that rather skips the issue, and shines the spotlight quite unfairly on him (he looks like a very alright bloke to me).”. From reading comments on his own web log as well as elsewhere, Goldacre concluded that he thought that “a lot of people might feel it’s reporter Ben Moore, and the rest of his doubtless extensive team, the people who drove the story, who we’d want to see answering the questions from the mathematicians.”. 
Andrej Bauer, a professional mathematician from Slovenia writing on the Bad Science web log, stated that “whoever reported on this failed to call a university professor to check whether it was really new. Any university professor would have told this reporter that there are many ways of dealing with division by zero, and that Mr. Anderson’s was just one of known ones.” Ollie Williams, one of the BBC Radio Berkshire reporters who wrote the BBC story, initially stated that “It seems odd to me that his theory would get as far as television if it’s so easily blown out of the water by visitors to our site, so there must be something more to it.” and directly responded to criticisms of BBC journalism on several points on his web log. He pointed out that people should remember that his target audience was local people in Berkshire with no mathematical knowledge, and that he was “not writing for a global audience of mathematicians”. “Some people have had a go at Dr Anderson for using simplified terminology too,” he continued, “but he knows we’re playing to a mainstream audience, and at the time we filmed him, he was showing his theory to a class of schoolchildren. Those circumstances were never going to breed an in-depth half-hour scientific discussion, and none of our regular readers would want that.”. On the matter of fact checking, he replied that “if you only want us to report scientific news once it’s appeared, peer-reviewed, in a recognised journal, it’s going to be very dry, and it probably won’t be news.”, adding that “It’s not for the BBC to become a journal of mathematics — that’s the job of journals of mathematics. It’s for the BBC to provide lively science reporting that engages and involves people. And if you look at the original page, you’ll find a list as long as your arm of engaged and involved people.”. 
Williams pointed out that “We did not present Dr Anderson’s theory as gospel, although with hindsight it could have been made clearer that this is very much a theory and by no means universally accepted. But we certainly weren’t shouting a mathematical revolution from the rooftops. Dr Anderson has, in one or two places, been chastised for coming to the media with his theory instead of his peers — a sure sign of a quack, boffin and/or crank according to one blogger. Actually, one of our reporters happened to meet him during a demonstration against the closure of the university’s physics department a couple of weeks ago, got chatting, and discovered Dr Anderson reckoned he was onto something. He certainly didn’t break the door down looking for media coverage.”. Some commentators, at the BBC web page and at Slashdot, have attempted serious mathematical descriptions of what Anderson has done, and subjected it to analysis. One description was that Anderson has taken the field of real numbers and given it complete closure so that all six of the common arithmetic operators were surjective functions, resulting in “an object which is barely a commutative ring (with operators with tons of funky corner cases)” and no actual gain “in terms of new theorems or strong relation statements from the extra axioms he has to tack on”. Jamie Sawyer, a mathematics undergraduate at the University of Warwick writing in the Warwick Maths Society discussion forum, describes what Anderson has done as deciding that ℝ ∪ {−∞, +∞}, the so-called extended real number line, is “not good enough […] because of the wonderful issue of what 0/0 is equal to” and therefore creating a number system ℝ ∪ {−∞, Φ, +∞}. Andrej Bauer stated that Anderson’s axioms of transreal arithmetic “are far from being original. 
First, you can adjoin +∞ and −∞ to obtain something called the extended real line. Then you can adjoin a bottom element to represent an undefined value. This is all standard and quite old. In fact, it is well known in domain theory, which deals with how to represent things we compute with, that adjoining just bottom to the reals is not a good idea. It is better to adjoin many so-called partial elements, which denote approximations to reals. Bottom is then just the trivial approximation which means something like ‘any real’ or ‘undefined real’.” Commentators have pointed out that in the field of mathematical analysis, 0/0 (which Anderson has defined axiomatically to be Φ) is the limit of several functions, each of which tends to a different value at its limit:
• lim_{x→0} x/0 has two different limits, depending on whether x approaches zero from a positive or from a negative direction.
• lim_{x→0} 0/x also has two different limits. (This is the argument that commentators gave. In fact, 0/x has the value 0 for all x ≠ 0, and thus only one limit. It is simply discontinuous for x = 0. However, that limit is different to the two limits for lim_{x→0} x/0, supporting the commentators’ main point that the values of the various limits are all different.)
• Whilst sin 0 = 0, the limit lim_{x→0} (sin x)/x can be shown to be 1, by expanding the sine function as an infinite Taylor series, dividing the series by x, and then taking the limit of the result, which is 1.
• Whilst 1 − cos 0 = 0, the limit lim_{x→0} (1 − cos x)/x can be shown to be 0, by expanding the cosine function as an infinite Taylor series, dividing the series subtracted from 1 by x, and then taking the limit of the result, which is 0.
Commentators have also noted l’Hôpital’s rule. It has been pointed out that Anderson’s set of transreal numbers is not, unlike the set of real numbers, a mathematical field. Simon Tatham, author of PuTTY, stated that Anderson’s system “doesn’t even think about the field axioms: addition is no longer invertible, multiplication isn’t invertible on nullity or infinity (or zero, but that’s expected!). So if you’re working in the transreals or transrationals, you can’t do simple algebraic transformations such as cancelling x and −x when both occur in the same expression, because that transformation becomes invalid if x is nullity or infinity. So even the simplest exercises of ordinary algebra spew off a constant stream of ‘unless x is nullity’ special cases which you have to deal with separately — in much the same way that the occasional division spews off an ‘unless x is zero’ special case, only much more often.” Tatham stated that “It’s telling that this monstrosity has been dreamed up by a computer scientist: persistent error indicators and universal absorbing states can often be good computer science, but he’s stepped way outside his field of competence if he thinks that that also makes them good maths.”, continuing that Anderson has “also totally missed the point when he tries to compute things like 0^0 using his arithmetic. The reason why things like that are generally considered to be ill-defined is not because of a lack of facile ‘proofs’ showing them to have one value or another; it’s because of a surfeit of such ‘proofs’ all of which disagree! 
Adding another one does not (as he appears to believe) solve any problem at all.” (In other words: 0^0 is what is known in mathematical analysis as an indeterminate form.) To many observers, it appears that Anderson has done nothing more than re-invent the idea of “NaN“, a special value that computers have been using in floating-point calculations to represent undefined results for over two decades. In the various international standards for computing, including the IEEE floating-point standard and IBM’s standard for decimal arithmetic, a division of any non-zero number by zero results in one of two special infinity values, “+Inf” or “-Inf”, the sign of the infinity determined by the signs of the two operands (Negative zero exists in floating-point representations.); and a division of zero by zero results in NaN. Anderson himself denies that he has re-invented NaN, and in fact claims that there are problems with NaN that are not shared by nullity. According to Anderson, “mathematical arithmetic is sociologically invalid” and IEEE floating-point arithmetic, with NaN, is also faulty. In one of his papers on a “perspex machine” dealing with “The Axioms of Transreal Arithmetic” (Jamie Sawyer writes that he has “worries about something which appears to be named after a plastic” — “Perspex” being a trade name for polymethyl methacrylate in the U.K..) Anderson writes: We cannot accept an arithmetic in which a number is not equal to itself (NaN != NaN), or in which there are three kinds of numbers: plain numbers, silent numbers, and signalling numbers; because, on writing such a number down, in daily discourse, we can not always distinguish which kind of number it is and, even if we adopt some notational convention to make the distinction clear, we cannot know how the signalling numbers are to be used in the absence of having the whole program and computer that computed them available. 
So whilst IEEE floating-point arithmetic is an improvement on real arithmetic, in so far as it is total, not partial, both arithmetics are invalid models of arithmetic. In fact, the standard convention for distinguishing the two types of NaNs when writing them down can be seen in ISO/IEC 10967, another international standard for how computers deal with numbers, which uses “qNaN” for non-signalling (“quiet”) NaNs and “sNaN” for signalling NaNs. Anderson continues: [NaN’s] semantics are not defined, except by a long list of special cases in the IEEE standard. “In other words,” writes Scott Lamb, a BSc. in Computer Science from the University of Idaho, “they are defined, but he doesn’t like the definition.”. The main difference between nullity and NaN, according to both Anderson and commentators, is that nullity compares equal to nullity, whereas NaN does not compare equal to NaN. Commentators have pointed out that in very short order this difference leads to contradictory results. They stated that it requires only a few lines of proof, for example, to demonstrate that in Anderson’s system of “transreal arithmetic” both 1 = 2 and 1 ≠ 2, after which, in one commentator’s words, one can “prove anything that you like”. In aiming to provide a complete system of arithmetic, by adding extra axioms defining the results of the division of zero by zero and of the consequent operations on that result, half as many again as the number of axioms of real-number arithmetic, Anderson has produced a self-contradictory system of arithmetic, in accordance with Gödel’s incompleteness theorems. One reader-submitted comment appended to the BBC news article read “Step 1. Create solution 2. Create problem 3. PROFIT!”, an allusion to the business plan employed by the underpants gnomes of the comedy television series South Park. 
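The IEEE 754 behaviour discussed throughout this article (signed infinities, and a NaN that never compares equal to itself, unlike Anderson's nullity) can be observed directly. A small Python sketch; note that Python raises an exception on literal float division by zero, so the special values are constructed directly:

```python
import math

nan = float("nan")
inf = float("inf")

# NaN compares unequal to everything, including itself (IEEE 754),
# whereas Anderson defines nullity to equal itself.
print(nan == nan)   # False
print(nan != nan)   # True

# Indeterminate operations on infinities yield NaN rather than a number.
print(math.isnan(inf - inf))   # True
print(math.isnan(inf * 0.0))   # True

# Signed infinities propagate through ordinary arithmetic.
print(1.0 / inf)    # 0.0
print(-2.0 * inf)   # -inf
```

This is exactly the propagation behaviour the IEEE FAQ quoted later in the article describes: exceptional values flow through a computation instead of halting it.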
In fact, Anderson does plan to profit from nullity, having registered on the 27th of July, 2006 a private limited company named Transreal Computing Ltd, whose mission statement is “to develop hardware and software to bring you fast and safe computation that does not fail on division by zero” and to “promote education and training in transreal computing”. The company is currently “in the research and development phase prior to trading in hardware and software”. In a presentation given to potential investors in his company at the ANGLE plc showcase on the 28th of November, 2006, held at the University of Reading, Anderson stated his aims for the company as:
• To build a transreal computer.
• To derive Maxwell’s equations and classical gravitation in transreal numbers so that they have no naked singularities (infinities affecting a neighbourhood of space).
• To collaborate with anyone who wants to unify Quantum Electro-Dynamics with gravitation.
To investors, Anderson makes the following promises:
• “I will help you develop a curriculum for transreal arithmetic if you want me to.”
• “I will help you unify QED and gravitation if you want me to.”
• “I will build a transreal supercomputer.”
He asks potential investors:
• “How much would you pay to know that the engine in your ship, car, aeroplane, or heart pacemaker won’t just stop dead?”
• “How much would you pay to know that your Government’s computer controlled military hardware won’t just stop or misfire?”
The current models of computer arithmetic are, in fact, already designed to allow programmers to write programs that will continue in the event of a division by zero. The IEEE’s Frequently Asked Questions document for the floating-point standard gives this reply to the question “Why doesn’t division by zero (or overflow, or underflow) stop the program or trigger an error?”: “The [IEEE] 754 model encourages robust programs.
It is intended not only for numerical analysts but also for spreadsheet users, database systems, or even coffee pots. The propagation rules for NaNs and infinities allow inconsequential exceptions to vanish. Similarly, gradual underflow maintains error properties over a precision’s range. “When exceptional situations need attention, they can be examined immediately via traps or at a convenient time via status flags. Traps can be used to stop a program, but unrecoverable situations are extremely rare. Simply stopping a program is not an option for embedded systems or network agents. More often, traps log diagnostic information or substitute valid results.” Simon Tatham stated that there is a basic problem with Anderson’s ideas, and thus with the idea of building a transreal supercomputer: “It’s a category error. The Anderson transrationals and transreals are theoretical algebraic structures, capable of representing arbitrarily big and arbitrarily precise numbers. So the question of their error-propagation semantics is totally meaningless: you don’t use them for down-and-dirty error-prone real computation, you use them for proving theorems. If you want to use this sort of thing in a computer, you have to think up some concrete representation of Anderson transfoos in bits and bytes, which will (if only by the limits of available memory) be unable to encompass the entire range of the structure. And the point at which you make this transition from theoretical abstract algebra to concrete bits and bytes is precisely where you should also be putting in error handling, because it’s where errors start to become possible. We define our theoretical algebraic structures to obey lots of axioms (like the field axioms, and total ordering) which make it possible to reason about them efficiently in the proving of theorems. We define our practical number representations in a computer to make it easy to detect errors. 
The Anderson transfoos are a consequence of fundamentally confusing the one with the other, and that by itself ought to be sufficient reason to hurl them aside with great force.” Geomerics, a start-up company specializing in simulation software for physics and lighting and funded by ANGLE plc, had been asked to look into Anderson’s work by an unnamed client. Rich Wareham, a Senior Research and Development Engineer at Geomerics with an MEng from the University of Cambridge, stated that Anderson’s system “might be a more interesting set of axioms for dealing with arithmetic exceptions but it isn’t the first attempt at just defining away the problem. Indeed it doesn’t fundamentally change anything. The reason computer programs crash when they divide by zero is not that the hardware can produce no result, merely that the programmer has not dealt with NaNs as they propagate through. Not dealing with nullities will similarly lead to program crashes.” “Do the Anderson transrational semantics give any advantage over the IEEE ones?”, Wareham asked, answering “Well one assumes they have been thought out to be useful in themselves rather than to just propagate errors but I’m not sure that seeing a nullity pop out of your code would lead you to do anything other than what would happen if a NaN or Inf popped out, namely signal an error.”
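The trap-versus-status-flag handling described in the IEEE FAQ excerpt above can be sketched in Python with NumPy, whose `errstate` machinery is an analogue of the standard's traps and flags (a sketch of the idea, not the IEEE API itself):

```python
import numpy as np

x = np.array([1.0, 0.0])

# "Trap" style: raise immediately when a division by zero or an invalid
# operation occurs, stopping the computation at that point.
trapped = False
try:
    with np.errstate(divide="raise", invalid="raise"):
        x / 0.0
except FloatingPointError:
    trapped = True
assert trapped

# "Status flag" style: let Inf and NaN propagate through the computation,
# then examine the results at a convenient time.
with np.errstate(divide="ignore", invalid="ignore"):
    y = x / 0.0
assert np.isinf(y[0]) and np.isnan(y[1])
```

The second style is what the FAQ means by letting "inconsequential exceptions vanish": the program keeps running and only inspects the exceptional values when it cares to.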
Ellipse Coefficient Map-Based Geomagnetic Fingerprint Considering Azimuth Angles The Advanced IT & Ship Convergence Center, Korea Maritime and Ocean University, Busan 49112, Korea The Research Development Center, Woori Engineering, Sejong 30054, Korea The Department of Electrical Engineering, Korea Polytechnics, Ulsan 44482, Korea The Division of Electronics and Electrical Information Engineering, Korea Maritime and Ocean University, Busan 49112, Korea Author to whom correspondence should be addressed. Submission received: 3 April 2019 / Revised: 11 May 2019 / Accepted: 21 May 2019 / Published: 23 May 2019 The geomagnetic fingerprint has been actively studied because of its high signal stability and positioning resolution, even after time has elapsed. However, since the three-axis geomagnetism signals measured at one position vary irregularly with the azimuth angle, a large database storing magnitudes for each angle is required for robust and accurate positioning under azimuth changes. To solve this problem, this paper proposes a novel approach, an ellipse coefficient map-based geomagnetic fingerprint. Unlike the general fingerprint, which stores the strength or magnitude of the geomagnetism signals depending on the position, the proposed algorithm minimizes the database size by storing an ellipse coefficient map built through the ellipse equation derived from the characteristics of 2-D magnetic vectors at each position. In addition, the curvature bias of the ellipse was reduced by applying the normalized linear least-squares method to the 2-D geomagnetic characteristics, and the positioning accuracy was improved by applying the weighted geomagnetic signal equalization method. 1. Introduction The fingerprint is a representative indoor positioning method that uses the differences in measured signal strength, such as Wi-Fi [ ], BLE (Bluetooth low energy) [ ], and geomagnetism [ ], according to the location.
The fingerprint is divided into a training phase that stores the signals at positions spaced at regular intervals, called reference points (RPs), and a positioning phase that compares the measured signal with the stored database in real time. A database is therefore essential, but the method is advantageous in complex indoor spaces because it is robust against NLOS (non-line-of-sight) environments compared with TOA (time of arrival), which uses the round-trip time of a radio signal. In particular, the geomagnetic fingerprint does not need separate APs (access points) installed in the indoor space, unlike Wi-Fi, Zigbee, BLE, etc. Therefore, the construction and maintenance costs of the fingerprint system are low, since there are no positioning errors caused by changes in the location or power of APs [ ]. Wi-Fi-based positioning is not affected by the azimuth angle, but geomagnetic signals require a large database because the signal varies with the azimuth angle. In addition, it is very difficult to estimate the initial start position without fixed APs. Recently, geomagnetic fingerprint studies have improved the positioning accuracy by constructing systems integrated with indoor positioning technologies such as Wi-Fi, and by correcting the position estimation error through filtering algorithms such as Kalman and particle filters. Most systems are based on the total intensity of the geomagnetism signal along the three axes. However, since geomagnetism signals change with the azimuth, large positioning errors may occur, even at the same position [ ]. Nguyen et al. [ ] proposed an algorithm that improves the positioning accuracy of the Wi-Fi fingerprint by integrating geomagnetism. This algorithm uses geomagnetism as an assistive technology at landmark locations for indoor positioning based on Wi-Fi. This method improves positioning accuracy compared to the existing Wi-Fi fingerprint algorithm.
However, an error may occur because the values of the total intensity of the geomagnetism signals stored for each reference point (RP), located at regular intervals, do not consider azimuth angles. Chung et al. [ ] proposed an indoor positioning algorithm based on fingerprints that considers the change of geomagnetism signals according to azimuth angles. In the training phase, using stepping motors, the values of the total intensity of the geomagnetism at all azimuth angles were stored in the database. In the positioning phase, the geomagnetism signals collected from four sensors facing 0, 90, 180, and 270° were compared with the signals from the database. The location at which the root-mean-square (RMS) difference was minimal was recognized as the position. Thus, this method reflects the change of geomagnetism signals according to the azimuth angles; however, it takes a long time to build the database and requires at least four sensors. Xie et al. [ ] proposed the magnetic fingerprinting based indoor localization (MaLoc) algorithm, which uses smartphones to acquire ambient magnetic measurements. The system consists of a client running on a smartphone and a server implementing a new reliability-enhancement particle filter. MaLoc does not require prior calibration owing to its self-calibration feature and does not place any restrictions on smartphone orientation. However, an error occurs in the position estimation using the geomagnetism of unmapped azimuth angles. As described above, with the geomagnetic fingerprint it is difficult to recognize the position from a signal at an angle that is not stored in the database. In addition, storing the signal for all azimuth angles in an indoor space is impractical.
To solve these problems, in this paper, we propose an ellipse coefficient map (ECM)-based fingerprint using the ellipse equation (UFEE) that integrates 2-D magnetic vectors and yaw-axis correction techniques to reduce the positioning error caused by azimuthal change in an indoor environment, using only a single geomagnetic sensor and a limited database. To verify the validity of the proposed algorithm, the UFEE results were compared with existing geomagnetism-intensity-based results. 2. Related Works 2.1. Signal Characteristics of 3-Axis Geomagnetism The magnetic vector of the earth consists of seven components, as shown in Figure 1. X is the geographical north (true north) at the north end of the earth’s rotation axis, Y is the geographical west, and Z is the vertical component of the geomagnetic field toward the earth’s nadir. F is the earth’s magnetic force, H is the horizontal magnetic force (magnetic north), and D is the angle (declination) between true north and magnetic north [ ]. The total magnitude of the geomagnetism output on the three axes is given by Equation (1):
$F = \sqrt{H_d^2 + R_d^2 + N_d^2}$ (1)
where $H_d$, $R_d$, and $N_d$ are the heading-direction, right-direction, and nadir-direction components, respectively, and the magnitude of the horizontal geomagnetic field is given by Equation (2). To find the azimuth angle in the body coordinate system, it is essential to transform to the horizontal coordinate system according to the current attitude angle, as in Equation (3):
$\begin{bmatrix} X_H \\ Y_H \\ Z_H \end{bmatrix} = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi & \cos\phi \end{bmatrix} \begin{bmatrix} m_x \\ m_y \\ m_z \end{bmatrix}$ (3)
where $\phi$ and $\theta$ are the roll angle and the pitch angle, $m_x$, $m_y$, and $m_z$ are the three-axis earth magnetic fields measured in the fuselage coordinate system, and $X_H$, $Y_H$, and $Z_H$ are the magnetic vectors in which the attitude angle is compensated.
Hence, the horizontal magnetic vectors $X_H$ and $Y_H$ can be obtained, and the azimuth angle follows from Equation (4) [ ]:
$\psi = \tan^{-1}\left(\dfrac{m_y\cos\phi - m_z\sin\phi}{m_x\cos\theta + m_y\sin\phi\sin\theta + m_z\cos\phi\sin\theta}\right) + L$ (4)
This azimuth is a very important factor in predicting movement, and the ideal geomagnetic vector according to the azimuth angle at one location forms a perfect circle, as shown in Figure 2. However, due to the inherent errors of the geomagnetic sensor (soft-iron effect) and the measurement errors caused by reinforced structures and magnetic objects (hard-iron effect), the geomagnetic vector traces various elliptical shapes as the angle changes, and its size is not constant either [ ]. Therefore, a conventional fingerprint that compares the magnitudes of geomagnetic vectors at each position suffers serious positioning errors from a simple azimuth change at a single position. To solve this problem, many geomagnetic vectors are saved in the database per angle, or a variety of communications and sensors are added. 2.2. Wi-Fi and Geomagnetic Fingerprint Indoor positioning methods using the acceleration, Wi-Fi, and geomagnetic sensors built into smartphones have been studied. In the case of an acceleration sensor, the application is simple, but it is necessary to specify the starting position, and it is difficult to track for a long time due to cumulative errors. On the other hand, a fingerprint using Wi-Fi or geomagnetism signals can solve this problem because it saves per-location characteristics in a DB. Figure 3 shows a flow chart of the Wi-Fi fingerprint. The fingerprint is divided into a training and a positioning phase. In the training phase, a user measures the signal strength at RPs located at regular intervals and stores these measurements in a database called a radio map.
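The attitude compensation of Equation (3) followed by the heading computation of Equation (4) can be sketched as below. This is a generic tilt-compensation routine under common sign conventions, not code from the paper, and the declination term L is taken as zero for the example:

```python
import math

def tilt_compensated_azimuth(mx, my, mz, roll, pitch, L=0.0):
    """Rotate body-frame magnetometer readings into the horizontal plane
    (Equation (3)) and compute the azimuth from the result (Equation (4))."""
    # Horizontal components X_H, Y_H after compensating roll (phi) and pitch (theta).
    xh = (mx * math.cos(pitch)
          + my * math.sin(roll) * math.sin(pitch)
          + mz * math.cos(roll) * math.sin(pitch))
    yh = my * math.cos(roll) - mz * math.sin(roll)
    # atan2 plays the role of tan^-1 with correct quadrant handling.
    return math.atan2(yh, xh) + L

# A level sensor (roll = pitch = 0) reading equal x and y field components
# should report a heading of about 45 degrees.
psi = tilt_compensated_azimuth(1.0, 1.0, 0.0, roll=0.0, pitch=0.0)
assert abs(math.degrees(psi) - 45.0) < 1e-9
```

Sign conventions for roll and pitch vary between magnetometer datasheets, so the exact placement of the sine terms should be checked against the sensor's frame definition.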
The optimum RP interval is determined according to the kind of signal applied to the fingerprint and the indoor environment. This interval is the resolution at which the fingerprint system can distinguish positions; the narrower the interval, the better the positional accuracy. However, since the standard deviation of the RSSI measured at one RP is large, RPs are placed at intervals of 2 to 3 m, the smallest unit that can distinguish positions. The RSSIs measured at each RP have a unique strength depending on the relative distance between the APs and the RPs and the existence of obstacles, and these strengths are stored in the DB. In the positioning phase, based on the stored DB, a user carrying a smartphone moves in real time, and the measured RSSIs are compared with the DB to estimate the user’s position as the location of the most similar RP. For real-time processing, deterministic or probabilistic models [ ] with low computational complexity are mostly applied in the position estimation algorithm. Unlike the Wi-Fi fingerprint, which measures the RSSI received from APs installed indoors, the geomagnetic fingerprint does not require separate APs and can perform positioning with only a mobile phone. Thus, the geomagnetic fingerprint uses the magnitudes of the three-axis geomagnetic vector measured by the geomagnetic sensor built into the smartphone, instead of the RSSI provided by wireless communications. Figure 4 shows a traditional geomagnetic fingerprint. The 3-axis geomagnetism signals measured at each RP are stored in the DB, and the position of the user is estimated through real-time comparison. Creating the database is a very important process that determines location accuracy. There are two methods to store 3-axis geomagnetism signals in a database. One method is to fix the azimuth angle at one angle.
This method is advantageous for tracking when the attitude is constant, but there is a large error in recognizing a change of the azimuth angle at the same RP. Another method is to store all the 3-axis geomagnetism signals measured for each angle at one RP. This method is robust against changes of the azimuth angle, but it has the disadvantage that the size of the DB increases threefold. Therefore, the conventional methods have limited ability to recognize the position with a single sensor, and we propose a novel method, the ECM-based UFEE, that can estimate the position with only the geomagnetic sensor. 3. Proposed ECM-Based UFEE Figure 5 shows the proposed ECM-based UFEE. The proposed UFEE is divided into the training phase, in which an ECM is created using the geomagnetic vectors measured at each angle, and the positioning phase, in which the user position is estimated. In the training phase, to enable position recognition by angle, five randomly measured 3-axis geomagnetism signals are collected by rotating the geomagnetic sensor at each RP. To reduce the curvature deviation of the ellipse estimated from the five collected data points, the normalized least-squares method is applied to the measured x- and y-axis geomagnetism signals. In addition, the weighted geomagnetic signal equalization method, with an optimal weight derived from iterative computation, is applied to the z-axis signals to keep the relatively very large z-axis signals on the same scale as the x- and y-signals. Through this method, the proposed ECM is generated. In the positioning phase, the positions of the user are estimated by comparing the geomagnetism signals measured in real time through the user’s smartphone with the ECM produced in the training phase.
The measured geomagnetism x-, y-, and z-axis signals are substituted into the ellipse equations given by the ellipse coefficients stored in the ECM, and the most dominant position among these equations is estimated. 3.1. Proposed UFEE in the Training Phase Unlike the conventional method of storing geomagnetic vectors in a radio map, the proposed UFEE is a novel approach that not only reduces the size of the radio map by storing the ellipse coefficients but is also robust to angle changes. Generally, owing to the soft-iron and hard-iron effects, the 2-D magnetic vector over all azimuth angles at one position appears as an ellipse, as shown in Figure 6. The x-axis represents the magnetic vector x measured by the sensor, and the y-axis represents the measured magnetic vector y. The points represent 36 geomagnetism signals measured at 10-degree intervals at one position. The 2-D magnetic vector characteristic output in the form of an ellipse is expressed by the ellipse equation shown in Equation (5):
$aMV_x^2 + bMV_xMV_y + cMV_y^2 + dMV_x + eMV_y + f = 0$ (5)
where $MV_x$ and $MV_y$ are the x- and y-axis values of the magnetic vectors, and a, b, c, d, e, and f are the ellipse coefficients that determine the rotation angle, center point, major radius, and minor radius of the ellipse, defined as EC. To obtain the ellipse coefficients and to reflect the characteristics unique to each position, 2-D magnetic vectors at a minimum of five angles are required. Therefore, to minimize the measurement process, the proposed UFEE measures the signal at five arbitrary angles (the red dots) while rotating through 360 degrees. As shown by the red dots in Figure 6, the ellipse equation is derived by acquiring five 2-D magnetic vectors collected at 0, 70, 140, 210, and 280°, respectively. By acquiring five signals, five equations can be obtained, and thus the variables in Equation (5) can be derived.
Since only this reduced set of measurements is used, transforming the values into an ellipse equation introduces a curvature deviation, resulting in an error between the ellipse obtained from measurements at 10-degree intervals and the proposed one. Therefore, the normalized linear least-squares method is applied to minimize this curvature deviation. To avoid the situation where all coefficients have a value of 0, the parameter f is normalized by setting f = 1, which is an optimal coefficient for an ellipse having a small radius of curvature around the origin [ ]. The normalized least-squares solution can then be expressed in terms of the pseudo-inverse as
$EC = (A^T A)^{-1} A^T B = (a\;\; b\;\; c\;\; d\;\; e)^T$
where each row of $A$ contains $\left(MV_{x,\alpha}^2,\; MV_{x,\alpha}MV_{y,\alpha},\; MV_{y,\alpha}^2,\; MV_{x,\alpha},\; MV_{y,\alpha}\right)$ for $\alpha \in \{0°, 70°, 140°, 210°, 280°\}$, and $B = (-1\;\; -1\;\; -1\;\; -1\;\; -1)^T$. Thus, when the 2-D magnetic vectors at five angles are collected at the same position, the optimal ellipse coefficients can be obtained through the normalized least-squares method. The obtained ellipse coefficients are conversion equations for the measured geomagnetism x- and y-axis values. Therefore, the acquired z-axis geomagnetism values must also be considered as a positioning factor in order to improve the positioning accuracy. In general, the Euclidean distance, the position estimation method used in the positioning phase of the indoor positioning system, is the simplest and fastest algorithm for determining the similarity between two values to be compared.
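The pseudo-inverse fit $EC = (A^T A)^{-1} A^T B$ described above is a direct least-squares solve. A sketch with NumPy, using synthetic points on a known ellipse rather than measured geomagnetism data:

```python
import numpy as np

def fit_ellipse(xs, ys):
    """Fit a*x^2 + b*x*y + c*y^2 + d*x + e*y + 1 = 0 (f normalized to 1)
    to five or more points via least squares: EC = (A^T A)^-1 A^T B."""
    # Each row of A holds (x^2, x*y, y^2, x, y) for one sample point.
    A = np.column_stack([xs**2, xs * ys, ys**2, xs, ys])
    B = -np.ones(len(xs))  # moving f = 1 to the right-hand side
    coeffs, *_ = np.linalg.lstsq(A, B, rcond=None)
    return coeffs  # (a, b, c, d, e)

# Five samples at the paper's angles of the ellipse (x/2)^2 + y^2 = 1,
# i.e. -0.25*x^2 - y^2 + 1 = 0 under the f = 1 normalization.
t = np.radians([0, 70, 140, 210, 280])
a, b, c, d, e = fit_ellipse(2.0 * np.cos(t), np.sin(t))
assert np.allclose([a, c], [-0.25, -1.0])
assert np.allclose([b, d, e], 0.0)
```

With exactly five points the system is square and the fit is exact; with more points `lstsq` returns the least-squares solution, which is what reduces the curvature deviation from noisy measurements.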
Since this algorithm sums the differences of all the values to be compared, when any one of these values is very large, the result is greatly influenced by that value. Conversely, when any one of these values is very small, it has very little effect on the positioning, small enough to be ignored. To improve the positioning accuracy, the variables compared in the positioning phase include not only the proposed ellipse coefficients (a to f) but also the z-axis geomagnetism values measured in real time. However, the z-axis geomagnetism values are relatively too large compared to the ellipse coefficients. Therefore, in order for the ellipse coefficients and z to carry the same weight in the positioning phase, the z-axis geomagnetism values are equalized before being stored in the database. The weighted geomagnetic signal equalization method is defined as
$EQ_z = Ratio \times Geo_z$
where $EQ_z$ is the z-axis geomagnetism value compensated by the weighted geomagnetic signal equalization method, $Ratio$ is the ratio used to compensate the z-axis geomagnetism value, and $Geo_z$ is the measured z-axis geomagnetic value. $Ratio$ is determined from the geomagnetism values measured to produce the database at each RP. Based on the z-axis geomagnetism values and the a to f values derived from the x- and y-axis measurements, the optimal $Ratio$ is derived through iterative comparison of the Euclidean-distance positioning results against the real measured positions. Since the range of measured geomagnetism differs for each indoor environment, it is essential to derive a $Ratio$ suited to the applied indoor environment.
After the weighted geomagnetic signal equalization method is applied, the database is created using the ellipse coefficients and $EQ_z$. Figure 7 shows the DB structure built from the EC via the ellipse equation and the z-values compensated via the weighted geomagnetic signal equalization method. Unlike existing databases, which store geomagnetism intensity and angular values, the proposed database stores the elements derived from the geomagnetism x-, y-, and z-axis values measured five times at each RP. The x-axis represents the RPs, i.e., the measurement positions, and the y-axis consists of the ellipse coefficients at each RP and $EQ_z$. As mentioned above, the value of f is equal to 1, so it does not need to be stored separately. Therefore, in the positioning phase, it is possible to estimate the position at all angles by storing only five ellipse coefficients and one compensated yaw-axis value per position. 3.2. The UFEE in the Positioning Phase The positioning phase estimates the position based on the ellipse coefficient map constructed in the training phase. The 2-D magnetic vectors and geomagnetism z-values collected in real time at an arbitrary position can be expressed as
$Geomagnetism_{RT} = \begin{bmatrix} MV_{x,RT} & MV_{y,RT} & Geomagnetic_{z,RT} \end{bmatrix}$
where $Geomagnetism_{RT}$ is the real-time geomagnetism value, $MV_{x,RT}$ and $MV_{y,RT}$ are the x- and y-values of the real-time magnetic vectors, and $Geomagnetic_{z,RT}$ is the real-time geomagnetism z-value.
(1) The real-time 2-D magnetic vectors are evaluated against the ellipse equations stored in the ellipse coefficient map, which can be expressed as
$\begin{cases} a_{RP1}MV_{x,RT}^2 + b_{RP1}MV_{x,RT}MV_{y,RT} + c_{RP1}MV_{y,RT}^2 + d_{RP1}MV_{x,RT} + e_{RP1}MV_{y,RT} = -f_1 + 1 \cong 0 \\ a_{RP2}MV_{x,RT}^2 + b_{RP2}MV_{x,RT}MV_{y,RT} + c_{RP2}MV_{y,RT}^2 + d_{RP2}MV_{x,RT} + e_{RP2}MV_{y,RT} = -f_2 + 1 \cong 0 \\ \vdots \\ a_{RPn}MV_{x,RT}^2 + b_{RPn}MV_{x,RT}MV_{y,RT} + c_{RPn}MV_{y,RT}^2 + d_{RPn}MV_{x,RT} + e_{RPn}MV_{y,RT} = -f_n + 1 \cong 0 \end{cases}$
where a, b, c, d, and e are the ellipse coefficients of each reference point, and n is the number of reference points. Since f is set to one in the training phase, f is shifted to the right-hand side and compensated by one in the positioning phase, so that the absolute value of the left-hand side is taken. Therefore, the minimum f value yields the estimated position. (2) The real-time geomagnetism z-values are compared via the $Compensated_{yaw}$ values in the ellipse coefficient map, which can be expressed as
$Compared_{yaw,i} = Compensated_{yaw,RPi} - Compensated_{yaw,RT} \cong 0, \quad i = 1, \dots, n$
where $Compensated_{yaw,RP}$ represents the compensated geomagnetism z-values of the reference points stored in the ECM, $Compensated_{yaw,RT}$ represents the compensated real-time geomagnetism z-value, and $Compared_{yaw}$ is derived by comparing the absolute values of $Compensated_{yaw,RP}$ and $Compensated_{yaw,RT}$. Therefore, the minimum $Compared_{yaw}$ value yields the estimated position. The result of method (2) is smoothed with the result of method (1), so that no weighting is introduced in either method. Figure 8 shows the results for the optimal $Ratio$ in the basic experiment.
Here, the x-, y-, and z-axes refer to the reference points, $Ratio$, and positioning accuracy, respectively. The positioning accuracy over 1000 iterations was compared for $Ratio$ in the $10^{-4}$ to $10^{2}$ range based on the collected 2-D magnetic vectors and geomagnetism z-values. When dividing the sections along the y-z and x-z axes, the curvature and average positioning accuracy are maximized at $Ratio = 10^{-2}$. The optimal $Ratio$, i.e., the weighting for each component, is therefore $10^{-2}$. After applying the ratio, the real-time comparison against the ellipse coefficient map proceeds as
$Dist(i) = \sum_{j=1}^{n}\left(|f_j| + |Compensated_{yaw,j}|\right)^2$
Here, the $i$ with the smallest value among the $Dist$ values calculated for each position is estimated as the final position of the user; this means the difference between the stored values and the real-time measurement is smallest. 4. Experiment Setup and Results 4.1. Experiment Setup As with the Wi-Fi fingerprint, which has been widely studied in indoor positioning, a 3 m reference point interval is commonly designated. Therefore, the reference points were set at intervals of 3 m and the system was constructed; in this case, two systems can be built at the same time. The experimental spaces are shown in Figure 9. To verify the geomagnetic characteristics and positioning accuracy, we selected the 1st building of the Korea Maritime and Ocean University, which consists of indoor common corridors and rooms. Experiments were conducted on the 2nd to 4th floors to verify the characteristics in a space of 48 m × 30 m, with 23 reference points collected per floor. A low-cost geomagnetic sensor (YAS537), installed in many smartphone devices, was used to collect the geomagnetism values. The geomagnetic signals were collected at 30 data points, with a 10° rotation sensor resolution at each reference point.
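Putting the ellipse-residual and yaw comparisons together, the positioning step reduces to evaluating each stored ellipse at the real-time vector and taking the minimum of $Dist$. A sketch with two hypothetical ECM entries (illustrative coefficient values, not data from the paper):

```python
def estimate_position(ecm, mv_x, mv_y, eq_z):
    """ecm: one (a, b, c, d, e, EQ_z) tuple per reference point.
    Returns the index of the RP minimizing the Dist metric."""
    dists = []
    for a, b, c, d, e, z_rp in ecm:
        # Residual of the ellipse equation; ~0 when the real-time
        # vector lies on this RP's stored ellipse (f normalized to 1).
        f = abs(a * mv_x**2 + b * mv_x * mv_y + c * mv_y**2
                + d * mv_x + e * mv_y + 1.0)
        # Equalized z-axis comparison against the stored EQ_z.
        yaw = abs(z_rp - eq_z)
        dists.append((f + yaw) ** 2)
    return min(range(len(dists)), key=dists.__getitem__)

# Two hypothetical RPs: a unit circle and a wider ellipse, with
# equalized z-values 0.3 and 0.7.
ecm = [(-1.0, 0.0, -1.0, 0.0, 0.0, 0.3),
       (-0.25, 0.0, -1.0, 0.0, 0.0, 0.7)]
# A measurement lying on the unit circle with z near RP 0's stored value.
assert estimate_position(ecm, 1.0, 0.0, 0.31) == 0
```

Only six numbers per reference point are needed at query time, which is the database-size saving the ECM provides over storing magnitudes for every azimuth angle.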
Fifteen data points were obtained at the 0, 70, 140, 210, and 280° azimuths to construct the ellipse coefficient map, and the other 15 data points were used for positioning at each reference point. 4.2. Experiment Results The positioning results using the geomagnetic signals obtained according to the azimuth angle are shown in Figure 10. The x-axis is the reference position, and the y-axis is the frequency with which each position was obtained. Experiments were performed based on 15 values per angle over 36 angles (36 × 15 = 540). Here, the blue dots and line show the magnitude-of-geomagnetic-intensity-based fingerprint (MGIF), in which the average total geomagnetic intensity over all directions at the same position is stored in the database and the location is estimated from the geomagnetic intensity. The yellow dots and line show the fingerprint based on the ellipse equation (UFEE), which estimates the position by computing the measured vector against the ellipse coefficient map. The red dots and line indicate the position recognition results using the smoothing technique in the proposed method. Figure 11 shows the positioning results from the existing method (MGIF) [ ] and the UFEE for each floor. The x- and y-axes represent the reference points and position results, respectively, with the positions estimated along the movement path. The results are displayed as a confusion matrix to express all the estimated positions according to the reference points. The positioning accuracy improves as the x = y diagonal becomes clearer. The display color becomes hotter (yellow) as more points recognize the same position and colder (blue) as fewer points do.
Because the MGIF uses the total intensity of the geomagnetism, it is insensitive to changes in the individual axis signals that leave the total value unchanged, and it erroneously recognizes multiple different positions having approximately similar values. In contrast, since the UFEE compares measurements with the EC extracted from the geomagnetism values of each reference point, the recognition rate remains high even when the geomagnetism values change, and positioning is further improved because the signal is equalized to reflect the characteristics of the individual axis signals. The confusion matrix shows that the existing method deviated substantially from the diagonal along which accurate positioning is judged, whereas the UFEE's estimated positions match most reference points. The positioning accuracy and distance error results for the MGIF and UFEE are provided in Table 1 and Table 2. 5. Conclusions Existing geomagnetism-based indoor positioning has been actively studied because its received signals are stable and its positioning accuracy, and hence positioning resolution, remains high even over time. However, positioning errors occur because the geomagnetism values are not constant under azimuth changes, even at the same location. In this paper, we proposed UFEE, a novel geomagnetic-fingerprint-based indoor positioning algorithm that fuses 2-D geomagnetic vectors with the weighted geomagnetic signal equalization method. In the proposed UFEE, the curvature bias is reduced by using the ellipse coefficient map of the geomagnetism based on the normalized linear least-squares method, even with a reduced database size, and the positioning accuracy is improved by applying the weighted geomagnetic signal equalization method.
To verify the validity of the proposed algorithm, in a general indoor space of 48 m × 30 m, the results of the proposed method were compared with those obtained from existing research based on geomagnetic intensity. The results show that the positioning accuracy was improved by 62.14% and the error distance was reduced by 3.98 m.

Author Contributions: Conceptualization, J.-H.S. and S.-H.L.; methodology, J.-H.S.; software, S.-H.L.; validation, K.-K.Y. and D.-H.S.; formal analysis, K.-K.Y.; investigation, K.-K.Y.; resources, J.-H.S.; data curation, S.-H.L.; writing, original draft preparation, J.-H.S.; writing, review and editing, J.-H.S.; visualization, S.-H.L.; supervision, D.-H.S.; project administration, D.-H.S.; funding acquisition, D.-H.S.

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2016R1D1A1B03934812). This paper is based on Seung-Hyun Lee's master's thesis ("A Study on Indoor Positioning using 3-Dimensionalization Geomagnetic Fingerprint", Graduate School of Korea Maritime University).

Conflicts of Interest: The authors declare no conflict of interest.

Figure 10. The acquired positioning frequencies of existing and proposed methods: (a) second floor; (b) third floor; (c) fourth floor.

Figure 11. Positioning results by MGIF and proposed UFEE as (a) a confusion matrix of positioning at the second floor; (b) a confusion matrix of positioning at the third floor; (c) a confusion matrix of positioning at the fourth floor.

Table 1. Positioning accuracy (<1.5 m) of the MGIF and the proposed UFEE.

Floors    MGIF [10]   Proposed UFEE
2F        39.01%      91.85%
3F        15.80%      89.14%
4F        26.17%      86.42%
Average   27.00%      89.14%

Table 2. Distance errors of the MGIF and the proposed UFEE.

Floors   Value   MGIF [10]   Proposed UFEE
2F       Min     1.00 m      0.20 m
         Max     12.00 m     7.45 m
3F       Min     1.00 m      0.67 m
         Max     12.00 m     4.53 m
4F       Min     1.00 m      0.13 m
         Max     11.40 m     6.27 m

© 2019 by the authors. Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/

Seong, J.-H.; Lee, S.-H.; Yoon, K.-K.; Seo, D.-H. Ellipse Coefficient Map-Based Geomagnetic Fingerprint Considering Azimuth Angles. Symmetry 2019, 11, 708. https://doi.org/10.3390/sym11050708
The relationship between digitalization and profitability: A cross-sectional study of firms in the Norwegian shipping industry

GRA 19703 Master Thesis, Master of Science in Business
Åshild Turmo Lornstad, Mari Rosvoll Viken
Start: 15.01.2021 09.00 Finish: 01.07.2021 12.00

Abstract

In this paper, we study the relationship between profitability and the level of digitalization of firms in the Norwegian shipping industry. We measured the digitalization level by using a Likert scale in a survey sent to companies within this industry. Our main goal was to investigate whether there is a correlation between profitability and the level of digitalization, and to identify whether this correlation is positive or negative. The subject was chosen because of the lack of research within this field. In addition to cross-sectional data on digitalization, we collected data on the years since digitalization started, ROA as a measurement of profitability, and the age and size of each company, all calculated from 2019. We retrieved 39 usable responses to our survey, which formed our sample. The results of our regression models were inconclusive, meaning that we could neither reject the null hypothesis nor support the alternative hypothesis. Because digitalization only affects the operating part of a company, drivers determined by the market remain uninfluenced. We believe our results are a consequence of not filtering out these unaffected drivers, rather than of digitalization having no impact on the profitability of firms in the Norwegian shipping industry.

Acknowledgements

Firstly, we would like to express our deep gratitude to our supervisor, Siv Staubo.
Her knowledge of and interest in the shipping industry gave us valuable information about the topic of our master thesis. Furthermore, we would like to thank all the participants in our survey and those who helped spread it. The responses made it possible for us to write this thesis. Lastly, we wish to extend a big thank you to our connections in Kongsberg Maritime. Their expertise gave us much cherished insight into the Norwegian shipping industry and valuable help when constructing the survey. Thank you!

Table of Contents

Chapter 1: Introduction
  1.1 The Research Question
  1.2 Background information
    1.2.1 The shipping industry in Norway
    1.2.2 Digitalization in the shipping industry
  1.3 Key drivers
    1.3.1 Demand and supply
    1.3.2 Exchange rates
    1.3.3 Freight rates
    1.3.4 The cost of running ships
  1.4 Our motivation and contribution
Chapter 2: Theoretical Methodology
  2.1 Choice of Subject
  2.2 Preconceptions
  2.3 Methodology
Chapter 3: Literature Review and Theory
  3.1 Previous Research
  3.2 Literature Review
    3.2.1 Narrative Literature Review
    3.2.2 Systematic Literature Review
  3.4 Theories
Chapter 4: Research Methodology
  4.1 Data Selection
    4.1.1 Cross-sectional study
    4.1.2 Industry and Geographical Delimitations
    4.1.3 Research on digitalization level
    4.1.4 Profitability
    4.1.5 Number of observations
    4.1.6 Number of Dispatch
  4.2 Data Collection
    4.2.1 Data on digitalization
    4.2.2 Survey
    4.2.3 Net Income, Total Assets, Company Age and Company Size
  4.3 Data Processing
    4.3.1 Data Handling
    4.3.2 Calculating the Level of Digitalization
  4.4 Variables
    4.4.1 Return on Assets (ROA)
    4.4.2 Digitalization
  4.5 Model
    4.5.1 Multiple Linear Regression Model
    4.5.2 Dependent variable
    4.5.3 Independent Variables
    4.5.4 Models
    4.5.5 Simple Linear Regression Model
    4.5.6 Robust Linear Regression
  4.6 Hypotheses & Hypothesis Testing
    4.6.1 Hypothesis Testing
    4.6.2 The Hypotheses
    4.6.3 Significance Level
  4.7 Multicollinearity
  4.8 Omission of an important variable
  4.9 Zero conditional mean and normality
  4.10 Homoskedasticity
  4.11 Sample Variation in the Explanatory Variable
Chapter 5: Empirical Results & Analysis
  5.1 Descriptive Statistics
  5.2 Test for Multicollinearity
  5.3 Residuals
  5.4 Test for heteroskedasticity
  5.5 Regression Models
    5.5.1 Regression Model 1
    5.5.2 Regression Model 2
    5.5.3 Regression Model 3
    5.5.4 Regression Model 4
    5.5.5 Regression Model 5
    5.6.1 Model fit
    5.6.2 Outliers
    5.6.3 Robust Regression Model
  5.7 Hypothesis testing
Chapter 6: Discussion
  6.1 The results
  6.2 Sample size
  6.3 Key drivers
    6.3.1 Digitalization and market price on ships
    6.3.2 Digitalization and exchange rates
    6.3.3 Digitalization and freight rates
    6.3.4 Digitalization and the cost of running ships
Chapter 7: Conclusion
  7.1 Conclusion
  7.2 Contribution
  7.3 Further Research
  7.4 Unforeseen challenges
  7.5 Social and Ethical Considerations
References

Chapter 1: Introduction

1.1 The Research Question

Several industries around the world have digitalized their business in order to become more profitable. This can be done either by decreasing costs, by increasing customer satisfaction and thereby revenues, or both.
However, sometimes an industry is not ripe for this change. There are numerous examples of failed digital transformation processes across industries; companies such as GE, Ford and Procter & Gamble have all failed at digital transformation (Morgan, 2019). That one industry is ready to be digitalized does not necessarily mean that other industries will succeed with digitalization. The shipping industry has not been particularly forward-leaning with regard to digitalization, but in the past few years it has shown an increased interest in this field. In our master thesis, we want to look into the relationship between digitalization in the Norwegian shipping industry and profitability. Are there any visible trends? We want to investigate whether the industry is ready for digitalization, or whether it is wiser to let the industry ripen a bit longer. Based on data collected on the level of digitalization from companies in the Norwegian shipping industry, we want to investigate whether there is a correlation between digitalization and profitability. For this reason, we have chosen to articulate the following research question: "Is there either a negative or positive correlation between digitalization and profitability in the Norwegian shipping industry?"

1.2 Background information

1.2.1 The shipping industry in Norway

Norway became one of the leading shipping nations in the second half of the 19th century (Grytten & Koilo, 2019) and is still one of the major maritime nations in the world (Reve, 2009). Over time, the Norwegian maritime industry has developed into one of the world's most complete maritime environments, consisting of businesses in all parts of the value chain and with strong positions in specialized segments (Regjeringen, 2020). The industry is among the largest and most important in Norway, with a total value creation of nearly 175 billion and more than 110 thousand employees (Innovasjon Norge, 2020).
According to Torger Reve (2009), the unique combination of ocean, technology and knowledge is the main factor that has led the Norwegian shipping industry into its leading position. In addition, Norway has one of the highest cost levels in the world (Reve, 2009). Compared to other nations within the industry, Norwegian shipowners therefore have to continuously develop smarter and more cost-efficient solutions to stay competitive. Because of this, we found it even more interesting to narrow our research down to the Norwegian shipping industry.

1.2.2 Digitalization in the shipping industry

The global shipping industry as a whole appears quite conservative in terms of digitalization compared to industries such as automotive and aviation. The slow pace of innovation within the industry can be explained by its network of players connected both horizontally and vertically, with a heavy use of standards (Arduino, et al., 2013). Moreover, since liner shipping is characterized by network problems, a solid solution for the whole network is required when investing in new technology (Bavassano, Ferrari, & Alessio, 2020). As a consequence, the cost of investing increases exponentially. The combination of high investment costs and a lack of evidence supporting payoff from digital investments can explain why shipowners are reluctant to digitalize (Jotun Marine, 2018). Another barrier is cyber security, as some believe that more technology increases the risk of hacking. Nevertheless, there is an ongoing interest in digitalizing the shipping industry, linked to the possibility of reducing costs and making transport services more efficient. Several of the new digital technologies recently introduced within the industry are Internet of Things (IoT), Artificial Intelligence (AI), data analytics and blockchain (Bavassano, Ferrari, & Alessio, 2020).
The new digital technologies include satellites, telematics and management systems, which give shore-based staff the opportunity to influence the ship's daily operations. In addition, digitalization helps to gather, process and make available the information necessary for better decision making (Splash 24/7, 2017). Shipping analyst Martin Stopford believes that shipping companies could reap great benefits from implementing new digital technology. However, Stopford argues that this implementation will take time, because the business model is not yet ready for new technology and must therefore be changed first. Changing the business model includes implementing smart ships, introducing smart fleets with integrated management systems, and implementing smart global logistics (Jotun Marine, 2018). We find Stopford's statement interesting, but because of the lack of research, we will not use it as a basis for our study.

1.3 Key drivers

There are several key drivers in the shipping industry that affect the cash flow and the financial results of a company. We will now consider some of the most important ones.

1.3.1 Demand and supply

Many companies within the shipping industry sell or buy ships, or both, on a regular basis. Since it is a competitive market, the price in this business depends on demand and supply. The demand is influenced by shipping freight rates, second-hand prices, market expectations and sentiment, and liquidity and credit availability, while the supply is influenced by available shipbuilding berths, shipyard unit costs, exchange rates and production subsidies (Stopford, 2009, p. 631). Several factors thus affect the demand and supply, and therefore the price, in this market. A high market price when buying ships leads to higher costs, while a low market price when selling ships lowers income.
The effect of the market price on the financial results may therefore be decisive.

1.3.2 Exchange rates

The shipping industry is highly international. As discussed earlier, exchange rates affect the supply of ships, and they can also affect a company's financial results directly. Consider a company that builds ships in Norway and sells to customers worldwide. The majority of its expenses will be in the domestic currency, while its main income will be in a foreign currency. Although the exchange rate does not affect the expenses, it can be crucial for the size of the income and the present value of a specific sale transaction.

1.3.3 Freight rates

Freight rates are a highly important driver for shipowners, since high freight rates directly increase earnings. Moreover, high freight rates increase the demand for new ships, which is easily explained by the fact that shipowners want to expand their fleets while the business is more profitable (Stopford, 2009, p. 631).

1.3.4 The cost of running ships

While freight rates are important for the income, the cost of running ships is important for the expenses. These costs can be divided into five categories: operating costs, periodic maintenance, voyage costs, cargo-handling costs and capital structure (Stopford, 2009, p. 225). While a single company's digitalization will not affect the market-driven drivers above, it may well affect the cost of running ships, and thereby the total profitability. Based on this assumption, we identify this driver as the most important in light of our research question.

1.4 Our motivation and contribution

In the early stages of this thesis, we contacted Kongsberg Maritime and their office at Grilstad, Trondheim in Norway. They told us about their ongoing process of converting manual operations into digitalized procedures and the large costs connected to such a transformation.
This made us think about the profitability attached to these investments. Knowing that several industries have failed with investments in digitalization, we wanted to investigate whether the shipping industry in Norway profits from such investments or not. According to Ferreira et al. (2019), the impact of digitalization on firm performance is a largely unexplored topic. In addition, there is no research looking at digitalization and profitability in the Norwegian shipping industry. Our contribution is therefore a unique study which we know, after talking with Kongsberg Maritime, the industry is interested in.

Chapter 2: Theoretical Methodology

2.1 Choice of Subject

Due to the gap in research within the shipping industry, we found it interesting to study the impact digitalization has on profitability in this specific industry. We also saw that Norway has been forced to be more innovative than other nations within the industry, and we therefore narrowed our research down to the Norwegian shipping industry.

2.2 Preconceptions

As mentioned, the shipping industry is a conservative industry. In addition, it is said to be highly volatile, which makes it difficult to predict the results of our analysis. Since this is an unexplored topic, we have no preconceptions about the outcome.

2.3 Methodology

Data on digitalization are neither numerical nor publicly available; we will therefore collect them using a quantitative method. A survey sent to several companies within the shipping industry will give us the information needed to calculate the digitalization level of each company. Data collected from both the survey and the data source proff.no will be the input to our research. From proff.no we will retrieve data for each of the companies that answered our survey. The collected data will be entered into Excel to get a good overview of the numbers and calculations. Then, we will use MATLAB to perform both regression analysis and hypothesis testing.
Chapter 3: Literature Review and Theory

3.1 Previous Research

A previous study on digitalization within the banking sector reported results indicating that profitability is not affected by the sheer level of digitalization in a bank. The data used in the study were collected through a survey answered by 102 banks in Germany, Switzerland and Luxembourg. The level of digitalization was measured by asking the respondents about the degree of digitalization within the company; the other questions were based on a five-point Likert scale. As the authors note, their measured level of digitalization suffers from the respondents' self-interpretation of how digitalized a business model is. For further studies, they recommend including the types of digitalization tools used and the different departments' use of them (Niemand et al., 2020). However, a report by the Bank of Finland states that digitalization of banks is expected to increase profitability in the long run. It argues that measures to improve long-run profitability, such as investments in digitalization, will decrease profitability in the short run, because investments in digital solutions demand substantial resources and capital (Koskinen & Manninen, 2019). Another relevant study examines the impact of digitalization on profitability in small and medium-sized private healthcare companies in Finland. The data were obtained from a national survey in Finland answered by 680 private healthcare companies. Earlier studies had indicated that digitalization impacts profitability both positively and negatively. This study used two different performance measurements, EBIT and ROA. With EBIT as the measurement, the results indicated that digitalization negatively affected the companies' profitability, especially for small companies. Meanwhile, when performance was measured with ROA, digitalization in innovation processes had a positive effect on profitability.
The study concluded that company size and business area are decisive for how digitalization impacts profitability, and that the relation is also affected by the chosen profitability indicator (Holopainen, Niskanen, & Rissanen, 2019).

3.2 Literature Review

There are many types of literature reviews, for example narrative, systematic, meta-analysis and meta-synthesis (O'Gorman & MacIntosh, 2015, p. 31). In our study, we have used both a narrative and a systematic literature review.

3.2.1 Narrative Literature Review

The purpose of this type of literature review is to analyze and summarize a body of literature. That is "achieved by presenting a comprehensive background of the literature within the interested topic to highlight new research streams, identify gaps or recognize inconsistencies" (O'Gorman & MacIntosh, 2015, p. 31). After deciding on the topic for our thesis, we saw that there was no existing research on digitalization and firm profitability in the shipping industry in Norway. Here we identified a gap in previous research, and therefore decided to take a deeper look at the subject.

3.2.2 Systematic Literature Review

A systematic literature review can be defined as the attempt "to identify, appraise and synthesize all the empirical evidence that meets pre-specified criteria to answer a given research question" (Piper, 2013, p. 2). Since we found a gap in the literature we wanted to fill, we had to look at similar research. We found many studies that looked at the relationship between profitability and an independent variable. This was highly useful for us, both for seeing how other studies had defined profitability and for choosing which control variables to include in our models.

3.4 Theories

We chose to research the relationship between digitalization and profitability within the shipping industry in Norway. The industry was chosen for the various reasons mentioned earlier.
Results from previous studies in other sectors indicated both positive and negative impacts of digitalization on profitability, although this relation has not been studied within the shipping industry.

Chapter 4: Research Methodology

4.1 Data Selection

4.1.1 Cross-sectional study

Before starting to collect data, we chose which data to include in our research. The first thing we took into consideration was the time aspect, for which there are several different study designs to choose from. A time series is one such design; this type of research requires that every individual is observed on several different occasions (Lantz, 2014). For our research, it would be useful to look into the relationship between the variables in years before 2019, but this would require either many time-consuming follow-ups or a highly advanced survey. The opposite of a time series study is a cross-sectional study, where each individual is observed only once (Lantz, 2014). This type of study has a one-time point of view and provides a 'snapshot' of the situation (Levin, 2006). With a cross-sectional study, we would not lose any data to missing follow-up, and the survey would not be very time-consuming for the participants. Based on this, we believed that a cross-sectional study was the best fit for our research. A cross-sectional design does come with disadvantages. The main one we must take into consideration when discussing our results is the possibility of a different result had we chosen another timeframe (Levin, 2006).

4.1.2 Industry and Geographical Delimitations

Since we found it interesting to look only at the Norwegian shipping industry, we delimited our search to Norwegian shipping companies. Within Norway, we had no other geographical delimitations.
In order to obtain an overview of the different shipping companies in Norway, we chose to focus on the companies within the industry "Shipping and sea transport" at proff.no. In addition, we added some companies with "rederi" in their name.

4.1.3 Research on digitalization level

Looking into how previous studies measure the level of digitalization, we could not find a common approach. There were few previous studies on related subjects, and none of them calculated the digitalization level as we intend to do. Common to most of the studies, however, is the use of a Likert scale from 1 to 5 (1: "does not fit at all", 5: "fits perfectly") in their surveys when collecting information about how digitalized firms are. One of the studies assessed the level of digitalization as a singular construct developed for that study: in the survey, the respondents were asked about the concrete level of digitalization as a percentage, and a Likert scale was used for the rest of the questions (Niemand et al., 2020).

4.1.4 Profitability

Firstly, we wanted our measure of profitability to be comparable between the different companies in our study, so we decided that a profitability ratio would be best suited. We then had to choose one specific ratio to include in our model. There are several ratios for measuring profitability. Looking at relevant previous research, we found that Hamid, Abdullah, & Kamaruzzaman (2015) used return on equity (ROE) in their research on profitability and capital structure in family and non-family firms. Dogan (2013) chose return on assets (ROA) when studying firm size and profitability, and so did Malik (2011) and Andersson & Minnema (2018) in their studies on profitability.
Since ROA is a ratio that measures performance by comparing net income to the capital invested in assets, it also measures how productive and efficient the management is in its use of economic resources (Corporate Finance Institute, 2020). On this basis, we consider return on assets the most suitable profitability ratio for our study.

4.1.5 Number of observations

We want to collect enough data that our random sample generalizes to the target population, which in this case is all shipping firms in Norway. Since it can be difficult to know how many observations are needed for reliable conclusions, we looked at other research and their numbers of observations. Niemand et al. (2020, p. 5) received 102 responses to their survey, which constituted their final sample. Dong and Su (2010) used a sample of 130 firms, Salvi et al. (2021)'s sample consisted of 114 companies, and Ibem et al. (2018) used a sample of 75 firms. Based on previous research, we hoped to receive 100 responses to our survey, which would form the sample for our regression model.

4.1.6 Number of Dispatch

Niemand et al. (2020, p. 5) sent their survey to 300 addresses, retrieving a response rate of 34%. Ibem et al. (2018) sent their survey to 129 firms and received usable answers from 75 of the questionnaires, a response rate of 58%. Yehuda Baruch (1999) studied reasonable response rates in academic studies: in 1995 the average response rate was 48.4%, and he found that the response rate had been declining over the years (from an average of 55.6% in 1975-1995) (Baruch, 1999). Based on this, we hope to achieve a response rate of 35%. Given our wish for 100 responses, we would send the survey to at least 285 firms in the Norwegian shipping industry. This equals approximately 11% of the 2478 businesses registered within shipping in Norway.
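The dispatch arithmetic above can be checked directly; a minimal sketch of the calculation, using the 100-response target and 35% expected rate stated in the text:

```python
# Required dispatches = target responses / expected response rate,
# rounded up; then express this as a share of the 2478 registered firms.
import math

target_responses = 100
expected_rate = 0.35
dispatches = math.ceil(target_responses / expected_rate)
print(dispatches)                         # 286, i.e. at least 285 firms
print(round(dispatches / 2478 * 100, 1))  # roughly 11% of the industry
```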
4.2 Data Collection

4.2.1 Data on digitalization

A big challenge in our study is how to collect data on digitalization properly. Different approaches were considered, such as looking at companies' investments in IT solutions, interviews and surveys. In order to collect data as efficiently and accurately as we saw possible, we decided to make a survey asking about digitalization in different areas of the companies' operations.

4.2.2 Survey

Our survey was developed in collaboration with Kongsberg Maritime to ensure its quality. Previous studies have asked the respondents to specify their digitalization level, which, as those studies mention, is a question of interpretation. We want to make the responses more valid and remove some of the self-interpretation. Following the recommendation of the study by Niemand et al. (2019), we therefore included both digitalization tools and areas of operations in our survey. An overview of the structure of our survey is presented in the table below.

Table 1: The structure of our survey. The areas of operations were divided into three main areas with three subareas each.

Main area                        Subarea
Crew Management                  Crew
                                 Training & Welfare
                                 Document Handling
Fuel & Performance Management    Fuel Performance
                                 Fleet Tracking
                                 Voyage & Navigation
Maintenance & Operations         Maintenance
                                 Port Operations
                                 Logistics & Procurement

Under each subarea, the participants of the survey were asked to answer on a scale from 1 to 5, with an explanation of each number.

Figure 1: The five-point Likert scale used in the survey. The respondents were asked to rate their level of digitalization from 0-5 under each subarea.

To reduce the self-interpretation even more, we chose to write some examples under each subarea.
Explanations for the subarea Voyage & Navigation can be observed below.

Figure 2: A specific example of the five-point Likert scale, taken from the main area Fuel & Performance Management and the subarea Voyage & Navigation.

At the end of the survey, the following question was asked in order to map when digitalization in the different companies began: "If you have started digitalizing certain processes in your business, in what year did this digitalization start?"

4.2.3 Net Income, Total Assets, Company Age and Company Size

For our further calculations, we collected data on net income, total assets, company age and company size for the companies who answered our survey. We retrieved these data from the accounting figures published at proff.no.

4.3 Data Processing

4.3.1 Data Handling

To ensure that the collected data do not violate the Privacy Act and the GDPR, we used Qualtrics, which has a collaboration with BI Norwegian Business School. In addition, our survey did not ask about the respondents' names or positions in the company. In accordance with NSD, our survey did not collect IP addresses, and we therefore do not handle personal information.

4.3.2 Calculating the Level of Digitalization

The level of digitalization will have a value between 0 and 5, depending on the answers from the survey. The three main areas in our survey are weighted equally, and the subareas are weighted equally within each main area. Therefore, each main area receives a weight of 1/3, giving each subarea a weight of 1/9. Some of the areas are not relevant for all of the companies: if a company does not conduct business in one or more areas, it does not have to respond to that category. The unanswered areas are not included in the calculation of the total level of digitalization.
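The scoring rule in Section 4.3.2 can be sketched as follows; the respondent data are hypothetical, and `None` marks a subarea the company did not answer:

```python
# Nine equally weighted subareas (1/9 each); unanswered subareas are
# excluded, so the level reduces to the mean of the answered scores.
def digitalization_level(answers):
    answered = [a for a in answers.values() if a is not None]
    return sum(answered) / len(answered)

responses = {  # hypothetical respondent
    "Crew": 4, "Training & Welfare": 3, "Document Handling": 5,
    "Fuel Performance": 2, "Fleet Tracking": 4, "Voyage & Navigation": 3,
    "Maintenance": None, "Port Operations": None,  # not applicable
    "Logistics & Procurement": 2,
}
print(digitalization_level(responses))  # mean of the 7 answered subareas
```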
4.4 Variables

4.4.1 Return on Assets (ROA)

ROA = (Net Income / Total Assets) · 100

4.4.2 Digitalization
The level of digitalization takes a value between 0 and 5, depending on the answers to the survey. Since each subarea is equally weighted, the level of digitalization is the mean value of the answered areas.

4.5 Model

4.5.1 Multiple Linear Regression Model
A multiple regression examines the correlation between several independent variables and one dependent variable. The regression looks at the degree to which each independent variable predicts the dependent variable (Ross & Wilson, 2017, p. 49). In order for the estimators to be unbiased for the population parameters, a set of assumptions for the multiple linear regression model must be met (Wooldridge, 2015, p. 73). The assumptions are used to obtain exact statistical inference and to conclude that the OLS estimators have the smallest variance among all unbiased estimators (Wooldridge, 2015, p. xiv). Wooldridge (2015) has labelled these assumptions MLR.1 to MLR.6. MLR.1 and MLR.2 say that the model must be linear in parameters and that the sample must be a random sample of n observations, respectively. The third assumption states that there is no perfect collinearity, and the fourth requires that the error term has an expected value of zero conditional on the explanatory variables (zero conditional mean). MLR.5 says that the errors must be homoskedastic, while the last assumption requires the error term to be normally distributed (Wooldridge, 2015).

4.5.2 Dependent variable
In our study, we want to examine how the level of digitalization affects profitability. Therefore, profitability will be the dependent variable in our multiple regression model. As mentioned, we will measure profitability as return on assets.

4.5.3 Independent Variables
The main independent variable in this study is the level of digitalization, as it is the variable directly linked to our research question. We also want to add some control variables.
These will be the years since digitalization began, the age of the company and the size of the company. In our survey, we asked in which year the companies started to digitalize. We believe this is a highly relevant variable to include in our model, since previous research has shown that such investments can have a negative correlation with profitability in the first years, and thereafter a positive correlation (Koskinen & Manninen, 2019). Based on previous research, we believe that the age of the company is an important control variable. A study by Loderer and Waelchli (2010, pp. 32-33) found a highly significant negative relation between company age and profitability, while Haykir and Çelik (2018, p. 135) found a convex relation between the two measures. In our study, age is measured as the number of years since the company was founded. The same applies to the size of the company: studies show inconsistent conclusions. Hall and Weiss (1967) found that size tends to result in high profit rates. Niresh and Velnampy (2014, p. 63) found a weak positive relation between profitability and company size, and so did Babalola (2013, p. 92). In our study, the size of a company is measured as the number of employees. To make sure we do not add irrelevant control variables, we will include five different models later in this study, where we eliminate one or more of the independent variables that are not directly linked to our research question.
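Regressions combining these variables (specified formally in the next subsection) can be estimated by ordinary least squares. The sketch below is a hedged Python illustration with entirely made-up, deterministic data (the thesis itself uses MatLab and the real survey responses); it only shows how dropping control variables produces nested specifications such as Models 1 and 5.

```python
import numpy as np

n = 39  # same sample size as the survey
idx = np.arange(n)

# Deterministic stand-ins for the regressors (hypothetical values).
lod  = 1 + (idx * 3 % 8) / 2.0           # level of digitalization, 1-4.5
yrs  = (idx % 10).astype(float)          # years since digitalization began
age  = (idx * 7 % 50 + 1).astype(float)  # company age
size = (idx ** 2 % 97 + 2).astype(float) # company size (employees)

def ols(y, *regressors):
    """OLS fit with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y)), *regressors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Noise-free ROA built from known coefficients, so OLS recovers them.
true = np.array([-0.15, 0.09, 0.004, -0.001, -0.0008])
roa = true[0] + true[1]*lod + true[2]*yrs + true[3]*age + true[4]*size

b_model1 = ols(roa, lod, yrs, age, size)  # all controls included
b_model5 = ols(roa, lod)                  # simple regression on LoD only
```

On noise-free data the full specification recovers the chosen coefficients exactly; with real data, the five fits would of course differ, as the results in chapter 5 show.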
4.5.4 Models

Model 1: ROA_{i,2019} = β0 + β1·LoD_i + β2·Y_{i,2019} + β3·A_{i,2019} + β4·S_{i,2019} + ε_i
Model 2: ROA_{i,2019} = β0 + β1·LoD_i + β2·Y_{i,2019} + β3·S_{i,2019} + ε_i
Model 3: ROA_{i,2019} = β0 + β1·LoD_i + β2·Y_{i,2019} + β3·A_{i,2019} + ε_i
Model 4: ROA_{i,2019} = β0 + β1·LoD_i + β2·Y_{i,2019} + ε_i
Model 5: ROA_{i,2019} = β0 + β1·LoD_i + ε_i

where
ROA_{2019} = return on assets in 2019
LoD = level of digitalization
Y_{2019} = years since the digitalization started, as of 2019
A_{2019} = company age in 2019
S_{2019} = size of the company in 2019

Here, we can observe that our models meet assumption MLR.1. Our analysis has 39 random observations retrieved from the survey, which meets assumption MLR.2. As mentioned, we wanted to retrieve over 100 observations, based on calculations from previous similar research, but were unable to collect that many. If a sample size is not very large, the t distribution can be a poor approximation to the distribution of the t-statistics when the error term is not normally distributed. However, how large a sample must be for the approximation to be good enough is not well defined. Some econometricians state that a sample of 30 random observations is satisfactory, depending on the distribution of u (Wooldridge, 2015, p. 157).

4.5.5 Simple Linear Regression Model
Above, we can see that Model 5 only considers the relationship between two variables, namely profitability and the level of digitalization. This is called a simple linear regression model (Wooldridge, 2015, p. 20). A simple linear regression model needs to meet the same assumptions as the multiple linear regression model, except for the third. SLR.3 states that there must be sample variation in the explanatory variable (Wooldridge, 2015, p. 42).

4.5.6 Robust Linear Regression
Another linear regression model one can use is the robust linear regression.
This is an approach that can be useful when the assumptions above are violated, for example by outliers or non-normally distributed errors (Freund, Wilson, & Sa, 2006, p. 156). The method of robust estimation is called the iteratively reweighted least squares (IWLS) procedure. This procedure attempts to reduce the influence of observations with large residuals (Freund, Wilson, & Sa, 2006, p. 157). If we discover that one or more of the assumptions above are violated, we will use a robust linear regression to see if it gives a better fit to the data.

4.6 Hypotheses & Hypothesis Testing

4.6.1 Hypothesis Testing
Since we would like to test a single parameter in the multiple regression model, we have to perform a t-test. The t-test indicates whether the level of digitalization is statistically associated with profitability (ROA) or not. Under the test-of-significance approach, the null hypothesis is not rejected if the test statistic lies within the non-rejection area (Brooks, 2014, p. 106). The non-rejection area, or the critical values, will be calculated in MatLab using T = 39 for all five models, with k = 5 for Model 1, k = 4 for Model 2 and Model 3, k = 3 for Model 4 and k = 2 for Model 5, at a suitable level of significance.

4.6.2 The Hypotheses
In the hypothesis testing framework, there are always two hypotheses, namely the null hypothesis and the alternative hypothesis. The null hypothesis is the one actually being tested, while the alternative hypothesis represents the remaining outcomes (Brooks, 2014, p. 99). For our research question, we have the following hypotheses, equal for all five models:

The Null Hypothesis
H0: There is no correlation between ROA and the level of digitalization.
H0: β1 = 0

The Alternative Hypothesis
HA: There is either a negative or a positive correlation between ROA and the level of digitalization.
HA: β1 ≠ 0

Since there is no previous research on the relationship between digitalization and profitability in the Norwegian shipping industry, a two-sided test is the most suitable for our study.

4.6.3 Significance Level
We have to consider the suitable level of significance thoroughly, since this choice could strongly affect the conclusion of our study. It is also important to be impartial and objective, especially since this choice has been shown to be arbitrary, depending on the investigator's desire to reject or accept a hypothesis (Keuzenkamp & Magnus, 1995, p. 20). It is also wise to consider both Type I and Type II errors. A Type I error refers to rejecting the null hypothesis when it is true, while a Type II error refers to not rejecting the null hypothesis when it is false (Brooks, 2014, p. 110). If the Norwegian shipping industry finds this study interesting, the consequence of a Type I error is that digitalization seems more profitable than it really is. The consequence of a Type II error, on the other hand, is that digitalization appears to be undesirable when it actually has a positive effect on profitability. These consequences must also be taken into consideration when choosing a suitable level of significance. We consider Type I and Type II errors to be equally important, and the expected losses from the two errors to be the same. For our test, we want the power to be as high as possible. Studies show that when the expected losses of Type I and Type II errors are identical, one can balance the probabilities of committing the two types of errors. Balancing these, one can obtain a high power of the test while taking the sample size into consideration (Kim, 2015, p. 3). Kim (2015) presents the following table:

Table 2: Level of significance
Table 2 shows the level of significance when taking the sample size into consideration.
The first column presents the sample size, the second and third columns present, respectively, the probability of a Type I error (α) and the probability of a Type II error (β), while the fourth column presents the power of the test. The table is adapted from Kim (2015, p. 11).

Sample size   α      β      Power of the test
10            0.35   0.35   0.65
50            0.19   0.19   0.81
100           0.11   0.11   0.89
200           0.04   0.04   0.96

In comparison, if the sample size is 50 and the level of significance (α) is set to 5%, the probability of a Type II error is 45%, resulting in a power of only 55% (Kim, 2015, p. 11). Given our sample size of 39 observations, we are closest to a sample size of 50 in the table above. We therefore set our significance level to 19%, giving our test a power of 81%.

4.7 Multicollinearity
In any practical context, the correlation between the independent variables will be non-zero, and there will be some degree of association between them. When this association, or correlation, becomes very high, we call it multicollinearity (Brooks, 2014, p. 217). Multicollinearity is a problem because it undermines the statistical significance of the explanatory variables (Allen, 1997, p. 176). One way to detect it is to look at the matrix of correlations between the individual independent variables; multicollinearity would be visible as a high correlation between two of the explanatory variables (Brooks, 2014, p. 218). Given our models, we need to look at the correlation matrix of the level of digitalization, company age, company size and the number of years since the digitalization started. In addition, we can detect multicollinearity by using Variance Inflation Factors, the so-called VIF test. As an example, the VIF for slope coefficient 1 is

VIF_1 = 1 / (1 − R_1²),

which is the term in Var(β̂_1) that is determined by the correlation between the level of digitalization (LoD) and the other independent variables.
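The VIF formula translates directly into code. The sketch below is a Python illustration (not the thesis's MatLab workflow): it computes VIF_j by running the auxiliary regression of regressor j on the remaining regressors and applying 1/(1 − R_j²), using toy data chosen so that the expected values are known.

```python
import numpy as np

def vif(X, j):
    """Variance Inflation Factor of column j in regressor matrix X.

    Auxiliary regression: x_j on an intercept plus the other
    columns; VIF_j = 1 / (1 - R_j^2), as in section 4.7.
    """
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    tss = ((y - y.mean()) ** 2).sum()
    r_squared = 1 - (resid ** 2).sum() / tss
    return 1.0 / (1.0 - r_squared)

# Two toy regressors constructed to be uncorrelated: VIF is exactly 1.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([1.0, -1.0, -1.0, 1.0])
X = np.column_stack([a, b])
```

With uncorrelated columns the VIF is 1; it grows without bound as the auxiliary R² approaches 1, which is why a cutoff such as 10 is used.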
As a guideline, the value 10 is often chosen as a limit (Wooldridge, 2015, p. 86). If multicollinearity is absent, there is certainly no perfect collinearity, so our sample meets assumption MLR.3. Discovering multicollinearity, on the other hand, does not necessarily mean perfect collinearity; if multicollinearity is detected, we need to test whether it amounts to perfect collinearity. Since the only difference between our five models is that we exclude one or more control variables, we only need to do this once, for the specification where all of the variables are included. The solution to the problem will vary, depending on the impact the multicollinearity has on the outcome of the model. One can either ignore it, drop one of the collinear variables, or transform the highly correlated variables into a ratio (Brooks, 2014, p. 219). If multicollinearity occurs between the size and the age of the company, the simplest solution would be to eliminate the models containing both variables.

4.8 Omission of an important variable
A problem that can easily occur is the omission of an important variable. In our original model we have chosen to include the level of digitalization, years since digitalization began, company age and company size. There are several more variables that could explain differences in profitability. The consequence of excluding an important independent variable is that the estimated coefficients on all the other variables will be biased and inconsistent, unless the excluded variable is uncorrelated with all of the other explanatory variables (Brooks, 2014, p. 225). In that case, only the constant term would be biased. When dealing with this issue, it can be useful to look at the adjusted R-squared. Since our dependent variable is profitability, more specifically ROA, it is likely that we exclude important variables, as many different factors affect profitability.
We therefore expect the R-squared and the adjusted R-squared to be low. In our study, we want to examine the effect digitalization has on profitability. As long as the excluded variables do not correlate with the other variables in our models, the omission of an important variable would only affect the constant term, and therefore not affect the conclusion of the research question.

4.9 Zero conditional mean and normality
Assumptions MLR.4 and MLR.6 say that the error terms must have a zero population mean and be normally distributed. This can be tested by plotting the error terms in a histogram. If the mean is centered around zero, and the histogram does not show any skewness, these assumptions are met (Andersson & Minnema, 2018, p. 33).

4.10 Homoskedasticity
Since we have collected data from firms that differ in size, it is possible that the larger firms have more factors affecting the error terms than the smaller ones, so that their error terms are larger (Newbold, Carlson, & Thorne, 2013, p. 578) and heteroskedasticity can occur. A consequence of heteroskedasticity being present in our data is that our analysis may not be valid. It is therefore important to check whether our model meets assumption MLR.5. One way to do this is to plot the residuals against the independent variable and the predicted values from the regression. If the plot does not show any systematic relationship between the errors and the independent variable, there is no evidence of nonuniform variance (Newbold, Carlson, & Thorne, 2013, p. 578). We can also check our model for autoregressive conditional heteroskedasticity (ARCH) disturbances, introduced by Engle. To test for such disturbances, it is recommended to use the Lagrange multiplier test (Lee, 1991), which we will conduct in addition to inspecting the residual plot.
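A minimal version of the ARCH Lagrange multiplier test can be sketched as follows. This Python illustration (the thesis runs the test in MatLab) regresses the squared residual on its first lag and forms the statistic LM = n·R², which under the null of no ARCH effects is compared with a chi-squared critical value (3.841 at the 5% level with one degree of freedom).

```python
import numpy as np

CHI2_95_DF1 = 3.841  # 5% critical value, chi-squared, 1 degree of freedom

def arch_lm(residuals):
    """ARCH(1) Lagrange multiplier statistic.

    Auxiliary regression: e_t^2 on an intercept and e_{t-1}^2.
    Returns LM = n * R^2 of that regression; large values are
    evidence of conditional heteroskedasticity.
    """
    e2 = np.asarray(residuals, dtype=float) ** 2
    y, x = e2[1:], e2[:-1]
    X = np.column_stack([np.ones(len(y)), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ coef) ** 2).sum()
    tss = ((y - y.mean()) ** 2).sum()
    r_squared = 0.0 if tss == 0 else 1 - rss / tss
    return len(y) * r_squared
```

An LM value below the critical value means the null of no conditional heteroskedasticity is not rejected, which is the conclusion reached for our models in chapter 5.
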
4.11 Sample Variation in the Explanatory Variable
Assumption SLR.3 for Model 5 can easily be tested by calculating the sample standard deviation of the level of digitalization. As long as it is not zero, assumption SLR.3 holds.

Chapter 5: Empirical Results & Analysis

5.1 Descriptive Statistics

Table 3: Descriptive Statistics
Table 3 shows the descriptive statistics of our data, collected from proff.no and the survey sent to companies within the Norwegian shipping industry. It contains data from 39 observations, including ROA, the level of digitalization, years since the digitalization started, and the age and size of the company, all calculated for 2019. The descriptive statistics shown are the mean, the standard deviation, the minimum value and the maximum value.

Variable      Mean       Std. Dev.   Min        Max
ROA_2019      0.010858   0.26164     -0.94943   0.60185
LoD           2.7435     0.72142     1.2222     4.25
Y             6.1282     7.0977      0          33
A             21.179     20.36       1          107
S             59.949     78.812      2          328

From Table 3, we observe an average ROA for the companies in our data of 0.010858, or approximately 1.09%. There are large fluctuations between the different companies, which appear in the standard deviation, the minimum value and the maximum value. Earlier, we explained that the shipping industry is a volatile industry, but this table suggests that profitability depends on much more than the market: these companies do business in the same market, and the numbers are retrieved from the same year (2019). We can also observe that assumption SLR.3 for Model 5 holds. As mentioned in chapter 4.10, it is important to test our data for heteroskedasticity. Since a wider range between the firms in a dataset could mean a larger possibility of heteroskedasticity, this is especially important in our study: the ranges of company age and size in the table show huge differences across our data.
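Summary statistics of the kind reported in Table 3 can be reproduced in a few lines. The sketch below uses Python's standard library (the thesis computed them in MatLab) on a small made-up sample, not the survey data.

```python
import statistics

def describe(values):
    """Mean, sample standard deviation, min and max --
    the four columns reported in Table 3."""
    return {
        "mean": statistics.mean(values),
        "std": statistics.stdev(values),  # sample (n-1) standard deviation
        "min": min(values),
        "max": max(values),
    }

# Hypothetical LoD scores for five firms (not the survey data).
lod_sample = [1.5, 2.0, 2.5, 3.5, 4.0]
stats = describe(lod_sample)
```

A non-zero sample standard deviation of LoD is also exactly the check required for assumption SLR.3.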
Previously, we discussed that the shipping industry is a conservative industry. The companies in our study hold an average level of digitalization of 2.74. Since we have no corresponding figures from firms in other industries, we cannot compare this number with anything outside of this paper.

5.2 Test for Multicollinearity
As explained in chapter 4.7, we can test for multicollinearity by looking at the matrix of correlations between the individual independent variables, in addition to conducting a VIF test.

Table 4: Correlation Matrix
Table 4 shows the correlation between the level of digitalization, years since the digitalization started, the age of the company and the size of the company. The correlation matrix is based on 39 observations, the respondents to our survey. The table shows the correlation between the variable on the left-hand side and the corresponding variable in the top row.

       LoD       Y         A         S
LoD    1
Y      0.2360    1
A      -0.0265   0.1940    1
S      -0.1201   0.1896    0.0561    1

The table above shows the correlation matrix calculated in MatLab. The highest correlation is between the level of digitalization and years since the digitalization started, at 0.2360. This is not considered a high correlation, indicating that multicollinearity is not present in our data.

Table 5: VIF test
Table 5 shows the Variance Inflation Factors of the level of digitalization, years since the digitalization started, the age of the company and the size of the company. The VIF test is performed to detect multicollinearity in the data, and the limit value used is 10. The first row presents the variable, and the second row presents the Variance Inflation Factor.

Variable   LoD      Y        A        S
VIF        1.0975   1.1622   1.0452   1.0693

From Table 5, we observe that the variance inflation factors are all below our chosen limit, and by a good margin.
Both methods for detecting multicollinearity indicate that multicollinearity is not an issue in our models, and we are therefore confident in stating that multicollinearity is not present in our data. This also indicates that our sample meets assumption MLR.3.

5.3 Residuals
For assumptions MLR.4 and MLR.6 to be met, the error terms must have a zero population mean and be normally distributed. First, we calculated the mean value of the residuals:

Table 6: Residuals population mean
This table shows the mean of the residuals for Models 1 to 5. The first column presents the model, and the second column presents the mean of the residuals.

Model     Residuals mean
Model 1   -2.9179e-17
Model 2   -4.3413e-17
Model 3   -5.2664e-17
Model 4   2.1250e-17
Model 5   8.9672e-17

From Table 6, we can observe that the mean of the residuals is numerically zero for all five models, meaning that assumption MLR.4 is met for all of our models. To check for normality, we plotted the residuals in five histograms.

Figure 3: Histograms of the residuals. From the left: Model 1, Model 2, Model 3, Model 4 and Model 5.

Here, we can observe that none of the histograms looks normally distributed. Therefore, we decided to conduct an additional test: the one-sample Kolmogorov-Smirnov test. The Kolmogorov-Smirnov test, a goodness-of-fit test, tests whether the residuals could reasonably have come from a normal distribution (IBM, 2021). The Kolmogorov-Smirnov test rejected the null hypothesis, which states that the data come from a standard normal distribution, for all of the models. In other words, our data do not meet assumption MLR.6. Therefore, we will conduct a robust linear regression to see if it is a better fit.
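The one-sample Kolmogorov-Smirnov statistic measures the largest vertical gap between the empirical CDF of the residuals and the standard normal CDF. The bare-bones Python sketch below illustrates that statistic only (statistical packages provide the full test with p-values ready-made, and the thesis used such a packaged version):

```python
import math

def standard_normal_cdf(x):
    """Phi(x), expressed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ks_statistic(sample):
    """One-sample KS statistic D against the standard normal.

    D is the largest vertical distance between the empirical CDF
    and Phi; the test rejects normality when D is too large for
    the sample size. (Critical values / p-values are omitted
    in this sketch.)
    """
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = standard_normal_cdf(x)
        # Compare Phi(x) with the empirical CDF just before and
        # just after the jump at x.
        d = max(d, abs((i + 1) / n - cdf), abs(i / n - cdf))
    return d
```

The larger D is, the stronger the evidence against normality of the residuals, which is what led to the rejection of MLR.6 above.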
5.4 Test for heteroskedasticity
As mentioned in chapter 4.10, we have to check whether heteroskedasticity is present in our models. First, we plotted the residuals against the fitted values of our models.

Figure 4: Plot of residuals vs. fitted values. From the left: Model 1, Model 2 and Model 3.
Figure 5: Plot of residuals vs. fitted values. From the left: Model 4 and Model 5.

Here, there is no obvious systematic relationship, which implies no heteroskedasticity. As an additional test, we performed Lagrange multiplier tests to check for ARCH disturbances. The results indicate that we should not reject the null hypothesis of no conditional heteroskedasticity, and we therefore conclude that our sample meets assumption MLR.5.

5.5 Regression Models
In the tables below, we can observe the results of our multiple linear regressions on the 39 observations. The coefficient of each independent variable gives the change in ROA for a one-unit increase in that explanatory variable, holding the other independent variables constant.

5.5.1 Regression Model 1

Table 7: Model 1
Table 7 shows the results of the multiple linear regression for Model 1, based on 39 observations. The table shows the coefficient, the standard error, the t-statistic and the p-value of the intercept, the level of digitalization, years since the digitalization started, the age of the company and the size of the company. In addition, the R-squared and the adjusted R-squared for Model 1 are presented.

              Coefficient   Std. Err.   t-stat    p-value
Intercept     -0.1467       0.1795      -0.8172   0.4195
LoD           0.0734        0.0608      1.2085    0.2352
Y             0.0047        0.0064      0.7379    0.4657
A             -0.0013       0.0021      -0.6119   0.5447
S             -0.0008       0.0005      -1.3801   0.1766
R²            0.13
Adjusted R²   0.0279

Model 1 suggests that both the age and the size of a company in the Norwegian shipping industry have a negative impact on profitability.
On the other hand, the level of digitalization and years since digitalization started have a positive impact on profitability, judging by the coefficients alone and not taking the p-values into account. Using this model, one could only claim a positive relationship between the level of digitalization and profitability at a significance level as high as 23.5%. Since this is above our chosen significance level, this regression finds no significant relationship between profitability and the level of digitalization. At a significance level of 19%, the only relationship this regression reveals is that size has a minor negative impact on profitability. In addition, we can observe that the R-squared is 0.13 and the adjusted R-squared is 0.028. As expected, these are quite low. If we had included more relevant control variables, these figures might have been higher, but with our research question in mind, we decided not to include too many control variables.

5.5.2 Regression Model 2

Table 8: Model 2
Table 8 shows the results of the multiple linear regression for Model 2, based on 39 observations. The table shows the coefficient, the standard error, the t-statistic and the p-value of the intercept, the level of digitalization, years since the digitalization started and the size of the company. In addition, the R-squared and the adjusted R-squared for Model 2 are presented.

              Coefficient   Std. Err.   t-stat    p-value
Intercept     -0.1765       0.1712      -1.0311   0.3096
LoD           0.0762        0.0601      1.2682    0.2131
Y             0.0039        0.0062      0.6341    0.5301
S             -0.0008       0.0005      -1.3971   0.1712
R²            0.121
Adjusted R²   0.0453

The results for Model 2 show that the p-value for LoD has decreased slightly compared to Model 1. The significance level would have to be 21.3% for the relationship between profitability and the level of digitalization to be statistically significant. As in Model 1, this is above our chosen level; in other words, this regression finds no significant relationship between the level of digitalization and profitability.
As in Model 1, the only significant variable at our level is the size of the company. For Model 2, where the age of the company is excluded, the R-squared is somewhat lower than for Model 1. Since models with more variables often have a higher R-squared, this is as expected. Interestingly, the adjusted R-squared of Model 2 is higher than that of Model 1, which may indicate that this model is better suited to our data.

5.5.3 Regression Model 3

Table 9: Model 3
Table 9 shows the results of the multiple linear regression for Model 3, based on 39 observations. The table shows the coefficient, the standard error, the t-statistic and the p-value of the intercept, the level of digitalization, years since the digitalization started and the age of the company. In addition, the R-squared and the adjusted R-squared for Model 3 are presented.

              Coefficient   Std. Err.   t-stat    p-value
Intercept     -0.2194       0.1738      -1.2623   0.2152
LoD           0.0878        0.0606      1.4488    0.1563
Y             0.0028        0.0063      0.4395    0.6630
A             -0.0013       0.0021      -0.6139   0.5433
R²            0.0815
Adjusted R²   0.0028

Looking at Model 3, the p-value of the level of digitalization, 15.6%, is now below our chosen significance level of 19%. In other words, Model 3 finds a significant positive relationship between the level of digitalization and profitability. According to this regression, ROA increases by 8.8 percentage points for each one-unit increase in the level of digitalization as defined in our survey. With size excluded from the model, this is the only significant relationship. When we exclude size, we also observe that the adjusted R-squared drops considerably compared to both Model 1 and Model 2, to only 0.0028.

5.5.4 Regression Model 4

Table 10: Model 4
Table 10 shows the results of the multiple linear regression for Model 4, based on 39 observations.
The table shows the coefficient, the standard error, the t-statistic and the p-value of the intercept, the level of digitalization and years since the digitalization started. In addition, the R-squared and the adjusted R-squared for Model 4 are presented.

              Coefficient   Std. Err.   t-stat    p-value
Intercept     -0.2499       0.1651      -1.514    0.1388
LoD           0.0907        0.0599      1.5129    0.1391
Y             0.002         0.0061      0.3226    0.7488
R²            0.0716
Adjusted R²   0.0201

In this regression, both the size and the age of the company are excluded. As in Model 3, the relationship between the level of digitalization and profitability is significant at a significance level of 19%. The p-value of LoD has decreased further and is now 13.9%. The relationship between profitability and years since the digitalization started is less significant than in all of the previous models, with a p-value as high as 74.9%. The adjusted R-squared, 0.02, is higher than for Model 3 but lower than for both Model 1 and Model 2.

5.5.5 Regression Model 5

Table 11: Model 5
Table 11 shows the results of the simple linear regression for Model 5, based on 39 observations. The table shows the coefficient, the standard error, the t-statistic and the p-value of the intercept and the level of digitalization. In addition, the R-squared and the adjusted R-squared for Model 5 are presented.

              Coefficient   Std. Err.   t-stat    p-value
Intercept     -0.2504       0.1631      -1.5357   0.1331
LoD           0.0952        0.0575      1.6554    0.1063
R²            0.069
Adjusted R²   0.0438

Here we have excluded all variables except the one directly linked to our research question, namely the level of digitalization, and are left with a simple linear regression. The p-value of the level of digitalization has decreased even further, to 10.6%. At our chosen significance level of 19%, this relationship is significant.
This model suggests that if the level of digitalization increases by one unit, ROA increases by approximately 9.5 percentage points. In addition, the adjusted R-squared has increased to 0.044, the second highest after Model 2, which has an adjusted R-squared of 0.0453.

5.6 Robust Linear Regression Model
Using MatLab's robust regression option, we fitted a robust regression model based on Model 1.

5.6.1 Model fit
We wanted to see whether the robust model fits our data better than the linear fit of Model 1. We therefore plotted both models and examined the residuals, as seen below.

Figure 6: Plot of the model fit vs. the residuals. The left-hand side shows the linear fit of Model 1, while the right-hand side shows the robust fit.

Here, we can see that the robust fit appears slightly better, since it is not affected by the observations with large residuals to the same extent as the linear fit.

5.6.2 Outliers
In MatLab, we can identify which observations are flagged as outliers:

Outliers = 9, 11, 22, 24, 33

The robust regression reduces the weight of the outlier observations in the regression. This is shown below.
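The reweighting idea behind the robust fit can be illustrated with a small iteratively reweighted least squares loop using Huber weights. This is a generic Python sketch of the IWLS principle from section 4.5.6, not a reproduction of MatLab's implementation; the tuning constant 1.345 is the conventional Huber default, and the ten data points are made up.

```python
import numpy as np

def irls_huber(X, y, c=1.345, n_iter=50):
    """Robust linear fit by iteratively reweighted least squares.

    Observations with large scaled residuals get Huber weight
    c/|u| < 1, shrinking the influence of outliers; well-fitting
    observations keep weight 1. X must already contain an
    intercept column.
    """
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS starting point
    for _ in range(n_iter):
        r = y - X @ beta
        # Robust scale estimate from the median absolute deviation.
        s = np.median(np.abs(r - np.median(r))) / 0.6745
        if s == 0:
            s = 1.0
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / np.maximum(u, 1e-12))
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta

# Nine points on the line y = 2x plus one large outlier.
x = np.arange(10, dtype=float)
X = np.column_stack([np.ones(10), x])
y = 2.0 * x
y[-1] += 100.0

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_rob = irls_huber(X, y)
```

The OLS slope is dragged far above 2 by the single outlier, while the robust slope stays close to 2, mirroring how the robust fit in Figure 6 downweights the flagged observations.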