Dataset schema (column name: dtype, observed range or class count):
url: string, lengths 6 to 1.61k
fetch_time: int64, 1,368,856,904B to 1,726,893,854B
content_mime_type: string, 3 classes
warc_filename: string, lengths 108 to 138
warc_record_offset: int32, 9.6k to 1.74B
warc_record_length: int32, 664 to 793k
text: string, lengths 45 to 1.04M
token_count: int32, 22 to 711k
char_count: int32, 45 to 1.04M
metadata: string, lengths 439 to 443
score: float64, 2.52 to 5.09
int_score: int64, 3 to 5
crawl: string, 93 classes
snapshot_type: string, 2 classes
language: string, 1 class
language_score: float64, 0.06 to 1
https://www.jiskha.com/questions/352867/Please-help-me-Sorry-there-are-4-questions-I-need-to-check-my-attempts-with-to-see
1,534,918,717,000,000,000
text/html
crawl-data/CC-MAIN-2018-34/segments/1534221219495.97/warc/CC-MAIN-20180822045838-20180822065838-00694.warc.gz
877,306,256
5,424
# algebra

Please help me! Sorry there are 4 questions I need to check my attempts with to see if I am on track. Thanks!!

1. 5 - 3[2(3x - 4) + 3]

2. Evaluate when x = -3 and y = 2: (3x^2 - 4y^2) / (4x - 7y)

3. (-3/4)y = -24

4. (m + 1)/4 + 5/6 = (2m - 1)/12

1. 5 - 3[2(3x - 4) + 3]. Start from the inside and work outward: 5 - 3(6x - 8 + 3) = 5 - 3(6x - 5) = 5 - 18x + 15 = 20 - 18x.

2. (3x^2 - 4y^2)/(4x - 7y). Just plug in the numbers and calculate. (x^2 = x squared online.)

3. Multiply both sides by -4/3.

4. From the way you express the equation, I don't know which numbers belong where. (m + 1)/4 + 5/6 = (2m - 1)/12?

posted by PsyDAG
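A worked check of items 1 and 4 (a sketch of the algebra only; item 4 assumes the bracketing PsyDAG suggests):

\[
5 - 3\bigl[2(3x-4)+3\bigr] = 5 - 3(6x-5) = 20 - 18x
\]
\[
\frac{m+1}{4} + \frac{5}{6} = \frac{2m-1}{12}
\;\Longrightarrow\; 3(m+1) + 10 = 2m - 1
\;\Longrightarrow\; m = -14.
\]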
787
2,629
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.765625
4
CC-MAIN-2018-34
latest
en
0.90194
https://numbermatics.com/n/2740/
1,623,874,383,000,000,000
text/html
crawl-data/CC-MAIN-2021-25/segments/1623487626008.14/warc/CC-MAIN-20210616190205-20210616220205-00259.warc.gz
387,683,547
6,031
# 2740

## 2,740 is an even composite number composed of three prime numbers multiplied together.

What does the number 2740 look like? This visualization shows the relationship between its 3 prime factors (large circles) and 12 divisors.

2740 is an even composite number. It is composed of three distinct prime numbers multiplied together. It has a total of twelve divisors.

## Prime factorization of 2740:

### 2^2 × 5 × 137 (2 × 2 × 5 × 137)

See below for interesting mathematical facts about the number 2740 from the Numbermatics database.

### Names of 2740

• Cardinal: 2740 can be written as Two thousand, seven hundred forty.

### Scientific notation

• Scientific notation: 2.74 × 10^3

### Factors of 2740

• Number of distinct prime factors ω(n): 3
• Total number of prime factors Ω(n): 4
• Sum of prime factors: 144

### Divisors of 2740

• Number of divisors d(n): 12
• Complete list of divisors: 1, 2, 4, 5, 10, 20, 137, 274, 548, 685, 1370, 2740
• Sum of all divisors σ(n): 5796
• Sum of proper divisors (its aliquot sum) s(n): 3056
• 2740 is an abundant number, because the sum of its proper divisors (3056) is greater than itself. Its abundance is 316

### Bases of 2740

• Binary: 101010110100 (base 2)
• Base-36: 244

### Squares and roots of 2740

• 2740 squared (2740^2) is 7507600
• 2740 cubed (2740^3) is 20570824000
• The square root of 2740 is 52.3450093133
• The cube root of 2740 is 13.9931939707

### Scales and comparisons

How big is 2740?
• 2,740 seconds is equal to 45 minutes, 40 seconds.
• To count from 1 to 2,740 would take you about forty-five minutes. This is a very rough estimate, based on a speaking rate of half a second every third order of magnitude. If you speak quickly, you could probably say any randomly-chosen number between one and a thousand in around half a second. Very big numbers obviously take longer to say, so we add half a second for every extra x1000. (We do not count involuntary pauses, bathroom breaks or the necessity of sleep in our calculation!)
• A cube with a volume of 2740 cubic inches would be around 1.2 feet tall.

### Recreational maths with 2740

• 2740 backwards is 0472
• The number of decimal digits it has is: 4
• The sum of 2740's digits is 13
• More coming soon!
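These divisor totals follow directly from the prime factorization; as a quick check:

\[
\sigma(2740) = (1+2+2^2)(1+5)(1+137) = 7 \cdot 6 \cdot 138 = 5796,
\qquad
s(2740) = 5796 - 2740 = 3056,
\]

and since s(2740) = 3056 > 2740, the number is abundant, with abundance 3056 - 2740 = 316.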
615
2,188
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.65625
4
CC-MAIN-2021-25
latest
en
0.879769
http://www.numbersaplenty.com/1012011
1,597,317,406,000,000,000
text/html
crawl-data/CC-MAIN-2020-34/segments/1596439738982.70/warc/CC-MAIN-20200813103121-20200813133121-00013.warc.gz
159,579,681
3,587
1012011 = 3 × 7 × 11 × 13 × 337

Base representations:
• binary: 11110111000100101011
• base 3: 1220102012220
• base 4: 3313010223
• base 5: 224341021
• base 6: 33405123
• base 7: 11413320
• octal: 3670453
• base 9: 1812186
• base 10: 1012011
• base 11: 631380
• base 12: 4097a3
• base 13: 295830
• base 14: 1c4b47
• base 15: 14ecc6
• hex: f712b

1012011 has 32 divisors (see below), whose sum is σ = 1817088. Its totient is φ = 483840.

The previous prime is 1012009. The next prime is 1012031.

The reversal of 1012011 is 1102101.

Adding to 1012011 its reverse (1102101), we get a palindrome (2114112).

Multiplying 1012011 by its reverse (1102101), we get a palindrome (1115338335111).

It can be divided in two parts, 101 and 2011, that added together give a palindrome (2112).

It is not a de Polignac number, because 1012011 - 2^1 = 1012009 is a prime.

1012011 is a lucky number.

It is not an unprimeable number, because it can be changed into a prime (1012031) by changing a digit.

It is a polite number, since it can be written in 31 ways as a sum of consecutive naturals, for example, 2835 + ... + 3171.

It is an arithmetic number, because the mean of its divisors is an integer number (56784).

2^1012011 is an apocalyptic number.

1012011 is a gapful number since it is divisible by the number (11) formed by its first and last digit.

1012011 is a deficient number, since it is larger than the sum of its proper divisors (805077).

1012011 is a wasteful number, since it uses fewer digits than its factorization.

1012011 is an evil number, because the sum of its binary digits is even.

The sum of its prime factors is 371.

The product of its (nonzero) digits is 2, while the sum is 6.

The square root of 1012011 is about 1005.9875744759. The cubic root of 1012011 is about 100.3987743431.

The spelling of 1012011 in words is "one million, twelve thousand, eleven".
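Because 1012011 is squarefree with five prime factors, the quoted totals are easy to verify from the factorization:

\[
\sigma(1012011) = 4 \cdot 8 \cdot 12 \cdot 14 \cdot 338 = 1817088,
\qquad
\varphi(1012011) = 2 \cdot 6 \cdot 10 \cdot 12 \cdot 336 = 483840,
\]

and d(n) = 2^5 = 32 divisors, whose mean σ/d = 1817088/32 = 56784 is the integer cited above.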
528
1,756
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.484375
3
CC-MAIN-2020-34
latest
en
0.865191
http://www.java-gaming.org/index.php?topic=29480.msg270663;topicseen
1,467,117,222,000,000,000
text/html
crawl-data/CC-MAIN-2016-26/segments/1466783396875.58/warc/CC-MAIN-20160624154956-00111-ip-10-164-35-72.ec2.internal.warc.gz
630,506,345
22,244
Bayer ordered dithering (Read 4974 times)

Porlus « Posted 2013-05-04 16:39:08 »

Hi all, I'm attempting to add a stylised ordered dithering effect to an image, although I've been met with unexpected results. I have an 8 by 8 threshold matrix as follows:

int[] dither = new int[] { 1, 49, 13, 61, 4, 52, 16, 64,
        33, 17, 45, 29, 36, 20, 48, 32,
        9, 57, 5, 53, 12, 60, 8, 56,
        41, 25, 37, 21, 44, 28, 40, 24,
        3, 51, 15, 63, 2, 50, 14, 62,
        35, 19, 47, 31, 34, 18, 46, 30,
        11, 59, 7, 55, 10, 58, 6, 54,
        43, 27, 39, 23, 42, 26, 38, 22 };

I just found this on the Wikipedia page for ordered dithering and have been following the explanation in order to produce this effect myself. The algorithm producing the effect from this matrix resembles the one explained on the Wikipedia page as closely as possible, and is as follows:

int w = image.getWidth();
int h = image.getHeight();
int bd = 64;
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        int c = pixels[y * w + x];
        int d = dither[(y & 7) * 8 + (x & 7)];
        int r = ((c >> 16) & 0xff) + d;
        int g = ((c >> 8) & 0xff) + d;
        int b = (c & 0xff) + d;
        newpix[y * w + x] = 0x000000 | closestCol(r, bd) << 16 | closestCol(g, bd) << 8 | closestCol(b, bd);
    }
}

This is just the method that reduces the amount of colours available in each channel:

private int closestCol(int c, int b) {
    return c / b * b;
}

The above code simply adds the corresponding value from the threshold matrix to the image data and then attempts to find the nearest colour to it after rounding. The issue is that the dither values are quite high and saturate the channel of the pixel. I've tried averaging the channel colour with the corresponding dither value, but that makes the screen far darker than it should be. Is there a detail I'm missing whereby it adds the effect without harshly altering the tone of the image? Here's a before and after dither comparison so you can see the results: http://i.imgur.com/9SWukYS.png

Thanks, Paul

Porlus « Reply #1 - Posted 2013-05-05 11:50:16 »

Just updated with a screenshot of the problem: http://i.imgur.com/9SWukYS.png Posting this just so you guys can see how it's producing effectively the right results, just with slightly skewed channels.

delt0r « Reply #2 - Posted 2013-05-06 05:40:59 »

You don't appear to be finding the "closest" color correctly. Currently you are dividing by 64*64=4096. That seems way too high. Why do you square that bd? Also I don't really follow how the wiki is normalizing the color values. The paper linked is too old and a badly scanned copy for me to bother reading it. Note you can also do probabilistic dithering. Probably works about as bad as normal dithering tbh.
Porlus « Reply #3 - Posted 2013-05-06 14:17:30 »

The closest colour rounding seems to actually be working before adding the threshold. As the colour's stored as an integer, it performs the division first, rounds that result and then multiplies. So the end value is actually floored to the nearest 64 in this case. For example:

140 / 64 * 64
2 * 64 = 128

So 140 becomes 128 when floored to the nearest 64. Sorry if I'm teaching you how to suck eggs, I'm just making sure.

delt0r « Reply #4 - Posted 2013-05-06 15:02:19 »

Oh right, order of precedence. I never write code that way since you can't tell what you meant by it, i.e. a/b*b, or did you mean a/(b*b)? So I would write (a/b)*b. Note that you get a different result from (b*a)/b too. So best be explicit. Brackets don't hurt anyone. So yeah, I think the normalization in the wiki is the problem. Note the matrices they show are normalized to be less than 1. So it's just not clear what range colors are supposed to take. Well, not without more work.

delt0r « Reply #5 - Posted 2013-05-06 15:05:52 »

To sort of back that up: note that if a value is overflowing before a call to closestCol, it can still be overflowing after. So it seems the dithering values should be a portion of the between-color range.

Porlus « Reply #6 - Posted 2013-05-06 16:36:38 »

Yeah, the threshold values range from 1-64, so presumably it breaches its respective channel fairly often, which explains the artifacts, but I'm not entirely sure how to normalise the threshold value. I've tried dividing the value by 64, which is the highest value held in the matrix, although I'm working in whole numbers, so only one instance would yield 1 and the rest would be 0.

delt0r « Reply #7 - Posted 2013-05-06 17:13:09 »

I think you want to normalize by the range of a band. So for this matrix it would be 16/65 for 16 levels per channel, or 4/65 for 64 levels per channel. I am assuming the range is 0-255.

Porlus « Reply #8 - Posted 2013-05-06 17:23:01 »

Ah I see. Do you know how I would apply this to my algorithm?

pjt33 « Reply #9 - Posted 2013-05-06 22:10:11 »

Quote: This is just the method that reduces the amount of colours available in each channel:

private int closestCol(int c, int b) {
    return c / b * b;
}

That's not finding the nearest colour: it truncates rather than rounding. As to the normalisation, the Wikipedia article says:

Quote: The algorithm renders the image normally, but for each pixel, it adds a value from the threshold map, causing the pixel's value to be quantized one step higher if it exceeds the threshold.

So in your case with an 8x8 dither matrix you want to add bd * d / (8*8+1).
Putting that together,

int c = pixels[y * w + x];
int d = dither[(y & 7) * 8 + (x & 7)];
int l = dither.length;
int r = ((c >> 16) & 0xff);
int g = ((c >> 8) & 0xff);
int b = (c & 0xff);
newpix[y * w + x] = 0x000000 | closestCol(r, d, bd, l) << 16 | closestCol(g, d, bd, l) << 8 | closestCol(b, d, bd, l);

and (corrected below)

Porlus « Reply #10 - Posted 2013-05-06 22:20:10 »

Thanks for the reply. That seems to work far better. It's just that it looks very bright. The image becomes saturated by light.

pjt33 « Reply #11 - Posted 2013-05-06 22:25:01 »

Yes, because there's a bias up but no bias down. Actually it seems to be preferable to counter the bias up from the threshold with an intentional bias down so that on average they cancel: (corrected below)

Porlus « Reply #12 - Posted 2013-05-06 22:32:23 »

Thanks for your help. I'll have a sit down and try to understand it.

pjt33 « Reply #13 - Posted 2013-05-06 22:32:58 »

Actually I stupidly lost the quantisation. The corrected version is

private static int closestCol(int col, int threshold, int quantum, int ditherSize) {
    int quantised = quantum * (int)(col / quantum + threshold / (ditherSize + 1f) + 0.5);
    return quantised > 255 ? 255 : quantised;
}

And of course there's always the option of offsetting things so that you have a half interval at each end and both true black and true white are possible.

Porlus « Reply #14 - Posted 2013-05-06 22:43:25 »

Thanks. I'll substitute it.

theagentd « Reply #15 - Posted 2013-05-06 22:59:31 »

(int)(quantised + 0.5f) = quantised since quantised is an int. That part does absolutely nothing.

Roquen « Reply #16 - Posted 2013-05-07 04:48:27 »

Keep it simple. Throw away some bottom bits, add (or sub) the dither, throw away some bottom bits. Goof around with that... you'll need to clamp unless you account for over- or underflow (add vs. sub). R = ((R & ~0xF)+D)&~0xF.

pjt33 « Reply #17 - Posted 2013-05-07 07:18:06 »

Quote: (int)(quantised + 0.5f) = quantised since quantised is an int. That part does absolutely nothing.

This is why I shouldn't write code after midnight. I've edited to correct, because the number of versions is getting confusing. Thanks.
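Assembled for reference, a minimal sketch of the approach the thread converges on, combining pjt33's corrected quantisation with the opening post's loop. The method and variable names are illustrative, not from the thread, and the two static methods are shown without their enclosing class; the pixel layout (0xRRGGBB packed ints, 0-255 channels) and the 8x8 matrix with entries 1..64 are as in the opening post.

// Bias each channel by the Bayer threshold normalised by (matrixSize + 1),
// snap to a quantisation level, then clamp the upward bias at white.
static int[] orderedDither(int[] pixels, int w, int h, int[] dither, int quantum) {
    int[] out = new int[pixels.length];
    int size = dither.length; // 64 entries for an 8x8 matrix
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int c = pixels[y * w + x];
            int d = dither[(y & 7) * 8 + (x & 7)];
            int r = quantise((c >> 16) & 0xff, d, quantum, size);
            int g = quantise((c >> 8) & 0xff, d, quantum, size);
            int b = quantise(c & 0xff, d, quantum, size);
            out[y * w + x] = (r << 16) | (g << 8) | b;
        }
    }
    return out;
}

// pjt33's corrected quantisation: integer division picks the level,
// the normalised threshold (plus 0.5) decides whether to round up.
static int quantise(int col, int threshold, int quantum, int ditherSize) {
    int q = quantum * (int) (col / quantum + threshold / (ditherSize + 1f) + 0.5f);
    return q > 255 ? 255 : q; // clamp: the threshold can push past white
}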
3,382
10,828
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.625
3
CC-MAIN-2016-26
longest
en
0.682869
https://www.enotes.com/homework-help/graph-f-x-3tan-x-2-341884
1,519,184,446,000,000,000
text/html
crawl-data/CC-MAIN-2018-09/segments/1518891813322.19/warc/CC-MAIN-20180221024420-20180221044420-00494.warc.gz
832,155,936
10,001
# For the graph of `f(x)=-3tan(x)+2`, how would I find the period, phase shift, and vertical shift?

txmedteach | Certified Educator

To answer this question, we have to see what goes into changing each of these characteristics. To change the period, we have to shrink or expand the graph in the horizontal direction. Such a horizontal stretch looks like the following conversion:

`f(x) -> f(ax)`

Here, `a` is a constant by which you are multiplying `x`. If `|a|<1`, the graph is horizontally expanded, and if `|a|>1`, the graph is horizontally shrunk. If you look at the original function you were given, you'll notice that there is no direct multiplier for `x`. Therefore, the period of this function will be the standard period for the tangent function: `pi`.

Now, to find the phase shift, we need to see if any numbers are being added to `x` inside the tangent. A phase shift would look like the following:

`tan(x+b)`

The phase shift is a sort of "starting position," where if `x = 0`, the number inside the tangent function is not zero. Clearly, we are adding nothing to the `x` term inside the tangent function, and the phase shift is correspondingly 0.

Finally, we must consider the vertical shift. Such a shift would take the following form:

`tan(x) + c`

Here, `c` tells you how many units to shift the normal graph of `tan(x)` upwards. All we need to do is look at the given equation to see that the upward shift will be 2.
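In summary, using the general form (standard precalculus conventions, not stated explicitly in the answer above):

\[
f(x) = a\tan\bigl(b(x - c)\bigr) + d:\qquad
\text{period} = \frac{\pi}{|b|},\quad
\text{phase shift} = c,\quad
\text{vertical shift} = d,
\]

so for f(x) = -3 tan(x) + 2 the period is π, the phase shift is 0, and the vertical shift is +2.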
353
1,439
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.03125
4
CC-MAIN-2018-09
latest
en
0.900038
https://mostrealisticai.com/ai/what-is-xyz-system-in-robotics.html
1,657,022,609,000,000,000
text/html
crawl-data/CC-MAIN-2022-27/segments/1656104576719.83/warc/CC-MAIN-20220705113756-20220705143756-00588.warc.gz
444,458,066
19,186
# What is XYZ system in robotics?

Also known as a rectilinear, rectangular or gantry robot, a Cartesian robot can only move its end effector in straight lines along the X, Y and Z axes. … Some Cartesians have an additional axis of motion, in which the end effector rotates about the Z axis or parallel to it.

## Which is XYZ robot?

XYZ Robotics develops AI-powered robotic technologies to transform supply chain automation. We're developing autonomous robotic solutions for put wall sorting and goods-to-person picking. Our robot sorts unorganized, random warehouse goods into groups of customer orders.

## What is meant by the XYZ system DOF in the robotics world?

An axis in robotic terminology represents a degree of freedom (DOF). For example, if a robot has three degrees of freedom, it can operate in the x, y, and z planes. … Increasing the number of axes allows the robot to access a greater amount of space by giving it more degrees of freedom.

## What is a coordinate system in robotics?

A coordinate system defines a plane or space by axes from a fixed point called the origin. Robot targets and positions are located by measurements along the axes of coordinate systems. A robot uses several coordinate systems, each suitable for specific types of jogging or programming.

## What are Cartesian coordinate robots used for?

Cartesian robots are one of the most commonly used robot types for industrial applications and are often used for CNC machines and 3D printing.

## How does soft robotics work?

Soft robotics makes it possible for us to create fully-functional body parts that can not only adjust to human motion but also mimic it. This is done through the use of highly flexible materials such as thermoplastic polyurethane (TPU). In order to mimic human motion, strong pneumatic actuation is used.

## Which coordinate systems are commonly used by industrial robots?

The Cartesian coordinate system is widely used for industrial robots.

## What are the 3 axes of a robot named?

Axis 1 – Rotates robot (at the base of the robot)
Axis 2 – Forward / back extension of robot's lower arm
Axis 3 – Raises / lowers robot's upper arm
Axis 4 – Rotates robot's upper arm (wrist roll)

## What is a 6 DOF model?

Six degrees of freedom (6DOF) refers to the specific number of axes that a rigid body is able to freely move in three-dimensional space. … Specifically, the body can move in three dimensions, on the X, Y and Z axes, as well as change orientation between those axes through rotation, usually called pitch, yaw and roll.

## How many coordinate systems are there?

There are three commonly used coordinate systems: Cartesian, cylindrical and spherical.

## What is robot coordinate system and how many coordinate systems are there?

• Robot World coordinate system
• Robot origin coordinate system, or ROBROOT coordinate system
• Workpiece and/or BASE coordinate system
• Robot Tool coordinate system

## What is joint coordinate system?

The relative motion of two body segments can be defined by a Joint Coordinate System described by two segment-fixed axes and a mutually orthogonal floating axis. … The joint coordinate system is defined by two independent body-fixed axes and the common perpendicular.

## What are advantages of Cartesian robots?

The primary advantage of cartesians is that they are capable of moving in multiple linear directions. In addition, cartesians are able to do straight-line insertions into furnaces and are easy to program.
Cartesians have the most rigid robotic structure for a given length, since the axes are supported at both ends.

## What are 5 different types of robots?

A simpler, more complete definition of robotic types can be narrowed down to five types: Cartesian, Cylindrical, SCARA, 6-Axis and Delta. Each industrial robot type has specific elements that make them best-suited for different applications. The main differentiators among them are their speed, size and workspace.

## What are the types of robotic joints?

Rotational joint: can also be represented as an R-joint. This type allows the joints to move in a rotary motion along the axis, which is vertical to the arm axes.

Linear joint: can be indicated by the letter L (L-joint).
881
4,325
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.8125
3
CC-MAIN-2022-27
latest
en
0.922064
francescosegni.it
1,582,954,568,000,000,000
text/html
crawl-data/CC-MAIN-2020-10/segments/1581875148671.99/warc/CC-MAIN-20200229053151-20200229083151-00123.warc.gz
61,457,185
9,755
# The joy of mathematics, in accordance with the mathematician Bertrand Russell, is to be capable of finding patterns in things that are random and chaotic.

To say that mathematics is a discipline of curiosity would be an understatement. There is a delight in mathematics for a lot of people. Mathematics has been around for thousands of years. It is the study of numbers and patterns. It has fascinated people since ancient times and continues to fascinate millions of people today. Mathematics can be a lifetime pursuit, and anyone who is able to see the beauty of it should consider it a hobby.

What does determine mean in math? There are two kinds of determinants: intrinsic and extrinsic.

For many numbers, the sum of the integers is not equal to the product of the integers. In other words, they are not convergent. When we multiply many integers, the result may be something that is not convergent. For example, our English phrase "one minus one equals two" is not normally true. It will only be true if the two factors are in addition, or if the two factors are in multiplication. Other examples of not being convergent are: four hundred and sixty-six plus the number thirteen is not five plus eleven, and five plus fourteen is not nine plus twenty-one.

While all these numbers are not convergent, they are all integers. Therefore, two elements of a multiplication (two of the integers) will produce an integer. The difference between integers and other figures is called a fraction. It is important to bear in mind that a fraction is not the same as the fraction above. A fraction represents an "imaginary" part of a quantity. Fractions can be arbitrarily large or small, they can have a variable or fixed value, and they can be positive or negative.

A fraction is described as being a smaller part of something than the whole of the thing. When we multiply a fraction by itself or divide a fraction by another, the results are referred to as "proportional" fractions. These will be used to create new numbers, which are not fractions. The coefficient of a fraction tells us how much of a factor is multiplied by. Therefore, a higher coefficient means a larger quantity (divided by the original).

Zeroes do not look like other things in math. They are often on the left side of a number. Their meaning is fairly straightforward: zero is something that is taken away from something else. When the original thing is multiplied by zero, the result is the number "zero". Zero can represent any value (so long as it is a multiple of something else), including an imaginary zero, which means zero when multiplied by itself. Finally, it can represent an "empty" field (that can be used for anything) such as 0. An empty field can be represented by a circle.

Determining the mean in math is fairly straightforward. You take the sum of the items in a group and divide it by its size. If you get an integer, then you know the result, while if you get a fraction, then you know the result is a multiple of a specific quantity (like zero).
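The last paragraph is describing the arithmetic mean; written out:

\[
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i .
\]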
699
3,443
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.09375
4
CC-MAIN-2020-10
latest
en
0.947929
https://community.tableau.com/message/385463
1,553,006,218,000,000,000
text/html
crawl-data/CC-MAIN-2019-13/segments/1552912201996.61/warc/CC-MAIN-20190319143502-20190319165502-00456.warc.gz
474,644,654
33,079
How to sum distinct Status

28 Replies. Latest reply on Jun 24, 2015 12:21 AM by Bora Beran.

Hi everyone, I have the worksheet as the attachment. I want to count the number of each status based on 2 columns: max Status and Min Status. I have a calc like this:

if [count selection]="count max status" then SUM([max Status])
elseif [count selection]="count min status" then SUM([Min status])
end

However the result is not correct because there are many duplicate rows. The correct result is: when I select "Count max Status", then green: 2 and red: 1; when I select "Count min Status", then green: 1 and red: 2. Can you please help me how to do that? Thanks.

• 1. Re: How to sum distinct Status
Hi, is there anybody who can help me? Can we sum distinct the column Max Status?

• 2. Re: How to sum distinct Status
Hello Phuviet, since you are on 8.X, this was a little more difficult than I intended. In addition, the underlying data shows a lot of duplicate records, which added to the complexity. Either way, here is the attached workbook. I will start putting together screenshots and explanations of how it did this, because it involved a lot of table partitioning. Regards, Rody

• 3. Re: How to sum distinct Status
OK, first we need to address the fact that you have a lot of duplicate records, and you have a lot of fields in the view, which means all of those duplicate records are being brought into the partitions. This means if we do SUM, or even WINDOW_SUM, it is going to address all of those duplicate rows. We could normally handle this by tweaking what the Table Calc is Addressing and Computing on, but we have a lot of fields in the View, so this method won't work. So to get around this we need to address the Status, and the first instance within the partition. This will give us only 1 instance for a particular status. You have to duplicate this for each status. Next we need to create a Table Calc to build on top of it, and get the total count for the status. Once again, you need to duplicate this for each status. Then drag both of those measures onto your Text mark, and clean it up. Since we are only given a value for the status type, we can put these next to each other. Finally we want to get rid of the excess columns. We can do this by creating a calculated field

FIRST() = 0

Place that onto the Filter shelf and compute using status. I hope this is helpful. Regards, Rody

• 4. Re: How to sum distinct Status
Simon Runc: Hey Simon, sorry for calling you in so much, but you are always extremely helpful! The method I used works, but it is a complicated process (might have gone a little too far outside of that box). The three big issues I had here were: 1. there are duplicate records, 2. there are a lot of fields in the view, and 3. it's not V9. Can you let me know if there is an easier way? His original workbook is attached in the first post. Thanks, Rody

• 5. Re: How to sum distinct Status
Hi Rody, not a problem... always happy to help. I see what you mean; it does seem 'unexpectedly' complicated for what, 'on the face of it', doesn't look like it would be so tricky. I've had a quick play, and think that you do need to handle the 2 statuses in different calculations, although I only tried a few other ways of concatenating some sort of unique string, and doing a window_sum(countd...) type thing, but couldn't get it to work.
However the solution you've given here works, which is 99% of the battle! And I think the real answer might be to see if the data can be de-duplicated at source. I found, from bitter experience, that a bit of extra time spent on data prep can save hours in the long run. I'm away for a few days, but I'll take a further look when I get back (I love these kinds of logical problems... they usually get the better of me, but I still enjoy them!!). Like you, I feel there is a simpler solution out there!

• 6. Re: How to sum distinct Status
Hi Rody, ...back from my hols! And I've had a further play with this and think I've found another (slightly simpler) solution. In this solution I've created calculated fields for all the elements for explanatory reasons, but they could be combined into 1 or 2 in a final solution. So the first thing I created was a measure selector, called 'Selected Status - Min or Max', with the formula

if [count selection]="count max status" Then [max Status]
elseif [count selection]="count min status" then [Min status]
end

Then I created a field which creates a concatenation string of the class & project, where the selected status = 1, called 'Concat Selected Status':

IF [Selected Status - Min or Max] = 1 THEN [Class]+[Project] END

NB. Great tip btw: as you'll see there is no ELSE statement here, so the false branch of this logic statement is NULL, and NULLs don't get counted or COUNTD'ed (if that's a word!).

Now at this point, just putting status 'Red/Green' on the row pane and doing a COUNTD on this field gives the correct answer, but to get the correct answer with the extra dimensions in, we need to put this into a WINDOW_SUM table calc. This field is 'CountD Window Sum', with the formula

WINDOW_SUM(COUNTD([Concat Selected Status]))

I then set the compute-using accordingly. One thing you'll also notice is I set the Min and Max status in the view to Attributes so I could ignore them in table calcs. I think this does what we want, but you may want to double check. What a deceptively tricky problem!! (although I must admit to quite enjoying it!!)

• 7. Re: How to sum distinct Status
Version 9 (LOD expressions) would make this very simple, I believe! Upgrade ASAP!

• 8. Re: How to sum distinct Status
Simon, this is great! Much easier (to understand and implement) than my original solution. Using the concatenation of [Class] and [Project] was a really smart approach. The one thing I really liked was using ATTR() for the min and max status. That really makes things easier, and more flexible, as we don't have to add a new calc for every possible status. Great work! If you come across any of these "tricky problems", please ping me on it. I, like yourself, love working through these puzzles, and am very, very reluctant to ever say "this is not possible"! Thanks again and best regards, Rody

• 9. Re: How to sum distinct Status
Matt Lutton: Hey Matt, I think we would still run into a similar problem with V9, as there are duplicate records. But I am very interested to see what your approach would look like. Off the top of my head, I think it would be possible by excluding a lower-level dimension, but I'm not quite sure how that would look. Regards, Rody

• 10. Re: How to sum distinct Status
Hi everyone, I used a Level of Detail calculation and it works very well for me. However I have another problem. My real data has more than 46,000 records.
Before I used the Level of Detail calculation, Tableau showed me the report very quickly. But when I create the calculation to sum distinct values and upload the report to the server, Tableau works very slowly. It always takes 4-5 minutes. Sometimes Tableau can't show me the report and sends an error message. I wonder if the Level of Detail calculation makes Tableau slower?

• 11. Re: How to sum distinct Status
Phuviet -- can you upload your V9 workbook for Rody to see? Rody, I have not studied this problem in depth at this time -- but I was imagining that LOD expressions would allow you to do this more readily, since we can calculate at different levels of granularity regardless of dimensions in the view (like taking the SUM of MAX values, or similar -- without resorting to table calculations). I would need to look at the example a bit more closely and understand the expected results to show you how I'd approach it. If time allows me to dig into this later, I will try to do so -- I have quite a bit going on right now!

• 12. Re: How to sum distinct Status
Thanks Pooja Gandhi for helping me with this calculation.

• 13. Re: How to sum distinct Status
Hi Phuviet, are you connecting directly to your data source or are you using an extract (.tde)? If you are using an extract you could try to 'optimize' the extract and then republish. Like Matt, I've not studied LoD performance, but this might help.

• 14. Re: How to sum distinct Status
Hi Simon, it's a live connection. When I upload the report to the server I must make sure that the live connection to the database is chosen.
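For reference, a hypothetical V9 formulation along the lines Matt suggests (field names taken from the thread; the exact calculation Phuviet used is not shown), with the formula

{ FIXED [Class], [Project] : MAX([Selected Status - Min or Max]) }

A COUNTD([Class]+[Project]) over the rows where this LOD expression equals 1 would then give the per-status distinct count without table calculations, since the FIXED expression collapses the duplicate rows regardless of the extra dimensions in the view.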
2,074
8,694
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.5625
3
CC-MAIN-2019-13
latest
en
0.933328
https://tex.stackexchange.com/questions/154634/how-to-draw-a-modulo-13-clock-like-diagram
1,653,022,037,000,000,000
text/html
crawl-data/CC-MAIN-2022-21/segments/1652662531352.50/warc/CC-MAIN-20220520030533-20220520060533-00782.warc.gz
655,365,584
68,052
# How to draw a modulo-13-clock like diagram?

Problem: I can draw the circle, but without the numbers and without the indicative guidance arrow:

\documentclass{article}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}[scale=3]
\draw (0,0) circle (1cm);
\end{tikzpicture}
\end{document}

• \foreach command is a good start. Jan 17, 2014 at 21:06
• Without the letters also? Jan 17, 2014 at 21:13
• For the arc only \draw (90:1cm) arc (90:-180:1cm); Jan 17, 2014 at 21:14
• @Sigur I wonder if anyone has read the "specification" above the code and the image. :) – masu Jan 18, 2014 at 13:03

You can use a combination of foreach and arc, and imagine that the shape is quite similar to a clock face:

% arara: pdflatex
\documentclass[border=5mm]{standalone}
\usepackage{tikz}
\tikzset{>=stealth}
\begin{document}
\begin{tikzpicture}[scale=3]
\draw[->] (90:1cm) arc (90:-180:1cm);
% numbers
\foreach \i in {2,...,9} {
  \pgfmathparse{90-(\i-1)*360/13};
  \node at (\pgfmathresult:1.2cm) {\i};
};
% letters
\foreach \i/\j in {10/T,11/J,12/Q,13/K,14/A} {
  \pgfmathparse{90-(\i-1)*360/13};
  \node at (\pgfmathresult:1.2cm) {\j};
};
\end{tikzpicture}
\end{document}

• oops, the finishing point of the arrow is slightly off... I suppose it's good to leave something for the OP :) Jan 17, 2014 at 21:35

\documentclass{article}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}[scale=3]
%
% Independent parameters
\pgfmathsetmacro\dth{360/13} % angular increment
\def\angleOffset{90}         % starting angle
\def\dialR{1cm}              % arc radius (assumed value; this definition is missing from the extracted answer)
\def\dialLabelR{1.2cm}       % label radius (assumed value; likewise missing)
%
% Dependent parameters
\pgfmathsetmacro\angleTip{-10.5*\dth+\angleOffset} % tip angle
%
% draw arc
\draw[thick,->] (\angleOffset:\dialR) arc (\angleOffset:\angleTip:\dialR);
%
% write labels
\foreach [
  var=\k,
  var=\dialLabel,
  evaluate=\k as \th using -\k*\dth+\angleOffset,
] in {0/A,1/2,2/3,3/4,4/5,5/6,6/7,7/8,8/9,9/T,10/J,11/Q,12/K}%
{
  \draw (\th:\dialLabelR) node {\dialLabel};
}
\end{tikzpicture}
\end{document}

The Asymptote version:

% dial.tex :
%
\documentclass{article}
\usepackage[inline]{asymptote}
\usepackage{lmodern}
\begin{document}
\begin{figure}
\begin{asy}
size(3cm);
import graph;
import fontsize;
defaultpen(fontsize(9));
string L="A23456789TJQK";
int n=length(L);
int k=find(L,"J");
real dphi=360/n;
real r=1;
draw(Arc(0N,r,90,90-(k+0.5)*dphi,CW),deepblue+0.8bp,Arrow(size=3));
pair p;
for(int i=0;i<n;++i){
  p=dir(90-i*dphi);
  label("$\mathsf{"+substr(L,i,1)+"}$",p,p);
}
\end{asy}
\end{figure}
\end{document}
%
% Process:
%
% pdflatex dial.tex
% asy dial-*.asy
% pdflatex dial.tex

Run with xelatex or with latex->dvips->ps2pdf:

\documentclass[pstricks,12pt]{standalone}
\usepackage{pstricks,multido}
\begin{document}
\psset{unit=2}
\degrees[13]
\SpecialCoor
\sffamily
\begin{pspicture}(-2,-2)(2,2)
\psarcn{->}(0,0){1.5}{3}{-7}
\multido{\iA=2+-1,\iB=2+1}{8}{\rput(1.7;\iA){\iB}}
\pgfforeach \iA/\jA in {7/T,6/J,5/Q,4/K,3/A}{\rput(1.7;\iA){\jA}}
\end{pspicture}
\end{document}

## With \degrees[360]

\documentclass[pstricks,12pt]{standalone}
\usepackage{pst-node}
\psset{saveNodeCoors}
\begin{document}
\begin{pspicture}(-4,-4)(4,4)
\psforeach{\x}{A,2,3,4,5,6,7,8,9,T,J,Q,K}
{
  \pnodes(!3 -360 13 div \the\psLoopIndex\space mul 90 add PtoC){X\x}
  \uput[!N-X\x.y N-X\x.x Atan](X\x){\x}
}
\psarcn{->}(0,0){2.8}{(XA)}{(XQ)}
\end{pspicture}
\end{document}

## With \degrees[13] and \psforeach

\documentclass[pstricks,12pt]{standalone}
\usepackage{pst-node}
\psset{saveNodeCoors}
\degrees[13]
\begin{document}
\makeatletter
\begin{pspicture}(-4,-4)(4,4)
\psforeach{\x}{A,2,3,4,5,6,7,8,9,T,J,Q,K}
{
  \pnodes(!3 \the\psLoopIndex\space neg \pst@angleunit 90 add PtoC){X\x}
  %\qdisk(X\x){1pt}
  \uput[!N-X\x.y N-X\x.x atan 1 \pst@angleunit div](X\x){\x}
  %\uput[!\the\psLoopIndex\space neg 90 1 \pst@angleunit div add ](X\x){\x}
}
\psarcn{->}(0,0){2.8}{(XA)}{(XQ)}
\end{pspicture}
\makeatother
\end{document}

## With \degrees[13] and \foreach

\documentclass[pstricks,12pt]{standalone}
\usepackage{pst-node}
\usepackage{pgfmath}% don't forget this line!
\psset{saveNodeCoors}
\degrees[13]
\begin{document}
\makeatletter
\begin{pspicture}(-4,-4)(4,4)
\foreach \x [count=\xi from 0] in {A,2,3,4,5,6,7,8,9,T,J,Q,K}
{
  \pnodes(!3 \xi\space neg \pst@angleunit 90 add PtoC){X\x}
  \uput[!N-X\x.y N-X\x.x atan 1 \pst@angleunit div](X\x){\x}
}
\psarcn{->}(0,0){2.8}{(XA)}{(XQ)}
\end{pspicture}
\makeatother
\end{document}

## Warning!

• The following alternative approaches for labeling produce labels which are wrongly positioned:

\uput[(X\x)](X\x){\x}% wrong position
\uput[!\psGetNodeCenter{X\x} X\x.y X\x.x Atan](X\x){\x}% wrong position

• \usepackage{pgfmath} must be loaded when using the looping index (via [count=\xi from 0]) of \foreach.
1,834
4,722
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.546875
3
CC-MAIN-2022-21
latest
en
0.466853
http://research.stlouisfed.org/fred2/series/KCPPPGMYA156NUPN
1,416,983,457,000,000,000
text/html
crawl-data/CC-MAIN-2014-49/segments/1416931006064.45/warc/CC-MAIN-20141125155646-00028-ip-10-235-23-156.ec2.internal.warc.gz
257,421,090
19,080
Consumption Share of Purchasing Power Parity Converted GDP Per Capita at constant prices for Malaysia

2010: 51.92380 Percent

Annual, Not Seasonally Adjusted, KCPPPGMYA156NUPN, Updated: 2012-09-17 10:31 AM CDT

For proper citation, see http://pwt.econ.upenn.edu/php_site/pwt_index.php

Source Indicator: kc
Source: University of Pennsylvania
Release: Penn World Table 7.1

Create your own data transformation: use a formula to modify and combine data series into a single line. For example, invert an exchange rate a by using formula 1/a, or calculate the spread between 2 interest rates a and b by using formula a - b. Use the assigned data series variables (e.g. a, b, ...) together with operators {+, -, *, /, ^}, braces {(,)}, and constants {e.g. 2, 1.5} to create your own formula {e.g. 1/a, a-b, (a+b)/2, (a/(a+b+c))*100}. The default formula 'a' displays only the first data series added to this line. You may also add data series to this line before entering a formula.

Suggested Citation
``` University of Pennsylvania, Consumption Share of Purchasing Power Parity Converted GDP Per Capita at constant prices for Malaysia [KCPPPGMYA156NUPN], retrieved from FRED, Federal Reserve Bank of St. Louis https://research.stlouisfed.org/fred2/series/KCPPPGMYA156NUPN/, November 26, 2014. ```
526
2,028
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.609375
3
CC-MAIN-2014-49
latest
en
0.781168
http://www.geom.uiuc.edu/~math5337/ds/part4/part4_event.html
1,540,078,440,000,000,000
text/html
crawl-data/CC-MAIN-2018-43/segments/1539583513508.42/warc/CC-MAIN-20181020225938-20181021011438-00025.warc.gz
468,734,195
1,849
# One-Dimensional Dynamical Systems

## Part 4: Linear and Nonlinear Behavior

#### Eventual behavior

Let us investigate the Logistic family for λ = 3.2. Which of the fixed points are attracting? Which are repelling? What about the periodic points?

Qualitative diagram for λ = 3.2.

The above picture represents the eventual behavior of all points on the line for λ = 3.2. With arrows we indicate that the two periodic points are mapped onto each other. For the other points, we use arrows to indicate the direction that points go after iteration. Note that a point x, close to one periodic point, will first be mapped close to the other periodic point before it gets mapped back to the line segment that contains x. This is not indicated in the picture, because this behavior is already reflected in the behavior of the periodic points themselves.

• Draw a qualitative diagram for the eventual behavior of all points for λ = 2. Your diagram should resemble the picture above, but it does not need to have exact values for fixed and periodic points. The idea is to compare pictures qualitatively.
• Draw a similar diagram for λ = 3.1. As you investigate, magnify the region near the fixed point other than 0. Compared to the behavior for λ = 2, what has happened?
• Draw a similar diagram for λ = 3.5. What is the period of the attracting orbit here? Indicate in which order the points in the orbit occur.
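For reference, assuming the standard logistic family f(x) = λx(1 − x) (the parameter symbol was lost in extraction and is restored here as λ), the stability behavior follows from the derivative:

\[
f(x) = \lambda x(1-x), \qquad f'(x) = \lambda(1-2x),
\]

with fixed points x = 0 and x* = 1 − 1/λ, where f'(x*) = 2 − λ. At λ = 3.2 both |f'(0)| = 3.2 and |f'(x*)| = |−1.2| exceed 1, so both fixed points repel and the period-2 orbit attracts. At λ = 2 the fixed point x* = 1/2 has f'(x*) = 0 and attracts strongly, while at λ = 3.5 the attracting orbit has period 4.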
319
1,466
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.953125
3
CC-MAIN-2018-43
longest
en
0.919947
https://www.wyzant.com/resources/answers/8611/what_is_the_difference_between_a_vertex_and_an_angle
1,524,807,634,000,000,000
text/html
crawl-data/CC-MAIN-2018-17/segments/1524125949036.99/warc/CC-MAIN-20180427041028-20180427061028-00252.warc.gz
918,423,282
17,724
# What is the difference between a vertex and an angle?

I am 64 and missed schooling because of childhood hospitalisation. I'm embarrassed about how little I know of maths and have been reading through the about.math site. A kindergarten(!) geometry question is 'I have four sides and four vertices - what am I?' I'd not known the term vertex/vertices. When I looked it up, the examples referred to solids, with faces, edges and vertices. Not 'sides'. I'm assuming that a kindergarten level is probably asking for 'a square'. But, if that is so, why not use the word 'angle'? Many thanks. Margarita.

### 1 Answer by Expert Tutors

Kevin S.

Margarita -

An angle usually represents a measurement - such as 90 degrees or 1/2 pi radians. A vertex generally represents the "point" of the angle (or in later cases, a parabola or an ellipse) where the direction changes. Let's assume you were walking on the lower edge of an equilateral (all sides the same length) triangle, heading towards the left. When you have to turn and start walking up the left edge, AT THAT POINT is the vertex. Vertices is plural for vertex, just like indices is plural for index. (Gotta love the predictability of the English language)

PS - Don't be embarrassed - I applaud you for wanting to loop back and learn this!

### Comments

Kevin, thank you so much for your helpful response - that is totally clear now - and also for your encouragement! I notice that Wyzant have me registered as living in 'Myrtle, MS', wherever that is! I actually live in the Canary Islands, a Spanish province that is just off the coast of the Western Sahara.... I've asked Wyzant if any of their tutors are willing to work via email. (Do you?) Best wishes, Margarita
426
1,804
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.296875
3
CC-MAIN-2018-17
latest
en
0.929447
https://www.thestudentroom.co.uk/showthread.php?page=8&t=3323533
1,529,613,580,000,000,000
text/html
crawl-data/CC-MAIN-2018-26/segments/1529267864257.17/warc/CC-MAIN-20180621192119-20180621212119-00420.warc.gz
935,394,761
44,862
WJEC AS Physics PH1 May 19th 2015

1. (Original post by Jonooo123) Haha well I bet it will catch a few people out! They'll probably put an upwards arrow. It's likely though that a lot of people thought there would be no air resistance, but thankfully for them there was no 'no arrow'.

2. (Original post by Jonooo123) Did anyone else get a resistivity along the lines of 8x10^-8 for the wire? And a drift velocity of something like 5x10^-5? I remember getting 10^-5 for drift velocity.

3. With the graph with the train, I did a tangent and I got a weird acceleration. What was the acceleration? Or in other words, what was the gradient?

4. (Original post by rhungwilym) With the graph with the train, I did a tangent and I got a weird acceleration. What was the acceleration? Or in other words, what was the gradient?
0.1333... (recurring) ms^-2

5. (Original post by PrimeLime) 0.1333... (recurring) ms^-2
I think I got something smaller than that but I don't remember. I just remember making the triangle massive on the paper.

6. (Original post by rhungwilym) I think I got something smaller than that but I don't remember. I just remember making the triangle massive on the paper.
Yeah mine was about 0.1375 I think. I'm sure there will be loads of tolerance in the mark scheme for that bit, like ±0.05; doesn't sound like a lot but in the context it is!

7. Where did 48km/h come from? I put 50 because I am a tool. Also what was the velocity of it? I think I did 50,000/60x60? Something like that?

8. (Original post by DirtyExamTables) Where did 48km/h come from? I put 50 because I am a tool. Also what was the velocity of it? I think I did 50,000/60x60? Something like that?
Can't remember the exact values, but you divide distance travelled (300m) by their respective speeds; this gives you the time taken. Do this for both AB and BA. Then to calculate mean velocity, you divide the total distance travelled (300+300=600) by the total time taken for AB and BA, which should give you 48km

9. (Original post by thegayman) Can't remember the exact values, but you divide distance travelled (300m) by their respective speeds; this gives you the time taken. Do this for both AB and BA. Then to calculate mean velocity, you divide the total distance travelled (300+300=600) by the total time taken for AB and BA, which should give you 48km
48km/h sorry

10. (Original post by DirtyExamTables) Where did 48km/h come from? I put 50 because I am a tool. Also what was the velocity of it? I think I did 50,000/60x60? Something like that?
It went at 40km/h for 7.5 hours and 60 for 5 hours. (40x7.5)+(60x5) = 600. 600/12.5 = 48. For the next bit velocity = 0 because the resultant displacement is zero; it ends where it started!

11. (Original post by thegayman) Can't remember the exact values, but you divide distance travelled (300m) by their respective speeds; this gives you the time taken. Do this for both AB and BA. Then to calculate mean velocity, you divide the total distance travelled (300+300=600) by the total time taken for AB and BA, which should give you 48km
Ugh. I knew 50 was too good to be true. I didn't do very well. Will have to do better next year amongst all the A2 exams, otherwise I'll be working in McDonald's for the rest of my life.

12. (Original post by Jonooo123) It went at 40km/h for 7.5 hours and 60 for 5 hours. (40x7.5)+(60x5) = 600. 600/12.5 = 48.
For the next bit velocity = 0 because the resultant displacement is zero, it ends where it started! Oh great. I've done even worse than my already low expectations 13. (Original post by DirtyExamTables) Oh great. I've done even worse than my already low expectations Have faith, the grade boundaries are likely to be low (I hope!!) as the paper was significantly harder than previous years 14. (Original post by thegayman) Have faith, the grade boundaries are likely to be low (I hope!!) as the paper was significantly harder than previous years It would be good to compile a list of all the answers people agree on. Because, so far, 4500 ohms is the only number I can recognise... 15. (Original post by DirtyExamTables) It would be good to compile a list of all the answers people agree on. Because, so far, 4500 ohms is the only number I can recognise... Hmm, I would try but I can't remember question numbers, only answers aha 16. (Original post by Jonooo123) Hmm, I would try but I can't remember question numbers, only answers aha and 4500 ohms is right 17. (Original post by Jonooo123) and 4500 ohms is right Yay, so far I'm on 5 marks out of 80. At one point I remember doing mgh - 0.5mv^2, does that seem familiar? I know I got the next part wrong though, I forgot to use the horizontal distance given in the question 18. (Original post by DirtyExamTables) Where did 48km/h come from? I put 50 because I am a tool. Also what was the velocity of it? I think I did 50,000/60x60? Something like that? I did the same as you because I don't think it said the time in that particular question; it said it later, in the graph, if I remember. 19. (Original post by rhungwilym) I did the same as you because I don't think it said the time in that particular question; it said it later, in the graph, if I remember. You have to work out the time, using time = distance travelled/speed 20. I think I did okay on the experiment question. The grasshopper one is either all right or all completely wrong, and for deriving the equation I did steps of vt x A, vtA x n, vtAn x e, then dividing by t and cancelling the t's to get nAve. Didn't remember to mention how it equalled I, though. Edit: oh, and I also forgot to put ohm metres as the unit for resistivity; guaranteed that will be the question with a mark for the unit
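For anyone who wants to sanity-check the 48 figure, here is a minimal Python sketch of the calculation described in the thread (the 300 km each-way distance is inferred from the 40 km/h for 7.5 h and 60 km/h for 5 h numbers quoted above):

```python
v_out, v_back = 40, 60                # speeds in km/h for the AB and BA legs
d = 300                               # km each way: 40 km/h * 7.5 h = 60 km/h * 5 h
t_total = d / v_out + d / v_back      # 7.5 h + 5 h = 12.5 h
print(2 * d / t_total)                # mean speed: 48.0 km/h
# mean velocity is 0: the train ends where it started, so displacement is 0
```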
1,688
6,396
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.796875
3
CC-MAIN-2018-26
latest
en
0.934032
https://clux.dev/post/2006-08-09-vault-of-therayne/
1,675,467,675,000,000,000
text/html
crawl-data/CC-MAIN-2023-06/segments/1674764500076.87/warc/CC-MAIN-20230203221113-20230204011113-00377.warc.gz
192,026,828
5,849
# Vault of Therayne ## How to not brute force a dungeon An easter egg for the Dungeon Siege 2 Broken World expansion. After doing the first two rooms of the Treasure Hunt quest in part 2, you can take two additional puzzles of the same type, but these are ridiculously hard. If you made it through the second one, you have no doubt noticed that these are much trickier than the general lightning reflection puzzle in the original DS2. The third should be possible with a manageable dose of trial and error - still too much for what's essentially a mindless hack'n slash game - but the last one is almost impossible. So I present the way to solve it - if patience is not your virtue - namely, mathematically. The first three sections are just solutions, included for completeness; the method used is described in the last section. ## Square Every node has an even number of connections, so this solution is trivial; simply click each node once. ## Double square Also quite easy, but here's the matrix solution (I forgot the one I used, and needed to generalize the method for the larger polygons): A,C,F,H (press those once, in whatever order). ## Octagon Just the solution here: B,E,G,I,J,K,L - in whatever order - or the one found by trial and error / from Mathematica: B,C,D,G,I,K,L ## Dodecagon from hell Every block must be inverted an odd number of times, and since inverting twice is the same as not doing anything, these operations are equivalent to addition mod 2. Each row j in matrix V represents which lights are inverted by f(j). For instance: f(A) inverts A, L, M, O, and P (as shown in the diagram), which is the first row in V. V={{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1}, {0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1}, {0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1}, {0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0}, {0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0}, {0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0}, {1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0}, {1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0}, {0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0}, {1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0}, {1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1}} Solving the equation Vx={1,1,...,1} (mod 2) reveals how many times one must apply f(j) to invert every light source. i={1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1} LinearSolve[V,i, Modulus -> 2] Answer: {0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0} In other words: Solution: B,E,F,G,I,J,K,N,O Later on I received a solution from the developer himself: B,C,D,E,G,K,P - which yields all odd counts when taking the dot product with V, so it would also work.
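For readers without Mathematica, the same computation is easy to sketch in Python: Gaussian elimination where every row operation is an XOR. The function below is a minimal illustration of my own (it sets free variables to zero, so on an underdetermined system it may print a different, equally valid button set than the one above):

```python
import numpy as np

def solve_gf2(A, b):
    """Gaussian elimination over GF(2): reduce the augmented matrix to
    reduced row echelon form with XOR row operations, then read off one
    solution (free variables are set to 0; the system is assumed consistent)."""
    A = np.array(A, dtype=np.uint8) % 2
    b = np.array(b, dtype=np.uint8) % 2
    n = len(b)
    M = np.hstack([A, b.reshape(-1, 1)])   # augmented matrix [A | b]
    row, pivots = 0, []
    for col in range(n):
        pivot = next((r for r in range(row, n) if M[r, col]), None)
        if pivot is None:
            continue                        # free column, skip it
        M[[row, pivot]] = M[[pivot, row]]   # swap the pivot row up
        for r in range(n):
            if r != row and M[r, col]:
                M[r] ^= M[row]              # clear the column with XOR
        pivots.append(col)
        row += 1
    x = np.zeros(n, dtype=np.uint8)
    for r, col in enumerate(pivots):
        x[col] = M[r, n]
    return x

# the 16x16 matrix V from above, one row per button A..P
rows = ["1000000000011011", "0110000000000111", "0111000000000000",
        "0011000000000101", "0000110000000010", "0000110000000100",
        "0000001100001000", "0000001110000100", "0000000110000001",
        "0000000001000111", "0000000000110110", "1000000000110000",
        "1000001000001000", "0101010101100100", "1100100001100010",
        "1101000011000001"]
V = [[int(ch) for ch in r] for r in rows]
x = solve_gf2(V, np.ones(16))
print("".join("ABCDEFGHIJKLMNOP"[i] for i in range(16) if x[i]))
```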
1,337
2,893
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.40625
3
CC-MAIN-2023-06
latest
en
0.898881
http://elliscomtech.blogspot.com/2012/02/get-tge-goof-february-14-happy.html
1,519,211,701,000,000,000
text/html
crawl-data/CC-MAIN-2018-09/segments/1518891813608.70/warc/CC-MAIN-20180221103712-20180221123712-00532.warc.gz
121,580,690
16,344
## Tuesday, February 14, 2012 ### GET THE GOOF: February 14 (Happy Valentines Day) 1. These numbers are supplementary not complementary. section 2 table 4 2. the answer is wrong because 50+130=180 so the complementry is 180. section-2 table-3 3. IS WRONG BECAUSE THE ANGLE GOT MIX UP AND THEN THEY PUT THE WRONG ONE THE RIGHT ANGLE IS WRONG. TABLE6 SECTION4 4. its not 90 because its a complementary angle but its soposed to be a suplementary angle sec 4 table 5 5. the answer to this problem is a supplementary angle and 180 degrees Section4 table4 6. SEC3 TABLE3 THE ANSWER IS WRONG BECAUSE THE 130 IS SUPPOSE TO BE 40 SO 40 AND 50 =90' DEGREES 7. its wrong because 130 its suppose to be 90 sec 3 table 5 8. it is wrong because it does not mean 90 de grees it = 180 not 90 is the answer is 180 degrees team 4 section 3 9. complementary angles are not 90 degrees they are 180 degrees table6 section3 10. the problem is that the angles that add up to 180 degrees are called supplementary. table2 section3 11. the problem is that they were supposed to add.but they subtracted.the answer is 180 degrees. table4 sec1 12. They said that 50 degrees plus 130 degrees is 90 degrees but its not table 3 section 1 13. did supplementary angles because there is an acute angle that has 130 degrees.section1 table 2
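The rule the students are checking is a simple sum test; a tiny Python sketch of it (the helper name is illustrative, not part of the class materials):

```python
def classify_pair(a, b):
    """Name an angle pair by the sum of its measures, in degrees."""
    total = a + b
    if total == 90:
        return "complementary"
    if total == 180:
        return "supplementary"
    return "neither"

print(classify_pair(50, 130))  # supplementary -- the goof in the example
print(classify_pair(50, 40))   # complementary
```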
384
1,361
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.453125
3
CC-MAIN-2018-09
longest
en
0.910704
coconutisland.org
1,580,161,603,000,000,000
text/html
crawl-data/CC-MAIN-2020-05/segments/1579251728207.68/warc/CC-MAIN-20200127205148-20200127235148-00039.warc.gz
377,354,892
21,748
# Rupiah and Dollars Each time you go to a different country you are likely to encounter a different currency, and trying to keep track of what you are spending, or how much things cost in your own currency, is often a struggle. That is certainly the case here in Bali, where the currency is the Indonesian Rupiah. All transactions are made in cash in local currency, and the constant use of credit cards we rely on at home to keep our pockets empty of cash doesn't exist here. That presents a calculation problem all the time. Today the exchange rate is 13,187.48 Rupiah per dollar (Rp/USD). Since the exchange rate fluctuates daily we use 13,000 Rp/USD, but even that causes a problem; after all, there's a bunch of zeros to deal with, and besides, 13 is a prime number. I never learned my 13 times table, and there are no shortcuts when calculating in prime numbers, so estimating is tough. Let's say renting a motorbike costs 75,000 Rp per day. How many dollars is that? (Well, by estimating, let's use 12,000 Rp/USD; that would make 72,000 Rp equal six bucks, so we're in there somewhere. See, you can work with 12, but 13 seems impossible.) Is a hotel at 500,000 per night a good deal? How about 1,200,000? That last one's kind of easy; it's about $100 (1.3 million Rp = $100, doesn't it?). If you go away for the weekend, how many rupiah do you stuff into your pockets? In fact, we are going over to Java soon for a couple weeks and will want to take cash with us, so what part of Stephen's pay should she ask for in Rp compared to dollars? We paid our way over here and got some of the reimbursement in rupiah after we arrived; that was about 15 million, a rather large pile of paper. The largest bill is 100,000 Rp, or about $7.70, so large transactions require lots of paper. Money is often sorted into bundles of one million rupiah. So what about coins? There are 100, 200, 500 and 1000 Rp coins. What is the dollar value of a 100 Rp coin? They must not think they are worth much, since they are made of some metal that so lacks density I think it's plastic, though perhaps it's aluminum. And to further complicate things, they use the metric system (I know that's kinda backward but…), so what then is the price of gas in dollars per gallon if a liter costs 9000 Rp? Or what kind of gas mileage did that Xenia get on Lombok if we drove 370 Km on 30 L of petrol? But let's not blame that problem on the rupiah exchange rate. No wonder no one cares. Oh well, we encountered the same problem in Turkey, where the exchange rate started at 350,000 Turkish lira per dollar and devalued so rapidly month by month that it finally exceeded one million to one. Stephen's pay there was adjusted monthly for inflation, and we had to adjust our calculations all the time. See how dealing with such things can keep your mind young?
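If you would rather let a computer do the dividing, here is a rough Python sketch of the conversions above (13,000 Rp/USD and 3.785 liters per US gallon assumed; the answers are estimates, not exchange-desk quotes):

```python
RP_PER_USD = 13_000            # the rounded rate used in the post
LITRES_PER_GALLON = 3.785
KM_PER_MILE = 1.609

def rp_to_usd(rp):
    """Convert rupiah to dollars at the rounded rate."""
    return rp / RP_PER_USD

print(rp_to_usd(75_000))       # motorbike rental: about $5.77 per day
print(rp_to_usd(1_200_000))    # hotel: about $92 per night

# petrol at 9000 Rp per liter, in dollars per US gallon
print(rp_to_usd(9_000) * LITRES_PER_GALLON)             # about $2.62/gal

# fuel economy: 370 km on 30 L, in miles per gallon
print((370 / KM_PER_MILE) / (30 / LITRES_PER_GALLON))   # about 29 mpg
```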
673
2,813
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.359375
3
CC-MAIN-2020-05
latest
en
0.955563
https://edurev.in/course/quiz/attempt/39730_Test-Conic-Sections-2/d630cf09-78ec-49aa-ac8e-b1eb703d18e9
1,721,393,307,000,000,000
text/html
crawl-data/CC-MAIN-2024-30/segments/1720763514902.63/warc/CC-MAIN-20240719105029-20240719135029-00557.warc.gz
191,756,460
53,737
# Test: Conic Sections - 2 - JEE MCQ Test Description: 25 questions, 25 minutes. Test: Conic Sections - 2 - Question 1 ### The radius of the circle passing through the foci of the ellipse and having its centre at (0, 3) is Detailed Solution for Test: Conic Sections - 2 - Question 1 Test: Conic Sections - 2 - Question 2 ### The line y = x is a tangent to the parabola y = (7/2)x^2 + c if c is equal to Detailed Solution for Test: Conic Sections - 2 - Question 2 The line y = x is tangent to the parabola y = ax^2 + c where their slopes match: y' = 2ax = 1, so with a = 7/2, x = 1/7. The point of tangency lies on both curves, so 1/7 = (7/2)(1/7)^2 + c = 1/14 + c, which gives c = 1/7 − 1/14 = 1/14. Test: Conic Sections - 2 - Question 3 ### The equation 2x^2 + 3y^2 − 8x − 18y + 35 = λ represents Detailed Solution for Test: Conic Sections - 2 - Question 3 The given equation is 2x^2 + 3y^2 − 8x − 18y + 35 = K. Completing the square: 2(x^2 − 4x + 4) + 3(y^2 − 6y + 9) = K, i.e. 2(x − 2)^2 + 3(y − 3)^2 = K. From this it is clear that if K > 0 the equation represents an ellipse, while for K < 0 it has no geometrical interpretation. If K = 0 the equation reduces to a single point, namely (2, 3). 
Test: Conic Sections - 2 - Question 4 The locus of a variable point whose distance from the point (2, 0) is 2/3 times its distance from the line x = 9/2 is Detailed Solution for Test: Conic Sections - 2 - Question 4 Test: Conic Sections - 2 - Question 5 The axis of the parabola 9y^2 − 16x − 12y − 57 = 0 is Test: Conic Sections - 2 - Question 6 A and B are two distinct points. The locus of a point P satisfying |PA| + |PB| = 2k, a constant, is Test: Conic Sections - 2 - Question 7 The eccentricity of the hyperbola x^2 − y^2 = 9 is Test: Conic Sections - 2 - Question 8 The locus of the point of intersection of the lines x = sec θ + tan θ and y = sec θ − tan θ is Test: Conic Sections - 2 - Question 9 The line y = mx + c touches the parabola y^2 = 4ax if Test: Conic Sections - 2 - Question 10 The equations x = at^2, y = 4at; t ∈ R represent Test: Conic Sections - 2 - Question 11 The equations x = (e^t + e^−t)/2, y = (e^t − e^−t)/2; t ∈ R represent Detailed Solution for Test: Conic Sections - 2 - Question 11 P(x, y) = ((e^t + e^−t)/2, (e^t − e^−t)/2), so (e^t + e^−t)/2 = x …(1) and (e^t − e^−t)/2 = y …(2). Adding (1) and (2): 2e^t = 2x + 2y, so e^t = x + y. Substituting into (1): e^t + 1/e^t = 2x ⇒ (e^t)^2 + 1 = 2x·e^t ⇒ (x + y)^2 + 1 = 2x(x + y) ⇒ x^2 + 2xy + y^2 + 1 = 2x^2 + 2xy ⇒ x^2 − y^2 = 1, which represents a hyperbola. Test: Conic Sections - 2 - Question 12 The vertex of the parabola y^2 = 4a(x − a) is Test: Conic Sections - 2 - Question 13 The two parabolas x^2 = 4y and y^2 = 4x meet in two distinct points. One of these is the origin and the other is Test: Conic Sections - 2 - Question 14 The equation of the directrix of the parabola x^2 = −4ay is Test: Conic Sections - 2 - Question 15 The eccentricity 'e' of a parabola is Test: Conic Sections - 2 - Question 16 The ellipse Test: Conic Sections - 2 - Question 17 The equations x = a cos θ, y = b sin θ, 0 ≤ θ < 2π, a ≠ b, represent Test: Conic Sections - 2 - Question 18 The graph of the function f(x) = 1/x, i.e. the curve y = 1/x, is Test: Conic Sections - 2 - Question 19 The line y = c touches the parabola y^2 = 4ax when Test: Conic Sections - 2 - Question 20 The parabolas x^2 = 4y and y^2 = 4x intersect Test: Conic Sections - 2 - Question 21 The length of the common chord of the parabolas y^2 = x and x^2 = y is Detailed Solution for Test: Conic Sections - 2 - Question 21 Test: Conic Sections - 2 - Question 22 The number of points on the X-axis which are at a distance c units (c < 3) from (2, 3) is Detailed Solution for Test: Conic Sections - 2 - Question 22 Let such a point be (x, 0). By the distance formula, √((x − 2)^2 + (3 − 0)^2) = c, so (x − 2)^2 + 9 = c^2. Since (x − 2)^2 ≥ 0, this requires c^2 ≥ 9, i.e. c ≥ 3. But we are given c < 3, so no such point exists: the number of points is 0. Test: Conic Sections - 2 - Question 23 The eccentricity of the conic 9x^2 − 16y^2 = 144 is Test: Conic Sections - 2 - Question 24 The eccentricity of 3x^2 + 4y^2 = 24 is Test: Conic Sections - 2 - Question 25 The angle between the tangents drawn from the origin to the circle (x − 7)^2 + (y + 1)^2 = 25 is Detailed Solution for Test: Conic Sections - 2 - Question 25 Let the equation of a tangent drawn from (0, 0) to the circle be y = mx. The perpendicular distance from the centre (7, −1) must equal the radius: |7m + 1|/√(m^2 + 1) = 5 ⇒ 49m^2 + 14m + 1 = 25m^2 + 25 ⇒ 24m^2 + 14m − 24 = 0 ⇒ 12m^2 + 7m − 12 = 0 ⇒ m1·m2 = −12/12 = −1. ∴ The required angle is π/2.
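A quick numerical cross-check of the Question 25 algebra (NumPy assumed; np.roots simply solves the quadratic for the two slopes):

```python
import numpy as np

# tangents y = m x from the origin to (x - 7)^2 + (y + 1)^2 = 25 satisfy
# |7m + 1| / sqrt(m^2 + 1) = 5, which simplifies to 12 m^2 + 7 m - 12 = 0
slopes = np.roots([12, 7, -12])
print(slopes)            # 3/4 and -4/3, in some order
print(np.prod(slopes))   # -1.0, so the tangents are perpendicular: angle pi/2
```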
2,065
5,992
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.25
4
CC-MAIN-2024-30
latest
en
0.846424
https://www.plati.ru/itm/solution-of-the-d1-option-11-dievskaya-va-malyshev-ia/2030595?lang=en-US
1,519,570,445,000,000,000
text/html
crawl-data/CC-MAIN-2018-09/segments/1518891816462.95/warc/CC-MAIN-20180225130337-20180225150337-00292.warc.gz
953,680,052
12,531
# Solution of the D1 Option 11 Dievskaya VA Malyshev IA Sold: 11 (last one 4 days ago) Refunds: 0 Content: d1-11.zip (37.3 kB) # Description The solution of problem D1, option 11. A boat of mass m = 50 kg is given an initial velocity v0 = 2.7 m/s. As the boat moves, the force of water resistance is proportional to its speed: R = −5v. Determine the speed of the boat at the time t = 10 seconds.
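The listing itself only states the problem, but the standard textbook approach is a first-order linear ODE: m dv/dt = −5v separates to v(t) = v0·e^(−kt/m). A quick Python sketch of that formula (my own check, not the seller's file):

```python
import math

m, v0, k, t = 50.0, 2.7, 5.0, 10.0   # mass, initial speed, drag coefficient, time
v = v0 * math.exp(-k * t / m)        # v(t) = v0 * e^(-kt/m), from m dv/dt = -kv
print(round(v, 2))                   # about 0.99 m/s after 10 s
```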
215
757
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.6875
3
CC-MAIN-2018-09
longest
en
0.798201
https://ccssmathanswers.com/into-math-grade-3-module-13-lesson-6-answer-key/
1,723,711,369,000,000,000
text/html
crawl-data/CC-MAIN-2024-33/segments/1722641278776.95/warc/CC-MAIN-20240815075414-20240815105414-00340.warc.gz
131,552,041
59,310
# Into Math Grade 3 Module 13 Lesson 6 Answer Key Represent and Name Fractions Greater Than 1 We included HMH Into Math Grade 3 Answer Key PDF Module 13 Lesson 6 Represent and Name Fractions Greater Than 1 to make students experts in learning maths. ## HMH Into Math Grade 3 Module 13 Lesson 6 Answer Key Represent and Name Fractions Greater Than 1 I Can identify fractions greater than 1 on a number line and write them in fraction form and as mixed numbers. Emilio cuts his pizzas into slices. Each slice is a fourth of a whole pizza. Emilio has 9 slices to sell. Show all the different amounts of pizza that Emilio can sell. Name each fraction that you show. There are 9 slices. Each slice is 1/4 of a whole pizza. The different amounts of pizza can be shown as: 1/4, 2/4, 3/4, 4/4, 5/4, 6/4, 7/4, 8/4, 9/4. – A fraction consists of two numbers (a/b), a numerator and a denominator. The number written at the top is the numerator, and the one written at the bottom is the denominator. – There are three major types of fractions: proper fractions (fractions less than one and greater than 0, with the numerator less than the denominator), improper fractions (fractions greater than or equal to one, with the numerator greater than or equal to the denominator), and mixed fractions (a combination of a whole number and a proper fraction). Turn and Talk The numerator of a fraction is greater than the denominator. What does that tell you about the fraction? Answer: Not all numerators are less than the denominator. Sometimes the numerator is greater than the denominator, and then the fraction is called an improper fraction. – Suppose x/y is an improper fraction, such that x > y; an improper fraction is therefore always greater than one. Examples: 13/4; 25/20; 9/5 and so on… Build Understanding Question 1. Jasmine has a stack of 7 waffle halves. How many plates can Jasmine fill with 1 whole waffle? Will any waffle halves be left over? Draw to show the waffles. A. How many plates does Jasmine fill with whole waffles? Answer: Jasmine can fill 3 plates with whole waffles. The number of waffle halves = 7, and 1 whole waffle = 2 halves. Each plate gets 2 halves, so 6 of the 7 halves make 3 whole waffles and fill 3 plates. B. How many halves are left over? Answer: The total number of waffle halves = 7. Of those, 6 are used to fill the 3 plates, so 7 − 6 = 1 half is left over. C. What fraction greater than 1 and mixed number can you write to represent the waffles that Jasmine serves? ________ = _______ wholes + ________ leftover = ________ Answer: The total number of waffle halves = 7, the number of whole waffles = 3, and the remaining half waffles = 1. The fraction is 7/2, which we need to convert to a mixed number. There are some steps to follow: Step 1: Find the whole number. Calculate how many times the denominator goes into the numerator. To do that, divide 7 by 2 and keep only what is to the left of the decimal point: 7/2 = 3.500 = 3. Step 2: Find a new numerator. Multiply the answer from Step 1 by the denominator and deduct that from the original numerator: 7 − (2 × 3) = 1. Step 3: Get a solution. Keep the original denominator and use the answers from Step 1 and Step 2 to get the answer. 7/2 as a mixed number is: 3 1/2. Connect to Vocabulary A fraction greater than 1 has a numerator greater than its denominator. 
$$\frac{5}{3}$$ A mixed number is a number greater than 1 represented by a whole number and a fraction. Read: four and three eighths Question 2. To make a shirt, Luc needs $$\frac{5}{3}$$ yards of cloth. Complete the number line to show thirds. A. How many thirds are in 1 yard? Answer: There are 3 thirds in 1 yard. From 0 to 1 is one complete yard, and between 0 and 1 the marks are 1/3, 2/3, and 3/3, which is nothing but 1. B. How many whole yards does Luc need? Circle the fraction on the number line that shows this. Answer: Luc needs 1 whole yard, so circle 3/3 on the number line; the fraction 5/3 itself lies between 1 and 2. C. Circle the section of the number line which represents the amount that Luc still needs. Answer: Luc needs 5/3 yards of cloth, so circle the section between 1 (= 3/3) and 5/3; that extra 2/3 yard is what he still needs beyond the whole yard. D. Write a mixed number to represent $$\frac{5}{3}$$. Answer: 5/3 is an improper fraction. Not all numerators are less than the denominator; when the numerator is greater than the denominator, the fraction is called an improper fraction. – Suppose x/y is an improper fraction, such that x > y; an improper fraction is therefore always greater than one. Step 1: Find the whole number. Calculate how many times the denominator goes into the numerator. To do that, divide 5 by 3 and keep only what is to the left of the decimal point: 5/3 = 1, which is the quotient. Step 2: Find a new numerator. Multiply the answer from Step 1 by the denominator and deduct that from the original numerator: 5 − (3 × 1) = 2 (remainder). Step 3: Get a solution. The mixed fraction can be written as quotient (remainder/divisor). Keep the original denominator and use the answers from Step 1 and Step 2 to get the answer. 5/3 as a mixed number is: 1 2/3. Question 3. Nisa's egg cartons each hold 8 eggs. Nisa has 17 eggs. How many cartons of eggs does Nisa fill? A. Draw to show the eggs in Nisa's cartons. The total number of eggs Nisa has = 17. The number of eggs each carton holds = 8. The number of cartons of eggs Nisa fills = X. X = 17/8. X = 2.125. B. How many whole cartons does Nisa fill completely? In each carton, 8 eggs will be filled. There are 17 eggs, so 16 eggs fill 2 cartons, and the remaining egg is placed in the third carton. Therefore, Nisa fills 2 whole cartons completely. C. How many eggs are left over? What fraction of a carton is left over? Answer: 1 egg is left over. The fraction of a carton = 1/8. In each carton, 8 eggs will be filled; the remaining egg is placed in the new carton. D. Write the number of cartons Nisa fills as a mixed number. Nisa fills _________ cartons of eggs. The fraction is 17/8, which is an improper fraction. Not all numerators are less than the denominator; when the numerator is greater than the denominator, the fraction is called an improper fraction. – Suppose x/y is an improper fraction, such that x > y; an improper fraction is therefore always greater than one. Step 1: Find the whole number. Calculate how many times the denominator goes into the numerator. To do that, divide 17 by 8 and keep only what is to the left of the decimal point: 17/8 = 2, which is the quotient. Step 2: Find a new numerator. Multiply the answer from Step 1 by the denominator and deduct that from the original numerator. 
17-(8*2)=1 (remainder) Step 3: Get a solution The mixed fraction can be written as a quotient (remainder/divisor) Keep the original denominator and use the answers from Step 1 and Step 2 to get the answer. 17/8 as a mixed number is: 2 1/8 Therefore, Nisa fills 2 1/8 cartons of eggs. Turn and Talk How many more eggs does Nisa need to fill her partly-filled carton? Explain how you know. Answer: 7 eggs Nisa need to fill her partly-filled carton. Explanation: We already know that each carton can be filled with 8 eggs and Nisa has 17 eggs. In 2 whole cartons, 16 eggs are filled completely and the remaining egg is filled in the next carton. Out of 8 places, 1 place is filled with one egg and the remaining 7 need to fill. Check Understanding Question 1. Shel has $$\frac{11}{8}$$ pizzas to sell. How much pizza does he have? Write the mixed number. __________ pizzas 11/8 is an improper fraction. Not all the numerators will be less than the denominator in fractions. Sometimes the numerator will be greater than the denominator. If the numerator is greater than the denominator, then the fraction is called an improper fraction. – Suppose, x/y is an improper fraction, such that x > y. It is, therefore, the improper fraction is always greater than one Step 1: Find the whole number Calculate out how many times the denominator goes into the numerator. To do that, divide 11 by 8 and keep only what is to the left of the decimal point: 11/8=1 which is the quotient Step 2: Find a new numerator Multiply the answer from Step 1 by the denominator and deduct that from the original numerator. 11-(8*1)=3 (remainder) Step 3: Get a solution The mixed fraction can be written as a quotient (remainder/divisor) Keep the original denominator and use the answers from Step 1 and Step 2 to get the answer. 11/8 as a mixed number is: 1 3/8 Question 2. Mae swims $$\frac{1}{2}$$ mile each day. How many miles does Mae swim in 5 days? Complete the number line to show the distance and write the distance as a mixed number. Mae swims _______ miles. The miles Mae swims each day=1/2 The number of miles does Mae swim in 5 days=X For 1 day 1/2 mile; for 5 days X mile. X=5/2 mile 5/2 is an improper fraction. Not all the numerators will be less than the denominator in fractions. Sometimes the numerator will be greater than the denominator. If the numerator is greater than the denominator, then the fraction is called an improper fraction. – Suppose, x/y is an improper fraction, such that x > y. It is, therefore, the improper fraction is always greater than one Step 1: Find the whole number Calculate out how many times the denominator goes into the numerator. To do that, divide 5 by 2 and keep only what is to the left of the decimal point: 5/2=2 which is the quotient Step 2: Find a new numerator Multiply the answer from Step 1 by the denominator and deduct that from the original numerator. 5-(2*2)=1 (remainder) Step 3: Get a solution The mixed fraction can be written as a quotient (remainder/divisor) Keep the original denominator and use the answers from Step 1 and Step 2 to get the answer.5/2 as a mixed number is: 2 1/2 Question 3. Use Repeated Reasoning The square represents 1 whole and each triangle represents $$\frac{1}{2}$$ of a whole. How would you represent 3$$\frac{1}{2}$$? 3$$\frac{1}{2}$$: It is in mixed form. Now convert it into fraction form. 3 1/2 can be written as 7/2 Explanation: – we multiply the whole number by the denominator. 3*2=6 – Then, we add the numerator to the answer we got. 
6+1=7 – finally, to get the solution, we keep the original denominator and make the numerator the answer. Thus, 3 1/2 as an improper fraction is 7/2. Question 4. Use Structure Jerry has 14 lemons that are packed in bags of 6 lemons. How many bags of lemons does Jerry have? Write a mixed number. Answer: The number of lemons Jerry has = 14, and each bag holds 6 lemons. The number of bags = X. X = 14/6, which simplifies to 7/3, an improper fraction. Now this 7/3 should be converted into mixed form: Not all numerators are less than the denominator; when the numerator is greater than the denominator, the fraction is called an improper fraction. – Suppose x/y is an improper fraction, such that x > y; an improper fraction is therefore always greater than one. Step 1: Find the whole number. Calculate how many times the denominator goes into the numerator. To do that, divide 7 by 3 and keep only what is to the left of the decimal point: 7/3 = 2, which is the quotient. Step 2: Find a new numerator. Multiply the answer from Step 1 by the denominator and deduct that from the original numerator: 7 − (3 × 2) = 1 (remainder). Step 3: Get a solution. The mixed fraction can be written as quotient (remainder/divisor). Keep the original denominator and use the answers from Step 1 and Step 2 to get the answer. 14/6 = 7/3 as a mixed number is: 2 1/3. Jerry has 2 1/3 bags of lemons. Question 5. Nya needs string that is nine-sixths feet long. Complete the number line to show the length of string that Nya needs. Write a mixed number. Nya needs ________ feet of string. Answer: 9/6 = 3/2, which is an improper fraction. Not all numerators are less than the denominator; when the numerator is greater than the denominator, the fraction is called an improper fraction. – Suppose x/y is an improper fraction, such that x > y; an improper fraction is therefore always greater than one. Step 1: Find the whole number. Calculate how many times the denominator goes into the numerator. To do that, divide 3 by 2 and keep only what is to the left of the decimal point: 3/2 = 1, which is the quotient. Step 2: Find a new numerator. Multiply the answer from Step 1 by the denominator and deduct that from the original numerator: 3 − (2 × 1) = 1 (remainder). Step 3: Get a solution. The mixed fraction can be written as quotient (remainder/divisor). Keep the original denominator and use the answers from Step 1 and Step 2 to get the answer. 3/2 as a mixed number is: 1 1/2. She needs 1 1/2 feet of string. Write the fraction as a mixed number. Question 6. $$\frac{5}{4}$$ Answer: 5/4 is an improper fraction. Not all numerators are less than the denominator; when the numerator is greater than the denominator, the fraction is called an improper fraction. – Suppose x/y is an improper fraction, such that x > y; an improper fraction is therefore always greater than one. Step 1: Find the whole number. Calculate how many times the denominator goes into the numerator. To do that, divide 5 by 4 and keep only what is to the left of the decimal point: 5/4 = 1.2500 = 1. Step 2: Find a new numerator. Multiply the answer from Step 1 by the denominator and deduct that from the original numerator: 5 − (4 × 1) = 1. Step 3: Get a solution. Keep the original denominator and use the answers from Step 1 and Step 2 to get the answer. 
5/4 as a mixed number is: 1 1/4 Question 7. $$\frac{7}{3}$$ 7/3 is an improper fraction Not all the numerators will be less than the denominator in fractions. Sometimes the numerator will be greater than the denominator. If the numerator is greater than the denominator, then the fraction is called an improper fraction. – Suppose, x/y is an improper fraction, such that x > y. It is, therefore, the improper fraction is always greater than one Step 1: Find out the whole number Calculate out how many times the denominator goes into the numerator. To do that, divide 7 by 3 and keep only what is to the left of the decimal point: 7/3=2.333=2 Step 2: Find a new numerator Multiply the answer from Step 1 by the denominator and deduct that from the original numerator. 7-(3*2)=1 Step 3: Get a solution Keep the original denominator and use the answers from Step 1 and Step 2 to get the answer. 7/3 as a mixed number is: 2 1/3 Question 8. $$\frac{10}{8}$$ 10/8 is an improper fraction Not all the numerators will be less than the denominator in fractions. Sometimes the numerator will be greater than the denominator. If the numerator is greater than the denominator, then the fraction is called an improper fraction. – Suppose, x/y is an improper fraction, such that x > y. It is, therefore, the improper fraction is always greater than one Step 1: Find out the whole number Calculate out how many times the denominator goes into the numerator. To do that, divide 10 by 8 and keep only what is to the left of the decimal point: 10/8=1 Step 2: Find a new numerator Multiply the answer from Step 1 by the denominator and deduct that from the original numerator. 10-(8*1)=2 Step 3: Get a solution Keep the original denominator and use the answers from Step 1 and Step 2 to get the answer. 10/8 as a mixed number is: 1 2/8 Question 9. $$\frac{9}{4}$$ 9/4 is an improper fraction. Not all the numerators will be less than the denominator in fractions. Sometimes the numerator will be greater than the denominator. If the numerator is greater than the denominator, then the fraction is called an improper fraction. – Suppose, x/y is an improper fraction, such that x > y. It is, therefore, the improper fraction is always greater than one Step 1: Find out the whole number Calculate out how many times the denominator goes into the numerator. To do that, divide 9 by 4 and keep only what is to the left of the decimal point: 9/4=2 Step 2: Find a new numerator Multiply the answer from Step 1 by the denominator and deduct that from the original numerator. 9-(4*2)=1 Step 3: Get a solution Keep the original denominator and use the answers from Step 1 and Step 2 to get the answer. 9/4 as a mixed number is: 2 1/4 I’m in a Learning Mindset! Is there anything still unclear to me about mixed numbers after finishing this lesson? Explain.
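One way to check any of the conversions above: all three steps amount to a single integer division with remainder, which is exactly what Python's built-in divmod computes. A small sketch (the helper name is illustrative):

```python
def to_mixed(numerator, denominator):
    """Steps 1-3 of the lesson: whole number, new numerator, same denominator."""
    whole, remainder = divmod(numerator, denominator)
    return whole, remainder, denominator

print(to_mixed(17, 8))   # (2, 1, 8)  -> 2 1/8, matching Question 3D
print(to_mixed(14, 6))   # (2, 2, 6)  -> 2 2/6 = 2 1/3 bags of lemons
print(to_mixed(9, 4))    # (2, 1, 4)  -> 2 1/4, matching Question 9
```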
4,332
16,848
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.8125
5
CC-MAIN-2024-33
latest
en
0.891012
https://nrich.maths.org/public/leg.php?code=-93&cl=3&cldcmpid=387
1,432,706,011,000,000,000
text/html
crawl-data/CC-MAIN-2015-22/segments/1432207928907.65/warc/CC-MAIN-20150521113208-00118-ip-10-180-206-219.ec2.internal.warc.gz
895,447,747
8,545
# Search by Topic #### Resources tagged with Making and proving conjectures similar to Sixational: Filter by: Content type: Stage: Challenge level: ### There are 37 results Broad Topics > Using, Applying and Reasoning about Mathematics > Making and proving conjectures ### Janine's Conjecture ##### Stage: 4 Challenge Level: Janine noticed, while studying some cube numbers, that if you take three consecutive whole numbers and multiply them together and then add the middle number of the three, you get the middle number. . . . ### DOTS Division ##### Stage: 4 Challenge Level: Take any pair of two digit numbers x=ab and y=cd where, without loss of generality, ab > cd. Form two 4 digit numbers r=abcd and s=cdab and calculate: {r^2 - s^2} /{x^2 - y^2}. ### On the Importance of Pedantry ##### Stage: 3, 4 and 5 An introduction to how patterns can be deceiving, and what is and is not a proof. ### To Prove or Not to Prove ##### Stage: 4 and 5 A serious but easily readable discussion of proof in mathematics with some amusing stories and some interesting examples. ### Rotating Triangle ##### Stage: 3 and 4 Challenge Level: What happens to the perimeter of triangle ABC as the two smaller circles change size and roll around inside the bigger circle? ### Helen's Conjecture ##### Stage: 3 Challenge Level: Helen made the conjecture that "every multiple of six has more factors than the two numbers either side of it". Is this conjecture true? ### Multiplication Square ##### Stage: 3 Challenge Level: Pick a square within a multiplication square and add the numbers on each diagonal. What do you notice? ### Always a Multiple? ##### Stage: 3 Challenge Level: Think of a two digit number, reverse the digits, and add the numbers together. Something special happens... ### A Little Light Thinking ##### Stage: 4 Challenge Level: Here is a machine with four coloured lights. Can you make two lights switch on at once? Three lights? All four lights? ### Loopy ##### Stage: 4 Challenge Level: Investigate sequences given by $a_n = \frac{1+a_{n-1}}{a_{n-2}}$ for different choices of the first two terms. Make a conjecture about the behaviour of these sequences. Can you prove your conjecture? ### What's Possible? ##### Stage: 4 Challenge Level: Many numbers can be expressed as the difference of two perfect squares. What do you notice about the numbers you CANNOT make? ### Triangles Within Triangles ##### Stage: 4 Challenge Level: Can you find a rule which connects consecutive triangular numbers? ### Triangles Within Pentagons ##### Stage: 4 Challenge Level: Show that all pentagonal numbers are one third of a triangular number. ### Multiplication Arithmagons ##### Stage: 4 Challenge Level: Can you find the values at the vertices when you know the values on the edges of these multiplication arithmagons? ### Triangles Within Squares ##### Stage: 4 Challenge Level: Can you find a rule which relates triangular numbers to square numbers? ### Polycircles ##### Stage: 4 Challenge Level: Show that for any triangle it is always possible to construct 3 touching circles with centres at the vertices. Is it possible to construct touching circles centred at the vertices of any polygon? ### Problem Solving, Using and Applying and Functional Mathematics ##### Stage: 1, 2, 3, 4 and 5 Challenge Level: Problem solving is at the heart of the NRICH site. All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information. ### How Old Am I? 
##### Stage: 4 Challenge Level: In 15 years' time my age will be the square of my age 15 years ago. Can you work out my age, and when I had other special birthdays? ### Happy Numbers ##### Stage: 3 Challenge Level: Take any whole number between 1 and 999, add the squares of the digits to get a new number. Make some conjectures about what happens in general. ### Pentagon ##### Stage: 4 Challenge Level: Find the vertices of a pentagon given the midpoints of its sides. ##### Stage: 4 Challenge Level: Explore the relationship between quadratic functions and their graphs. ### Close to Triangular ##### Stage: 4 Challenge Level: Drawing a triangle is not always as easy as you might think! ### Few and Far Between? ##### Stage: 4 and 5 Challenge Level: Can you find some Pythagorean Triples where the two smaller numbers differ by 1? ### Pericut ##### Stage: 4 and 5 Challenge Level: Two semicircles sit on the diameter of a semicircle centre O of twice their radius. Lines through O divide the perimeter into two parts. What can you say about the lengths of these two parts? ##### Stage: 4 Challenge Level: Points D, E and F are on the sides of triangle ABC. Circumcircles are drawn to the triangles ADE, BEF and CFD respectively. What do you notice about these three circumcircles? ### Exploring Simple Mappings ##### Stage: 3 Challenge Level: Explore the relationship between simple linear functions and their graphs. ### Curvy Areas ##### Stage: 4 Challenge Level: Have a go at creating these images based on circles. What do you notice about the areas of the different sections? ##### Stage: 4 Challenge Level: The points P, Q, R and S are the midpoints of the edges of a convex quadrilateral. What do you notice about the quadrilateral PQRS as the convex quadrilateral changes? ### Dice, Routes and Pathways ##### Stage: 1, 2 and 3 This article for teachers discusses examples of problems in which there is no obvious method but in which children can be encouraged to think deeply about the context and extend their ability to. . . . ##### Stage: 4 Challenge Level: The points P, Q, R and S are the midpoints of the edges of a non-convex quadrilateral. What do you notice about the quadrilateral PQRS and its area? ### Charlie's Mapping ##### Stage: 3 Challenge Level: Charlie has created a mapping. Can you figure out what it does? What questions does it prompt you to ask? ### Alison's Mapping ##### Stage: 4 Challenge Level: Alison has created two mappings. Can you figure out what they do? What questions do they prompt you to ask? ### Center Path ##### Stage: 3 and 4 Challenge Level: Four rods of equal length are hinged at their endpoints to form a rhombus. The diagonals meet at X. One edge is fixed, the opposite edge is allowed to move in the plane. Describe the locus of. . . . ### Tri-split ##### Stage: 4 Challenge Level: A point P is selected anywhere inside an equilateral triangle. What can you say about the sum of the perpendicular distances from P to the sides of the triangle? Can you prove your conjecture? ### Consecutive Negative Numbers ##### Stage: 3 Challenge Level: Do you notice anything about the solutions when you add and/or subtract consecutive negative numbers? ### Epidemic Modelling ##### Stage: 4 and 5 Challenge Level: Use the computer to model an epidemic. Try out public health policies to control the spread of the epidemic, to minimise the number of sick days and deaths.
1,592
7,038
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.25
4
CC-MAIN-2015-22
longest
en
0.832326
https://cherryblossomlove.com/how-to-sew/how-many-watts-does-a-sewing-machine-use.html
1,656,466,726,000,000,000
text/html
crawl-data/CC-MAIN-2022-27/segments/1656103619185.32/warc/CC-MAIN-20220628233925-20220629023925-00302.warc.gz
212,440,897
17,970
# How many watts does a sewing machine use? ## Can a generator run a sewing machine? If it's a simple sewing machine, with just a motor to worry about, it will almost certainly run just fine on a Modified Sine Wave inverter. ## What is the wattage of an industrial sewing machine? Consew Industrial Sewing Machine Servo Motor – 550 Watts, 110 Volts. ## Do you need a good source of power while using the sewing machine? Ensure that the power source is good enough for the machine. Always take out the power plug from the outlet when not in use. ## Are sewing machines dual voltage? Like other electronics, certain model sewing machines let you use them both in America, which uses 110 Volts (V) of electricity, and in Europe, which uses 220 to 240V. This electrical ability is known as dual voltage. Your sewing machine works on 110 volts of electricity or 220 volts. ## How many watts does a fridge use? The average home refrigerator uses 350-780 watts. Refrigerator power usage depends on different factors, such as what kind of fridge you own, its size and age, the kitchen's ambient temperature, the type of refrigerator, and where you place it. ## How much does 100 watts per hour cost? Common watts to kilowatt-hour conversions for a 1-hour time period, along with the estimated cost of electricity assuming a price of $0.12 per kWh: | Power in Watts | Energy in Kilowatt-hours | Electricity Cost | | --- | --- | --- | | 100 W | 0.1 kWh | $0.012 per hr | | 200 W | 0.2 kWh | $0.024 per hr | ## How many watts does a Brother sewing machine use? If your Brother sewing machine runs on 120 volts, it needs about 0.65 amps to operate to its full potential. So to figure out the wattage, all you do is multiply 120 by 0.65, which equals about 78 watts.
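The arithmetic behind the table is simple enough to sketch in a few lines of Python (the $0.12/kWh rate comes from the table above; the helper name is mine):

```python
def cost_per_hour(watts, usd_per_kwh=0.12):
    """Hourly running cost: convert watts to kilowatts, then multiply by the rate."""
    return watts / 1000 * usd_per_kwh

print(cost_per_hour(100))   # 0.012 -> $0.012/hr, matching the table
print(cost_per_hour(200))   # 0.024 -> $0.024/hr
print(0.65 * 120)           # amps x volts = 78 watts for the Brother machine
print(cost_per_hour(78))    # roughly $0.009/hr to run it
```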
441
1,873
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.578125
3
CC-MAIN-2022-27
latest
en
0.904553
https://breedingbetterbananas.org/dendrobium-phalaenopsis-hwl/page.php?a5adb8=tridiagonal-matrix-lu-decomposition
1,621,192,601,000,000,000
text/html
crawl-data/CC-MAIN-2021-21/segments/1620243991178.59/warc/CC-MAIN-20210516171301-20210516201301-00177.warc.gz
162,745,596
44,908
# tridiagonal matrix lu decomposition Golub and C. Van Loan, Matrix Computations, Third Edition, Johns Hopkins University Press, (1996) G. Meurant, A review of the inverse of tridiagonal and block tridiagonal matrices, SIAM J. Matrix Anal. The function pregmres in the software distribution approximates the solution to Ax = b using Equation 21.29.Remark 21.5Algorithm 21.6 will fail if there is a zero on the diagonal of U. The performance of the method is analytically estimated based on the number of elementary multiplicative operations for its parallel and serial parts. SPMD style OpenMP parallelization scales well for the 813 grid, but shows degradation due to the serial component in still unoptimized subroutines. The lu component combines the matrices L and U, the p component specifies the permutation of the rows of the matrix required (none in this example), and the 1 component is a condition number of the matrix. Partial pivot with row exchange is selected. [0-9]+ × [0-9]+−15, niter = 20, the solution was obtained using gmresb and mpregmres. 2. A tri-diagonal matrix is one with non-zero entries along the main diagonal, and one diagonal above and below the main one (see the figure). It is recommended that, in practice, mpregmres be used rather than pregmres.Example 21.9The 903 × 903 nonsymmetric matrix, DK01R, in Figure 21.11 was used to solve a computational fluid dynamics problem. [0-9]+ × [0-9]+8, so it is ill-conditioned. 287-320]. In numerical analysis and linear algebra, LU decomposition (where ‘LU’ stands for ‘lower upper’, and also called LU factorization) factors a matrix as the product of a lower triangular matrix and an upper triangular matrix. Another preconditioning strategy that has proven successful when there are a few isolated extremal eigenvalues is Deflation [7]. However, the 1's are useless as with the zeroes, they just waste space so I require the algorithm return the following tridiagonal matrix to act as the LU decomposition: b_0 c_0 0 0 a_0 b_1 c_1 0 0 a_1 b_2 c_2 0 0 a_2 b_3 I've managed to obtain the following equations: LU-Factorization, and Cholesky Factorization 3.1 Gaussian Elimination and LU-Factorization Let A beann×n matrix, let b ∈ Rn beann-dimensional vector and assume that A is invertible. Resolve when the right sides of each equation are replaced by 10, 10, and 10, respectively. We will not discuss this, but the interested reader will find a presentation in Ref. View wiki source for this page without editing. If we have a system of $Ax = f$ and assume pivoting is not used, then most of the multipliers $m_{ik} = 0$. 2. If we have a system of $Ax = f$ and assume pivoting is not used, then most of the multipliers $m_{ik} = 0$. Symmetric Positive De nite Matrices I A2R n is called symmetric if A= AT. Usual serial LU decomposition of a single M × M tridiagonal system requires 8 M floating point operations and a temporary storage array of M elements [ Press et al. 287-296]. Computes an LU factorization of a general tridiagonal matrix, using partial pivoting with row interchanges: sgttrs, dgttrs cgttrs, zgttrs: Solves a general tridiagonal system of linear equations AX=B, A**T X=B or A**H X=B, using the LU factorization computed by … A modified factorization algorithm for the solution of a linear system with a symmetric tridiagonal coefficient matrix is presented. We'll now study the algorithm of LU decomposition with a tridiagonal matrix A. Creative Commons Attribution-ShareAlike 3.0 License. Something does not work as expected? Archived . 
The LU decomposition algorithm for solving this set is. or Hockney and Eastwood ]. We first consider a symmetric matrix A∈Rn×n with linear system Au=f,f∈Rn where u∈Rn is to be determined. Similar topics can also be found in the Linear Algebra section of the site. LU Factorization method, also known as LU decomposition method, is a popular matrix decomposing method of numerical analysis and engineering science. So we start with the tridiagonal matrix from before. It is recommended that, in practice, mpregmres be used rather than pregmres. It is hoped that if M = LU, then M−1A will have a smaller condition number than A. Algorithm 21.6 describes the incomplete LU decomposition. If A is an m-by-n matrix that can be reduced to row echelon form without requiring a permutation of rows then there exist a lower- triangular matrix L with is on the diagonal and an m-by-n row echelon matrix U such that A = LU. NLALIB: The function eigvechess implements Algorithm 18.6. This page is intended to be a part of the Numerical Analysis section of Math Online. In this article we will present a NumPy/SciPy listing, as well as a pure Python listing, for the LU Decomposition method, which is used in certain quantitative finance algorithms.. One of the key methods for solving the Black-Scholes Partial Differential Equation (PDE) model of options pricing is using Finite Difference Methods (FDM) to discretise the PDE and evaluate the solution numerically. [0-9]+ × [0-9]+−16. Comparing gmresb and mpregmres. The decomposition method which makes the parallel solution of the block-tridiagonal matrix systems possible is presented. The non-zero part of the matrix consists of a set of diagonals and includes the main diagonal. Matrix A may be real or complex. This method factors a matrix as a product of lower triangular and upper triangular matrices. 2 Notation and Algorithm. where Z and Y are suitable subspaces of dimension n × m. We solve the system Au = f using deflation. Matlab implements LU factorization by using the function lu and may produce a matrix that is not strictly a lower triangular matrix. A (i + 1 : n, i + 1 : n) = A (i + 1 : n, i + 1 : n) − A (i + 1 : n, i) A (i, i + 1 : n). In Sectio 1 w*ne give a number of estimation methods applicable to both classes of matrices. Suppose K is a suitable preconditioner of A, then (5) can be replaced by: solve u¯ from K−1PA u¯ = K−1Pf, and form Q u˜, or solve v˜ from PAK−1 v˜ = Pf, and form QK− 1 v˜. William Ford, in Numerical Linear Algebra with Applications, 2015. Hence y21=2.25, etc. where L is a lower triangular matrix with a leading diagonal of ones and U is an upper triangular matrix. Compute factors L and U so that if element aij ≠ 0 then the element at index (i, j) of A − LU is zero. Xin-She Yang, in Engineering Mathematics with Examples and Applications, 2017. Time its LU decomposition using ludecomp developed in Chapter 11, and then time its decomposition using luhess. % Output: lower-triangular matrix L and upper-triangular matrix U such that A = LU. We use cookies to help provide and enhance our service and tailor content and ads. In this article we will present a NumPy/SciPy listing, as well as a pure Python listing, for the LU Decomposition method, which is used in certain quantitative finance algorithms.. One of the key methods for solving the Black-Scholes Partial Differential Equation (PDE) model of options pricing is using Finite Difference Methods (FDM) to discretise the PDE and evaluate the solution numerically. 
Such a matrix is known as a Tridiagonal Matrix is it in a sense contains three diagonals. Automatic parallelization, PFA, scales comparably to SPMD style OpenMP parallelism, but performs poorly for larger scale sizes and when more than 8 processors are used. Modified LU decomposition algorithm for a symmetric, tridiagonal matrix. Click here to toggle editing of individual sections of the page (if possible). • The MatrixDecomposition command can perform the following decompositions: LU, PLU, LU Tridiagonal, PLU Scaled, LDU, LDLt and Cholesky. Consider an $n \times n$ matrix $A$ in the following form: Such a matrix is known as a Tridiagonal Matrix is it in a sense contains three diagonals. If you want to discuss contents of this page - this is the easiest way to do it. The complete Y matrix is, Finally solving UX=Y by back substitution gives. Thus row 1 of T(1) has a unit entry in column 1 and zero elsewhere. In Matlab compute using [L,U]=lu(S). An LU decomposition of a matrix A is a product of a lower-triangular matrix L and an upper-triangular matrix U. but that the decomposition can be used if the first and third equations are interchanged. Comparing gmresb and mpregmresiterrTime‖x_DK01R−x‖2Solution supplied−6.29 × 10−16−−gmresb−1(failure)5.39 × 10−106.639.93 × 10−11mpregmres11.04 × 10−150.915.20 × 10−17In a second experiment, the function gmresb required 13.56 s and 41 iterations to attain a residual of 8. An LU decomposition of a matrix A is a product of a lower-triangular matrix L and an upper-triangular matrix U. Note that PAZ = 0, so that PA has m zero-eigenvalues and the effective condition number is: κeffPA=λnAλm+1A. Now we consider a generalization of the projection P for a nonsymmetric matrix A∈Rn×n. Hence row 2 of T(1) is [2/310]. >> tic;[L1, U1, P1] = ludecomp(EX18_17);toc. View and manage file attachments for this page. Append content without editing the whole page source. Full Record; Other Related Research; Abstract. Faster LU decomposition algorithm for tridiagonal, symmetric, Toeplitz matrices? Lecture Notes for Mat-inf 4130, 2017 Tom Lyche June 16, 2017 [9, p. 630]). Appl., v 13 n 3, (1992), pp 707–728 Because U is an upper triangular matrix, this equation can also be solved efficiently by back substitution. Thus, Pu is an eigenvector of A corresponding to eigenvalue λ. We now show how the Matlab function lu solves the example based on the matrix given in (2.15): To obtain the L and U matrices, we must use that Matlab facility of assigning two parameters simultaneously as follows: Note that the L1 matrix is not in lower triangular form, although its true form can easily be deduced by interchanging rows 2 and 3 to form a triangle. Replacing lu by chol gives a timing of 10.067633 seconds-- very … C. Vuik, ... F.J. Vermolen, in Parallel Computational Fluid Dynamics 2001, 2002, We use preconditioners based on an incomplete block LU decomposition [6]. The matrix A can be decomposed so that. G.R. The approximate condition number of the matrix is 2. The matrix A can be decomposed so that. The MATLAB function luhess in the software distribution implements the algorithm. Table 21.1 gives the results of comparing the solutions from mpregmres and gmresb to x_DK01R. Edited: Jan on 3 Apr 2016 Accepted Answer: Jan. How can help to a program LU decomposition of tridiagonal matrix 0 Comments. This probably will help There is a function creates_tridiagonal which will create tridiagonal matrix. 
There are two main types of method for solving simultaneous equations: direct methods and iterative methods. For time-dependent problems, time stepping is necessary; for nonlinear problems, another iterative loop is needed. LU decomposition means decomposing a square matrix into a lower triangular matrix and an upper triangular matrix [64; 9, p. 630]. For example, 1 · y11 = b31 = 9, so that y11 = 9. Results of comparing the solutions from mpregmres and gmresb to x_DK01R are shown in Figure 21.11; forming the decomposition of a symmetric matrix A ∈ R^(n×n) in the general way is inefficient. Deflation leaves the rest of the spectrum untouched; inverse iteration can result in the eigenvector corresponding to the chosen eigenvalue (tol is the error tolerance of the eigenvector). In the elimination loop, a zero pivot aborts the factorization:

print 'The algorithm has encountered a zero pivot.'
% Replace the elements in column i, rows i+1 to n, by the multipliers aji/aii
% Modify the elements in rows i+1 to n, columns i+1 to n, by subtracting
LU decomposition can be viewed as the matrix form of Gaussian elimination with partial pivoting. Another helper function converts a matrix into the diagonal ordered form requested by SciPy's solve_banded function. A matrix is banded if its non-zero elements are confined to a set of diagonals that includes the main diagonal; a matrix can have more than one LU decomposition with pivoting, and if a pivot is nearly zero the problem is ill-conditioned. The cost of the method is analytically estimated from the number of elementary multiplicative operations in its parallel and serial parts, based on the Thomas algorithm, which plays a very important role in accelerating the convergence process. The Python routine c, d, e = lu_decomp3(a) performs the LU decomposition of a tridiagonal matrix a = [c\d\e], where {c, d, e} are the subdiagonal, diagonal, and first superdiagonal, respectively; the determinant can then be computed with good accuracy from the factors. To solve a*x = b, where [a] has been factored into a lower triangular L and an upper triangular U, forward substitution is followed by back substitution. The right sides of each equation are replaced by 1 and −1, respectively. Given an isolated approximation to a (possibly complex) eigenvalue σ, inverse iteration requires repeatedly solving a linear system; in complex arithmetic, eigvechess will compute a complex eigenvector when given a complex eigenvalue σ. The powers of a matrix are easily determined if we know the spectral decomposition. In one experiment m = 300, and the solution was obtained using gmresb and mpregmres [14]. Direct methods operate on fully assembled system equations; implicit time-stepping approaches are usually more stable numerically but less efficient computationally than explicit approaches.
Further details are given in Chapter 3 and in the numerical experiments of Section 7, where the operation count is obtained from the multiplications and divisions. An incomplete LU decomposition can be computed with the MATLAB function ilu with p = 1, with partial pivoting if required, such that PĀ = LU, so Ā = PᵀLU. Givens rotations are applied in the transformation to upper Hessenberg form, and the approach extends to weakly diagonally dominant tridiagonal matrices. Clearly, preconditioned GMRES is superior to normal GMRES for this problem; failure of the iteration shows up as vector entries NaN or Inf. The implementation uses colon notation and the functions triu and tril, with the nested do loops arranged so that |L| = 1; the determinant is then obtained from the diagonal of U, and the backslash operator \ determines the solution. The companion routine lusolve3(c, d, e, b), where {c, d, e} are the diagonals of the decomposed matrix a = [c\d\e], obtains the solution by forward and back substitution. If u solves the deflated system, then v = Pu is an eigenvector of A; to apply the projection we need only compute Pᵀu. For deflation of the smallest eigenvalues, consider the case in which Z is spanned by the corresponding eigenvectors. In the numerical experiments the upper Hessenberg coefficient matrix is 500 × 500. Solving an equation system with a symmetric tridiagonal coefficient matrix in the general way wastes storage space; the LU decomposition of such a matrix should exploit its structure.
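Several of the snippets above refer to routines in the lu_decomp3/lusolve3 style. The following is a minimal Python/NumPy sketch of that idea — the Thomas algorithm, i.e. LU decomposition of a tridiagonal matrix stored as its three diagonals, with no pivoting. The function bodies here are our reconstruction under those conventions, not the original listing:

```python
import numpy as np

def lu_decomp3(a):
    """c, d, e = lu_decomp3(a).
    In-place LU decomposition of a tridiagonal matrix a = [c\\d\\e], where
    c, d, e are the subdiagonal, diagonal, and superdiagonal (no pivoting).
    """
    c, d, e = a                      # unpack the three diagonals
    n = len(d)
    for k in range(1, n):
        lam = c[k - 1] / d[k - 1]    # multiplier, stored in place of c
        d[k] = d[k] - lam * e[k - 1]
        c[k - 1] = lam
    return c, d, e

def lusolve3(c, d, e, b):
    """Solve LUx = b with factors from lu_decomp3 (forward, then back substitution)."""
    n = len(d)
    for k in range(1, n):            # forward substitution
        b[k] = b[k] - c[k - 1] * b[k - 1]
    b[n - 1] = b[n - 1] / d[n - 1]   # back substitution
    for k in range(n - 2, -1, -1):
        b[k] = (b[k] - e[k] * b[k + 1]) / d[k]
    return b

# Example: a 5x5 tridiagonal system with 3's on the diagonal
n = 5
c = np.full(n - 1, -1.0)   # subdiagonal
d = np.full(n, 3.0)        # main diagonal
e = np.full(n - 1, -1.0)   # superdiagonal
b = np.ones(n)
c, d, e = lu_decomp3([c, d, e])
x = lusolve3(c, d, e, b)
print(x)
```

Because the factors overwrite the three diagonals in place, no full n × n storage is needed — which is exactly the structural advantage the snippets above allude to.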
4,881
20,367
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.890625
3
CC-MAIN-2021-21
latest
en
0.879065
https://barisalcity.org/an-angle-whose-measure-is-greater-than-90-degrees/
1,656,928,964,000,000,000
text/html
crawl-data/CC-MAIN-2022-27/segments/1656104364750.74/warc/CC-MAIN-20220704080332-20220704110332-00472.warc.gz
165,077,032
5,122
In other words, an angle measuring between 0° and 90° is called an acute angle. ∠ABC in the given figure is an acute angle. ∠MON shown in the adjoining figure is equal to 60°; so ∠MON is an acute angle.

2. Right Angle: An angle whose measure is 90° is called a right angle. In other words, an angle which measures exactly 90° is called a right angle. Two lines that meet at a right angle are called perpendicular lines. In the above figure, ∠AOB is a right angle; in this case, we say that the arms OA and OB are perpendicular to each other. Therefore, ∠AOB shown in the adjoining figure is 90°, so ∠AOB is a right angle.

3. Obtuse Angle: An angle whose measure is greater than 90° but less than 180° is called an obtuse angle. In other words, an angle measuring between 90° and 180° is called an obtuse angle. ∠DOQ shown in the above figure is an obtuse angle.

4. Straight Angle: An angle whose measure is 180° is called a straight angle. In other words, an angle which measures exactly 180° is called a straight angle. ∠XOY shown in the above figure is a straight angle. A straight angle is equal to two right angles.

5. Reflex Angle: An angle whose measure is more than 180° but less than 360° is called a reflex angle. ∠AOB shown in the above figure is 210°, so ∠AOB is a reflex angle.

6. Zero Angle: An angle measuring 0° is called a zero angle. When the two arms of an angle lie on each other, a 0° angle is formed.

Comparison of Angles: An angle whose degree measure is greater than the degree measure of another angle is the greater angle. Thus, we can say that: acute angle …

Questions and Answers on Types of Angles:

I. Classify the given angles as acute, obtuse, right and straight.
(i) 158° (ii) 90° (iii) 36° (iv) 180° (v) 91°

Answers: (i) Obtuse Angle (ii) Right Angle (iii) Acute Angle (iv) Straight Angle (v) Obtuse Angle

Related topics: Angle; Interior and Exterior of an Angle; Measuring an Angle with a Protractor; Types of Angles; Pairs of Angles; Bisecting an Angle; Construction of Angles Using a Compass; Worksheet on Angles; Geometry Practice Test on Angles.
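The classification rules above translate directly into a small decision chain. Here is a Python sketch (the function name classify_angle is ours, not from the source) that reproduces the worked answers from question I:

```python
def classify_angle(deg):
    """Classify an angle by its degree measure, per the rules above."""
    deg = deg % 360          # normalize into [0, 360)
    if deg == 0:
        return "zero angle"
    if deg < 90:
        return "acute angle"
    if deg == 90:
        return "right angle"
    if deg < 180:
        return "obtuse angle"
    if deg == 180:
        return "straight angle"
    return "reflex angle"

for a in (158, 90, 36, 180, 91):
    print(a, "->", classify_angle(a))
# 158 -> obtuse angle, 90 -> right angle, 36 -> acute angle,
# 180 -> straight angle, 91 -> obtuse angle
```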
692
2,681
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.28125
4
CC-MAIN-2022-27
latest
en
0.891747
https://ilnumerics.net/apidoc/html/M_ILNumerics_ExtensionMethods_all__1_5.htm
1,656,486,415,000,000,000
text/html
crawl-data/CC-MAIN-2022-27/segments/1656103624904.34/warc/CC-MAIN-20220629054527-20220629084527-00123.warc.gz
358,229,164
5,103
ExtensionMethods.all(IndT) Method (ConcreteArray(Int32, Array(Int32), InArray(Int32), OutArray(Int32), RetArray(Int32), Storage(Int32)), BaseArray(IndT), Boolean)

ILNumerics Ultimate VS Documentation — ILNumerics, Technical Application Development

[numpy API] Tests whether all elements of A along specific dimensions are non-zero. [ILNumerics numpy Module]

Namespace: ILNumerics
Assembly: ILNumerics.numpy (in ILNumerics.numpy.dll) Version: 5.5.0.0 (5.5.7503.3146)

#### Syntax

public static RetLogical all<IndT>(
    this ConcreteArray<int, Array<int>, InArray<int>, OutArray<int>, RetArray<int>, Storage<int>> A,
    BaseArray<IndT> axes = null,
    bool keepdims = false
)
where IndT : struct, new(), IConvertible

#### Parameters

A
Type: ILNumerics.Core.Arrays.ConcreteArray(Int32, Array(Int32), InArray(Int32), OutArray(Int32), RetArray(Int32), Storage(Int32))
The source array. This will not be altered.

axes (Optional)
Type: ILNumerics.BaseArray(IndT)
[Optional] Dimensions of A to work along. Default: (null) considers all elements of A, reducing to a scalar.

keepdims (Optional)
Type: System.Boolean
[Optional] Accumulated dimensions remain in the resulting array. Default: (false) accumulated singleton dimensions are removed.

#### Type Parameters

IndT
Element type for the axes parameter. Must be numeric.

#### Return Value

Type: RetLogical
A logical array with the same shape as A, except for the dimensions listed in 'axes', which are reduced / expanded to length 1.

#### Usage Note

In Visual Basic and C#, you can call this method as an instance method on any object of type ConcreteArray(Int32, Array(Int32), InArray(Int32), OutArray(Int32), RetArray(Int32), Storage(Int32)). When you use instance method syntax to call this method, omit the first parameter. For more information, see Extension Methods (Visual Basic) or Extension Methods (C# Programming Guide).

#### Exceptions

ArgumentException — if elements of axes are smaller than -A.S.NumberOfDimensions or larger than or equal to A.S.NumberOfDimensions.

#### Remarks

All dimension indices in axes must be valid, non-virtual dimension indices of A. Elements of axes may be negative; the corresponding dimension indices are then counted from the end of the range of existing dimensions in A. An empty array A produces a scalar array with the default element value for the element data type (false). Depending on the value of keepdims, the array returned will have the same number of dimensions as A (keepdims = true) or a number of dimensions according to MinNumberOfArrayDimensions. [ILNumerics numpy Module]
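Since this page documents ILNumerics' numpy-style API, the axes/keepdims semantics can be illustrated with plain NumPy itself (this is Python for illustration only, not the C# call):

```python
import numpy as np

A = np.arange(12).reshape(3, 4) % 3   # a 3x4 array containing some zeros

print(np.all(A != 0))                         # axes omitted: reduce to a scalar
print(np.all(A != 0, axis=1))                 # reduce along dimension 1
print(np.all(A != 0, axis=1, keepdims=True))  # reduced dimension kept with length 1
print(np.all(A != 0, axis=-1))                # negative axes count from the end
```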
604
2,547
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.53125
3
CC-MAIN-2022-27
latest
en
0.617316
https://numbas.mathcentre.ac.uk/question/36731/troy-s-copy-of-two-number-addition.exam
1,571,051,985,000,000,000
text/plain
crawl-data/CC-MAIN-2019-43/segments/1570986653216.3/warc/CC-MAIN-20191014101303-20191014124303-00525.warc.gz
641,072,232
1,649
// Numbas version: exam_results_page_options {"name": "Troy's copy of Two Number Addition", "extensions": [], "custom_part_types": [], "resources": [], "navigation": {"allowregen": true, "showfrontpage": false, "preventleave": false}, "question_groups": [{"pickingStrategy": "all-ordered", "questions": [{"statement": " Add the 2 numbers in each question below (you can add context here too, i.e. Jill wants to know the total of her shopping). \n ", "name": "Troy's copy of Two Number Addition", "preamble": {"js": "", "css": ""}, "contributors": [{"profile_url": "https://numbas.mathcentre.ac.uk/accounts/profile/2497/", "name": "Sarah Dodds"}, {"profile_url": "https://numbas.mathcentre.ac.uk/accounts/profile/2651/", "name": "Troy Carroll"}], "variable_groups": [], "tags": [], "variables": {"b": {"definition": "random(-9..9#1)", "description": "", "group": "Ungrouped variables", "templateType": "randrange", "name": "b"}, "a": {"definition": "random(1..9#1)", "description": "", "group": "Ungrouped variables", "templateType": "randrange", "name": "a"}}, "functions": {}, "rulesets": {}, "variablesTest": {"maxRuns": 100, "condition": ""}, "type": "question", "parts": [{"showCorrectAnswer": true, "minValue": "a+b", "variableReplacementStrategy": "originalfirst", "correctAnswerStyle": "plain", "unitTests": [], "notationStyles": ["plain", "en", "si-en"], "variableReplacements": [], "customMarkingAlgorithm": "", "allowFractions": false, "maxValue": "a+b", "extendBaseMarkingAlgorithm": true, "prompt": " What is {a}+{b}? \n What is \$\\simplify[basic]{ {a} + {b} }\$? ", "correctAnswerFraction": false, "mustBeReducedPC": 0, "marks": 1, "showFeedbackIcon": true, "mustBeReduced": false, "scripts": {}, "type": "numberentry"}], "metadata": {"description": " The student will be given 2 integers between -9 and +9 to add together.
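The JSON above is a Numbas question definition: two randomized variables feed a single number-entry part whose minValue and maxValue both equal a+b. A rough Python sketch of the same generate-and-mark logic (the names here are ours; Numbas itself evaluates expressions like random(-9..9#1) internally):

```python
import random

def make_question():
    """Mirror the variable definitions: a = random(1..9#1), b = random(-9..9#1)."""
    a = random.randint(1, 9)
    b = random.randint(-9, 9)
    return f"What is {a}+{b}?", a + b

def mark(response, answer):
    """Number-entry part: 1 mark iff minValue <= response <= maxValue (both a+b)."""
    return 1 if response == answer else 0

prompt, answer = make_question()
print(prompt, "| expected:", answer, "| marks for correct:", mark(answer, answer))
```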
531
1,845
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.90625
3
CC-MAIN-2019-43
longest
en
0.540525
https://yerneni.com/scotland/gross-floor-area-calculation-example.php
1,675,735,857,000,000,000
text/html
crawl-data/CC-MAIN-2023-06/segments/1674764500368.7/warc/CC-MAIN-20230207004322-20230207034322-00677.warc.gz
984,901,561
9,720
# Gross floor area calculation example

### Planning Practice Note 85 'Applying the Commercial 3 Zone' gross floor area town planning - cavrep.com.au

Affordable housing fees are based on the gross floor area (GFA); for example, if you are building... (Affordable_Housing_Fee_GFA_Calculation_Guide). Lands Administration Office, Lands Department: accountable and non-accountable gross floor area — the relevant floor(s) will be included in the GFA calculation.

### Mississauga.ca Planning and Building - Gross Floor Area / Net lettable areas (lrrpublic.cli.det.nsw.edu.au)

Building area definitions: ducts (examples of building infrastructure) are to be counted as gross area on each floor through which they pass. Gross Lettable Area is floor area measured according to the appropriate PCA Method of Measurement; Doherty Smith & Associates use a laser measuring tool for most PCA surveys.

GROSS FLOOR AREA (GFA) – TOWN PLANNING: mezzanines are considered to be a floor and should be included in the calculation (see: gross area). Gross floor area in real estate is the total floor area inside the building envelope, including the external walls and excluding the roof. What is the difference between net and gross floor area? Commercial office floor area calculation methods differ: the gross floor area is the floor area within the inside perimeter, while the Mercer Island Development Code, for example, excludes a portion of the basement floor area from the gross floor area (see its example of a basement floor area calculation). For a detailed description of the methodology for calculating the gross floor area, roofed lot coverage and impermeable surface coverage of existing or proposed development... Gross floor area also indicates how dense an area is; for example, the density of an area is calculated by...

Pre-tender enquiries on gross floor area calculation were not publicised: 2.3 In November 1999, in response to an enquiry from the Lands D on whether the... The gross internal floor area of a building is the area measured to... (the example school poster, section 3.3, gives a detailed description of the calculation tool). Gross floor area (GFA), for example in property, is the enclosed area of a building within the external walls, taking each floor into account but excluding the roof; this is applicable to Commission buildings in Brussels and reflects the criteria followed to date by the Commission to calculate areas. Gross floor area definition: the gross floor area is the total floor area (measured in square feet or meters); commercial examples include calculating the ground floor area. Guidelines exist regarding gross floor area calculation for site plan projects and density exclusion consideration.

COMMERCIAL BUILDING ENERGY ASSET SCORE — summary building information: Example Building, 2000 A St., Chicago, IL 60601; building type: mixed-use; gross floor area... Gross floor area, for a building in a landscaped or open space area, including, for example... (SC1.2.3 Brisbane City Council administrative definitions). Calculating the gross floor area requires you to break each room up into smaller shapes and add the areas; this final number is the gross area. You can use RoomSketcher to calculate the total area of your floor plan or entire project (example floor plan with area types defined, including gross floor area).

How do I calculate my floor area ratio? Basements are not included in the calculation of F.A.R. Example: floor area 2,100 sq. ft.; floor area ratio: ...

Step calculation example: 1. Base gross floor area (i.e., the floor area available based on a floor area ratio of 18:1) = site area × 18; if the site area is 2,000 m², the base GFA is 36,000 m².

Calculating ground floor area — definition: used to compute the gross floor area. How to: measure all exterior dimensions of that floor, including stairwells. How to calculate total internal area: measure the total area of one floor and then multiply by 2 (example: 7 metres...). If the other floor area is a different size, then calculate the upstairs area separately and add it (example: a semi-detached house — you need to find the gross...).

Read answers from real estate professionals in Singapore: if the land area is 3,000 sq ft and the plot ratio is 1.4, the gross floor area cannot exceed 3,000 × 1.4 = 4,200 sq ft.

28/10/2014: Anyone with knowledge of Brisbane City Council site cover exclusions? This relates to gross floor area with a floor space ratio in Paddington, for example.

You want to create a schedule that displays the total gross area of a building. Solution: to calculate the total gross area of a building, you can create a gross...

### The Definition of Construction Floor Area (Hunker)

Examples of structural floor area are exterior walls... (see also: how to calculate gross area). Where areas are set aside, for example for road widening or drainage purposes, these areas can be included as part of the development site for plot ratio calculation. Definition of gross building area: the sum of areas at all floor levels, including the basement, mezzanine, and penthouses included in the principal... The sum in square feet of the gross horizontal area of enclosed porches and floor area devoted to accessory uses is included in the calculation of gross floor area.

Calculate the floor area ratio: divide the gross floor area by the buildable land area; the result is the Floor Area Ratio (FAR). What is the 'Floor Area Ratio - FAR'? Divide the total, or gross, floor area of the building by the gross area of the lot — for example, an apartment complex... Floor area ratio (FAR) is the ratio of a building's total floor area (gross floor area) to the size of the piece of land upon which it is built. The terms can also... Gross Floor Area (GFA) means the sum of the areas of each storey of a building, structure or part thereof, above or below established grade, excluding storage below...

How do I calculate the floor space ratio of my development? For example: site area 600 m², gross floor area...

### Draft Waverley Local Environmental Plan 2011

Draft Waverley Local Environmental Plan 2011 (Waverley Council, Strategic): the floor space ratio of buildings on a site is the ratio of the gross floor area to the site area. Clause 4.5 sets out how to calculate the floor space ratio and site area, for example in the Caringbah medical area (floor space ratio, gross floor area and...). 2/02/2014: To calculate, you divide the site area... for example if you have a ground floor... Floor Space Ratio (FSR), or Plot Ratio: the FSR of buildings on a site is the ratio of 'gross floor area' to site area; to find the allowable floor area you multiply the site area by the FSR. What is Gross Floor Area (GFA) and how does it relate to GPR? For example, if a developer is determining the potential value of a piece of land with a GPR of 2.8 and...

Exclusions from gross floor area calculations: duplex flats, houses, UFS(1) of premises... For example, ancillary carparks in low-rise low-density sites, such as...

## Gross internal area GIA (Designing Buildings Wiki)

Gross internal area (GIA), for example in planning: the rules of measurement of gross internal floor area are defined in the latest edition of the RICS Code of... New dwelling houses, renovations and extensions: for example, at a height of 7 m, the calculation for the side setback would be... (gross floor area calculation). BOMA is an area calculation standard used predominantly in the United States; the BOMA standard included in AutoCAD Architecture includes gross area. Apartment Design Guide, Part 2 (Department of Planning): the allowable gross floor area should only 'fill' approximately... — for example, in an area with a consistent height...

### CHAPTER 3 Buildings Department / Lands Department / Planning

At-Tamyeel, Guidelines on Property Development in Malaysia: between the gross floor area of... Example: calculate the basic... 3 Definition of Gross Floor Area: 3.1 all covered floor areas of a... ANSI/BOMA Z65.1-1996 is used for measuring floor area and calculating gross leasable area. Net lettable area (NLA) is used to calculate tenancy areas (however, this property has a larger gross floor area). Calculation of gross floor area and non-accountable gross floor area: please refer to PNAP APP-151 for examples of plant rooms that are considered mandatory. Examples: listed below are some examples of net floor area to help illustrate the relationship to gross floor area — the Net Floor Area (NFA) of the private units in a...

Project Cost Planning Guideline 4.1, Gross Floor Area: a multiplier converts area to cost; at delivery, the calculation is derived from the Building Cost Total. Note: the area is based on gross floor area; this building cost calculator is based on single building rates provided by Andrew Nock Valuers. The calculation of gross non-residential versus... for example, floor area associated with carparking spaces, and the gross floor area of all buildings on a lot. Calculating the floor area of a room is useful if you are measuring the room in one unit but, for example, the floor tiles you want are measured in...

### Gross Floor Area (GFA) Calculator (Village of Winnetka) / ZONING COMPLIANCE WORKSHEETS: LOT COVERAGE AND GROSS FLOOR AREA

Gross floor area (zoning definition): the area within the perimeter of the outside walls of a building as measured from the inside surface of the exterior walls, without reduction for...
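Stripped of the repetition, the page keeps restating two formulas: FAR (or FSR/plot ratio) = gross floor area ÷ lot (site) area, and allowable GFA = site area × ratio. A minimal Python sketch using the worked numbers from the text (the 6,000 sq ft lot in the last line is a made-up figure, since the source never states the lot size for the 2,100 sq ft example):

```python
def floor_area_ratio(gross_floor_area, lot_area):
    """FAR/FSR: divide the gross floor area by the buildable land (site) area."""
    return gross_floor_area / lot_area

def allowable_gfa(site_area, ratio):
    """Base/allowable GFA: multiply the site area by the FSR or plot ratio."""
    return site_area * ratio

print(allowable_gfa(2000, 18))       # 2,000 m2 site at 18:1 -> 36,000 m2 base GFA
print(allowable_gfa(3000, 1.4))      # 3,000 sq ft land, plot ratio 1.4 -> 4,200 sq ft
print(floor_area_ratio(2100, 6000))  # 2,100 sq ft GFA on a hypothetical 6,000 sq ft lot -> 0.35
```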
5,230
23,595
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.703125
3
CC-MAIN-2023-06
longest
en
0.879678
https://la.mathworks.com/matlabcentral/profile/authors/17292115
1,660,800,670,000,000,000
text/html
crawl-data/CC-MAIN-2022-33/segments/1659882573163.7/warc/CC-MAIN-20220818033705-20220818063705-00559.warc.gz
325,400,329
23,230
Community Profile

# Devin Allen

Last seen: 16 days ago | Active since 2021

Cody user (They/Them)

Programming Languages: Python, C++, C, MATLAB

#### Content Feed

- Solved: Divisible by n, prime vs. composite divisors — In general, there are two types of divisibility checks; the first involves composite divisors and the second prime divisors, inc... (16 days ago)
- Solved: Big numbers, least significant digits — Given two numbers, x and n, return the last d digits of the number that is calculated by x^n. In all cases, d will be the number... (about 2 months ago)
- Solved: Mysterious digits operation (easy) — What is this digit operation? 0 -> 0, 1 -> 9, 121 -> 9, 44 -> 6, 15 -> 5, 1243 -> 7 ... (about 2 months ago)
- Solved: Differential equations I — Given a function handle |f|, an initial condition |y0| and a final time |tf|, solve numerically the differential equation dy... (about 2 months ago)
- Solved: The Birthday Phenomenon — First off, leap years are not being considered for this. In fact the year that people are born shouldn't be taken into considera... (about 2 months ago)
- Solved: Valid Chess Moves — Using standard algebraic notation ('' for a pawn), given a previous move and a next move, output true if it is a valid move or fal... (about 2 months ago)
- Solved: Chess ELO rating system — The Elo rating system is a method for calculating the relative chess skill levels of players in competitor-versus-competitor gam... (about 2 months ago)
- Solved: Game of Nim — The Game of Nim is a famous, studied 2-player strategy game. <http://en.wikipedia.org/wiki/Nim> There are 3 heaps, and you... (about 2 months ago)
- Solved: Give me Hamming on five, hold the mayo — A Hamming number is a positive number that has no prime factor greater than 5. Given a number X, determine how many Hamming num... (about 2 months ago)
- Solved: Find Pseudo-Cyclic Number — A cyclic number is an integer in which cyclic permutations of the digits are successive multiples of the number https://en.wikip... (about 2 months ago)
- Solved: Dice face matrix! — This is a dice simulator, but instead of making a random die number, you will receive a "pre-rolled" number in and spit out a mat... (about 2 months ago)
- Solved: Edges of a n-dimensional Hypercube — Return the number of edges on an n-dimensional hypercube (<http://en.wikipedia.org/wiki/Hypercube>, with an integer n ≥ 0). ... (about 2 months ago)
- Solved: Perimeter — Given a sequence of points forming a closed path (first and last points are coincident), return the perimeter value. For example... (about 2 months ago)
- Solved: Check to see if a Sudoku Puzzle is Solved — Description: Your task, should you choose to accept it, is to make a function that checks to see if a 9x9 matrix of integer... (2 months ago)
- Solved: Basic commands - amount of inputs — Make a function which will return the amount of given inputs. Example: amountinput(1,2,4,3,10) -> 5, because we gave the functio... (2 months ago)
- Solved: function to compute root mean square of first nn positive odd integers — Write a function called odd_rms that returns orms, which is the square root of the mean of the squares of the first nn positive ... (2 months ago)
- Solved: Right and wrong — Given a vector of lengths [a b c], determine whether a triangle with those side lengths is a right triangle: <http://en.wikipe... (2 months ago)
- Solved: Are all the three given point in the same line? — In this problem the input is the coordinates of three points in an XY plane, P1(X1,Y1), P2(X2,Y2), P3(X3,Y3); how can... (2 months ago)
- Solved: Find the product of a Vector — How would you find the product of the vector [1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 5.5 6.0] times 2? x = [1 : 0.5 : 6]; y ... (2 months ago)
- Solved: length of a vector — Find twice the length of a given vector. (2 months ago)
- Solved: 03 - Matrix Variables 1 — Make the following variable: <<http://samle.dk/STTBDP/Assignment1_3a.png>> A 9x9 matrix full of 2's (Hint: use *ones* o... (2 months ago)
- Solved: Temperature conversion — Convert temperature in degrees Celsius (C) to temperature in degrees Kelvin (K). Assume your answer is rounded to the nearest Ke... (2 months ago)
- Solved: Count ones — Write a program to count the number of ones (1s) in an integer variable input. For example: input x=2200112231, output y=3. I... (2 months ago)
- Solved: Love triangles — Given a vector of lengths [a b c], determine whether a triangle with non-zero area (in two-dimensional Euclidean space, smarty!)... (2 months ago)
- Solved: Omit columns averages from a matrix — Omit column averages from a matrix. For example: A = 16 2 3 13; 5 11 10 8; 9 7 ... (2 months ago)
- Solved: 2 b | ~ 2 b — Given a string input, output true if there are 2 b's in it, false otherwise. Examples: 'Macbeth' -> false, 'Publius Cor... (2 months ago)
- Solved: Volume of a box — Given a box with a length a, width b, and height c, solve for the volume of the box. (2 months ago)
- Solved: Count photos — Given n people, everyone must have pictures taken with everyone; each photo includes only two persons. Please count the total nu... (2 months ago)
- Solved: Find the logic — There exists one logic between input and output. Find it (easy math). Example 1: x=13 then y=339; Example 2: x=26... (2 months ago)
- Solved: find the maximum element of the matrix — e.g. x = [1 2; 3 4], y = 4 (2 months ago)
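As an illustration of one feed item above ('function to compute root mean square of first nn positive odd integers'), here is the same logic as a Python sketch (Cody itself expects MATLAB; this is only a translation of the idea):

```python
import math

def odd_rms(n):
    """Root mean square of the first n positive odd integers: 1, 3, ..., 2n-1."""
    squares = [(2 * k - 1) ** 2 for k in range(1, n + 1)]
    return math.sqrt(sum(squares) / n)

print(odd_rms(3))  # sqrt((1 + 9 + 25) / 3) = sqrt(35/3) ≈ 3.4157
```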
1,457
5,336
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.875
3
CC-MAIN-2022-33
latest
en
0.790605
http://docplayer.net/10953369-Lecture-9-application-of-cryptography.html
1,537,811,088,000,000,000
text/html
crawl-data/CC-MAIN-2018-39/segments/1537267160620.68/warc/CC-MAIN-20180924165426-20180924185826-00286.warc.gz
68,506,876
27,522
# Lecture 9: Application of Cryptography

## Transcription

Lecture topics
- Cryptography basics
- Using SSL to secure communication links in J2EE programs
- Programmatic use of cryptography in Java

Cryptography basics
- Encryption: transformation of data into a form that is almost impossible to read without the appropriate knowledge (a key). Decryption is the inverse transformation.
- Authentication: reliable determination of a specific fact. E.g. digital signatures authenticate authorship; digital timestamps authenticate creation time.

Perfect cryptography
- Called the one-time pad: create a key for each document anew, completely at random; the length of the key == the length of the plaintext. Apply the key to the plaintext (e.g. XOR) to obtain ciphertext. An attacker cannot deduce the plaintext from the ciphertext, since the key is random.
- Usually impractical: the key is long, and both parties need the key.

Private key cryptography
- Also called secret key or symmetric cryptography: a single key is used to both encrypt and decrypt messages, so the key has to be known to both the sender and receiver.
- Block ciphers: given a secret key, transform a fixed-length block of plaintext (64 or 128 bits) into ciphertext of the same length.
- Stream ciphers: generate a keystream and apply it to the plaintext.
- Challenge: have the sender and receiver agree on a key securely (the key management problem).

Public key cryptography
- Diffie-Hellman invention (1976). Every party (corporation, person, program, etc.) gets two keys: a public key and a private key. The public key is published (it has to be associated with the party in a trusted manner); the private key is kept secret.
- The two keys in a pair are linked mathematically: plaintext encrypted with the public key can only be decrypted with the corresponding private key. Because of this relationship, it is possible in principle to derive the private key from the public key; public key cryptosystems are designed in a way that makes the necessary computations prohibitively expensive (e.g. involving factoring large numbers).
- Can be used for digital signatures (authentication): to sign a message, create a digital signature using the message and the private key. If a computation involving the public key succeeds, the signature has been verified.

Popular encryption algorithms
- Private key: DES (Data Encryption Standard), an ANSI standard designed by IBM, NSA, and NIST; 64-bit block size, 56-bit key; currently triple-DES is the standard. AES (Advanced Encryption Standard): algorithm selected Oct 2001, standard published Nov 2002; replaces DES, intended lifespan … years; supports key sizes of 128, 192, and 256 bits.
- Public key: the RSA (Rivest, Shamir, and Adleman) cryptosystem, based on the assumption that factoring large integers is computationally hard; DSA (Digital Signature Algorithm), which can be used only for digital signatures, not encryption, and is based on the discrete logarithm problem.

Elliptic curve cryptography
- Public key cryptosystems proposed by Miller and Koblitz in the mid-80s, based on operations over elliptic curves: given two points G and Y on an elliptic curve such that Y = kG (that is, Y is G added to itself k times), find the integer k.
This problem is commonly referred to as the elliptic curve discrete logarithm problem.

Hash functions
- A hash function is a transformation that takes an input and returns a fixed-size string, called the hash value. The hash value is often called a message digest. Useful for integrity checks and digital signatures.
- Basic requirements for a hash function: input can be of any length; output has a fixed length; H(x) is relatively easy to compute for any x; H(x) is one way (H⁻¹ is hard to compute); H(x) is collision free (computationally infeasible to find two strings x and y such that H(x) = H(y)).

Popular hashing algorithms
- MD2, MD4, MD5 (Message Digest): developed by Rivest, intended for use in digital signatures; produce a 128-bit digest for an arbitrary-length message.
- SHA and SHA-1 (Secure Hash Algorithm): developed by NIST, design similar to MD4; produce a 160-bit digest for a message of length no more than 2^64 bits.

Public key certificates
- A certificate is a digitally signed statement from a trusted entity (Certification Authority), saying what the public key of another entity is. Sequence of steps involved: an entity generates a key pair; the entity sends the public key to a CA (usually in …); the CA generates the certificate as a binding between an X.500 hierarchical name identity and the public key; the CA signs the certificate and the public key with its own private key.

SSL handshake protocol (for server authentication; client authentication is optional)
- A client requests the server for its certificate. The server sends its certificate and cipher preferences. The client generates a master key and encrypts it with the public key of the server; sends to the server. The server decrypts the message and uses the master key to encrypt a message; sends to the client.

What's in a certificate?
- The certificate issuer's name; the entity for whom the certificate is being issued (aka the subject); the public key of the subject; some time stamps.

So, how is a secure communication established? (Alice wants to authenticate Bob; Bob has a pair of keys)
1. A -> are you Bob? -> B
2. B -> {are you Bob?}bobs-private-key -> A
3. A decrypts "are you Bob?" using bobs-public-key

Insufficient: not a good idea to send around things signed with a private key — can be used for impersonation. Carl can get Bob to digitally sign something:
1. C -> some data -> B
2. B -> {some data}bobs-private-key -> C
3. C -> {some data}bobs-private-key -> A

Can use digests (hashes): Bob should create a digest from Alice's message. Someone who tries to impersonate Bob cannot get the original message from the digest.

So, how is a secure communication established?
1. A -> are you Bob? -> B
2. B -> Alice, this is Bob -> A
3. B -> {digest[Alice, this is Bob]}bobs-private-key -> A
4. A decrypts the digest using bobs-public-key
5. A applies the hash function to "Alice, this is Bob" and checks that the same digest is obtained

Insufficient for handing out public keys:
1. A -> are you Bob? -> B
2. B -> Alice, this is Bob, bobs-public-key -> A
3. A -> prove it -> B
4. B -> {digest[Alice, this is Bob]}bobs-private-key -> A

Anybody can be Bob; all they need is to generate a pair of keys. Need to use certificates.

So, how is a secure communication established?
1. A -> are you Bob? -> B
2. B -> Alice, this is Bob, bobs-certificate -> A
3. A -> prove it -> B
4. B -> {digest[Alice, this is Bob]}bobs-private-key -> A

May be insufficient for sending secrets:
1. A -> are you Bob? -> B
2. B -> Alice, this is Bob, bobs-certificate -> A
3. A -> prove it -> B
4. B -> {digest[Alice, this is Bob]}bobs-private-key -> A
5. A -> ok, here's a secret {secret}bobs-public-key -> B
6. B -> {some message}secret -> A

May be susceptible to a parrot attack, where Carl sits between A and B and garbles messages encrypted by the secret key, hoping to get one in a right format.
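The digest[...] steps in these exchanges are easy to try out. Here is a short Python sketch of the digest properties the protocol relies on (the lecture's programming context is Java, so this is a concept illustration only):

```python
import hashlib

msg = b"Alice, this is Bob"
digest = hashlib.sha1(msg).hexdigest()   # 160-bit (20-byte) SHA-1 digest
print(digest)

# Any change to the message changes the digest completely,
# and the original message cannot be recovered from the digest.
print(hashlib.sha1(b"Alice, this is Bob!").hexdigest())
print(len(hashlib.md5(msg).digest()))    # MD5 digest is 16 bytes
```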
> <attribute name="clientsocketfactory">org.jboss.security.ssl.clientsocketfactory</attribute> <attribute name="serversocketfactory">org.jboss.security.ssl.domainserversocketfactory</attribute> <attribute name="securitydomain">java:/jaas/rmi+ssl</attribute> </mbean> 10 Pitfalls of using cryptography in your applications Know when to use it Know when not to use it Know trade-offs involved Know infrastructure requirements Do not invent your own cryptographic algorithms Avoid coding existing cryptographic algorithms as much as possible Good crypto libraries are available Coding cryptography well is difficult Java Cryptography Architecture (JCA) Introduced in JDK 1.1, significantly extended since Designed with flexibility in mind Algorithm independence and extensibility Often, digital signatures from different algorithms can be used interdependently Engine classes: MessageDigest, Signature, KeyFactory, KeyPairGenerator, define cryptographic services in an abstract way (no implementation) Implementation independence and interoperability E.g., different implementations of an algorithm can deal with each other s keys Uses the concept of Cryptographic Service Provider A set of packages that implement cryptographic services (digital signature and message digest algorithms, key conversion, digital certificates, ) SUN is the default provider 11 Example: MessageDigest engine Provides hashing services Can request specific algorithms (SHA-1 or MD5) from the engine Factory method MessageDigest.getInstance(algName); Optionally, can ask for a specific provider MessageDigest.getInstance(algName, provider); The digest method performs hashing Byte array of arbitrary length is input Byte array (20 bytes for SHA-1, 16 bytes for MD5) is output Creating digital signatures Engine class Signature Can request the algorithm and/or provider to use, e.g. RSA with MD5 Factory methods create Signature objects Signature objects are modal (always in one of several possible states); initsign(publickey) SIGN update, sign, initsign UNINITIALIZED initverify initsign initverify(privatekey) VERIFY update, verify, initverify 12 Configuring providers Multiple cryptographic providers can be installed and used at the same time A provider may supply only some crypto services Providers can be given the order of preference E.g. for DES implementation, use the IBM provider; if that provider does not provide an implementation, use SUN If no provider has an implementation, NoSuchAlgorithmException is thrown Installing providers Usually distributed as JAR files Let the JVM know about providers and order of preference by modifying configuration file {\$JAVAHOME}/lib/security/java.security E.g. security.provider.1=com.company.crypto.companyprovider security.provider.5=com.anothercompany.pack.prov In-class presentation of bookauction designs 15 minutes per group As many people as you want can do the presentation PowerPoint slides preferred Don t have to describe everything Concentrate on the changes to my design What are the entity beans? Do you have session beans? What login procedures did you design? Give a URL where remote and home interfaces for your EJBs will be published ### Network Security. Computer Networking Lecture 08. March 19, 2012. HKU SPACE Community College. 
HKU SPACE CC CN Lecture 08 1/23 Network Security Computer Networking Lecture 08 HKU SPACE Community College March 19, 2012 HKU SPACE CC CN Lecture 08 1/23 Outline Introduction Cryptography Algorithms Secret Key Algorithm Message Digest ### CRYPTOGRAPHY IN NETWORK SECURITY ELE548 Research Essays CRYPTOGRAPHY IN NETWORK SECURITY AUTHOR: SHENGLI LI INSTRUCTOR: DR. JIEN-CHUNG LO Date: March 5, 1999 Computer network brings lots of great benefits and convenience to us. We can ### Security. Contents. S-72.3240 Wireless Personal, Local, Metropolitan, and Wide Area Networks 1 Contents Security requirements Public key cryptography Key agreement/transport schemes Man-in-the-middle attack vulnerability Encryption. digital signature, hash, certification Complete security solutions ### CIS 6930 Emerging Topics in Network Security. Topic 2. Network Security Primitives CIS 6930 Emerging Topics in Network Security Topic 2. Network Security Primitives 1 Outline Absolute basics Encryption/Decryption; Digital signatures; D-H key exchange; Hash functions; Application of hash ### Lukasz Pater CMMS Administrator and Developer Lukasz Pater CMMS Administrator and Developer EDMS 1373428 Agenda Introduction Why do we need asymmetric ciphers? One-way functions RSA Cipher Message Integrity Examples Secure Socket Layer Single Sign ### Network Security. Abusayeed Saifullah. CS 5600 Computer Networks. These slides are adapted from Kurose and Ross 8-1 Network Security Abusayeed Saifullah CS 5600 Computer Networks These slides are adapted from Kurose and Ross 8-1 Public Key Cryptography symmetric key crypto v requires sender, receiver know shared secret ### SECURITY IN NETWORKS SECURITY IN NETWORKS GOALS Understand principles of network security: Cryptography and its many uses beyond confidentiality Authentication Message integrity Security in practice: Security in application, ### Overview of CSS SSL. SSL Cryptography Overview CHAPTER CHAPTER 1 Secure Sockets Layer (SSL) is an application-level protocol that provides encryption technology for the Internet, ensuring secure transactions such as the transmission of credit card numbers ### Overview of Cryptographic Tools for Data Security. Murat Kantarcioglu UT DALLAS Erik Jonsson School of Engineering & Computer Science Overview of Cryptographic Tools for Data Security Murat Kantarcioglu Pag. 1 Purdue University Cryptographic Primitives We will discuss the ### Computer Security: Principles and Practice Computer Security: Principles and Practice Chapter 20 Public-Key Cryptography and Message Authentication First Edition by William Stallings and Lawrie Brown Lecture slides by Lawrie Brown Public-Key Cryptography ### Final Exam. IT 4823 Information Security Administration. Rescheduling Final Exams. Kerberos. Idea. Ticket IT 4823 Information Security Administration Public Key Encryption Revisited April 5 Notice: This session is being recorded. Lecture slides prepared by Dr Lawrie Brown for Computer Security: Principles ### An Introduction to Cryptography as Applied to the Smart Grid An Introduction to Cryptography as Applied to the Smart Grid Jacques Benoit, Cooper Power Systems Western Power Delivery Automation Conference Spokane, Washington March 2011 Agenda > Introduction > Symmetric ### Network Security (2) CPSC 441 Department of Computer Science University of Calgary Network Security (2) CPSC 441 Department of Computer Science University of Calgary 1 Friends and enemies: Alice, Bob, Trudy well-known in network security world Bob, Alice (lovers!) 
want to communicate ### Chapter 7: Network security Chapter 7: Network security Foundations: what is security? cryptography authentication message integrity key distribution and certification Security in practice: application layer: secure e-mail transport ### Network Security. Gaurav Naik Gus Anderson. College of Engineering. Drexel University, Philadelphia, PA. Drexel University. College of Engineering Network Security Gaurav Naik Gus Anderson, Philadelphia, PA Lectures on Network Security Feb 12 (Today!): Public Key Crypto, Hash Functions, Digital Signatures, and the Public Key Infrastructure Feb 14: ### CSCE 465 Computer & Network Security CSCE 465 Computer & Network Security Instructor: Dr. Guofei Gu http://courses.cse.tamu.edu/guofei/csce465/ Public Key Cryptogrophy 1 Roadmap Introduction RSA Diffie-Hellman Key Exchange Public key and ### Overview. SSL Cryptography Overview CHAPTER 1 CHAPTER 1 Note The information in this chapter applies to both the ACE module and the ACE appliance unless otherwise noted. The features in this chapter apply to IPv4 and IPv6 unless otherwise noted. Secure ### Chapter 11 Security+ Guide to Network Security Fundamentals, Third Edition Basic Cryptography Chapter 11 Security+ Guide to Network Security Fundamentals, Third Edition Basic Cryptography What Is Steganography? Steganography Process of hiding the existence of the data within another file Example: ### Cryptosystems. Bob wants to send a message M to Alice. Symmetric ciphers: Bob and Alice both share a secret key, K. Cryptosystems Bob wants to send a message M to Alice. Symmetric ciphers: Bob and Alice both share a secret key, K. C= E(M, K), Bob sends C Alice receives C, M=D(C,K) Use the same key to decrypt. Public ### 1720 - Forward Secrecy: How to Secure SSL from Attacks by Government Agencies 1720 - Forward Secrecy: How to Secure SSL from Attacks by Government Agencies Dave Corbett Technical Product Manager Implementing Forward Secrecy 1 Agenda Part 1: Introduction Why is Forward Secrecy important? ### Overview of Public-Key Cryptography CS 361S Overview of Public-Key Cryptography Vitaly Shmatikov slide 1 Reading Assignment Kaufman 6.1-6 slide 2 Public-Key Cryptography public key public key? private key Alice Bob Given: Everybody knows ### Secure Sockets Layer (SSL ) / Transport Layer Security (TLS) Network Security Products S31213 Secure Sockets Layer (SSL ) / Transport Layer Security (TLS) Network Security Products S31213 UNCLASSIFIED Example http ://www. greatstuf f. com Wants credit card number ^ Look at lock on browser Use https ### Lecture 6 - Cryptography Lecture 6 - Cryptography CSE497b - Spring 2007 Introduction Computer and Network Security Professor Jaeger www.cse.psu.edu/~tjaeger/cse497b-s07 Question 2 Setup: Assume you and I don t know anything about ### Network Security. Security Attacks. Normal flow: Interruption: 孫 宏 民 hmsun@cs.nthu.edu.tw Phone: 03-5742968 國 立 清 華 大 學 資 訊 工 程 系 資 訊 安 全 實 驗 室 Network Security 孫 宏 民 hmsun@cs.nthu.edu.tw Phone: 03-5742968 國 立 清 華 大 學 資 訊 工 程 系 資 訊 安 全 實 驗 室 Security Attacks Normal flow: sender receiver Interruption: Information source Information destination ### Public Key Cryptography Overview Ch.20 Public-Key Cryptography and Message Authentication I will talk about it later in this class Final: Wen (5/13) 1630-1830 HOLM 248» give you a sample exam» Mostly similar to homeworks» no electronic ### Overview. 
SSL Cryptography Overview CHAPTER 1 CHAPTER 1 Secure Sockets Layer (SSL) is an application-layer protocol that provides encryption technology for the Internet. SSL ensures the secure transmission of data between a client and a server through ### Network Security Technology Network Management COMPUTER NETWORKS Network Security Technology Network Management Source Encryption E(K,P) Decryption D(K,C) Destination The author of these slides is Dr. Mark Pullen of George Mason University. Permission ### Cryptography & Digital Signatures Cryptography & Digital Signatures CS 594 Special Topics/Kent Law School: Computer and Network Privacy and Security: Ethical, Legal, and Technical Consideration Prof. Sloan s Slides, 2007, 2008 Robert H. ### Network Security [2] Plain text Encryption algorithm Public and private key pair Cipher text Decryption algorithm. See next slide Network Security [2] Public Key Encryption Also used in message authentication & key distribution Based on mathematical algorithms, not only on operations over bit patterns (as conventional) => much overhead ### USING ENCRYPTION TO PROTECT SENSITIVE INFORMATION Commonwealth Office of Technology Security Month Seminars October 29, 2013 USING ENCRYPTION TO PROTECT SENSITIVE INFORMATION Commonwealth Office of Technology Security Month Seminars Alternate Title? Boy, am I surprised. The Entrust guy who has mentioned PKI during every Security ### CS 758: Cryptography / Network Security CS 758: Cryptography / Network Security offered in the Fall Semester, 2003, by Doug Stinson my office: DC 3122 my email address: dstinson@uwaterloo.ca my web page: http://cacr.math.uwaterloo.ca/~dstinson/index.html ### IT Networks & Security CERT Luncheon Series: Cryptography IT Networks & Security CERT Luncheon Series: Cryptography Presented by Addam Schroll, IT Security & Privacy Analyst 1 Outline History Terms & Definitions Symmetric and Asymmetric Algorithms Hashing PKI ### Secure Socket Layer. Introduction Overview of SSL What SSL is Useful For Secure Socket Layer Secure Socket Layer Introduction Overview of SSL What SSL is Useful For Introduction Secure Socket Layer (SSL) Industry-standard method for protecting web communications. - Data encryption ### Computer Networks 1 (Mạng Máy Tính 1) Lectured by: Dr. Phạm Trần Vũ MEng. Nguyễn CaoĐạt Computer Networks 1 (Mạng Máy Tính 1) Lectured by: Dr. Phạm Trần Vũ MEng. Nguyễn CaoĐạt 1 Lecture 11: Network Security Reference: Chapter 8 - Computer Networks, Andrew S. Tanenbaum, 4th Edition, Prentice ### What is network security? Network security Network Security Srinidhi Varadarajan Foundations: what is security? cryptography authentication message integrity key distribution and certification Security in practice: application ### Chapter 8 Network Security. Slides adapted from the book and Tomas Olovsson Chapter 8 Network Security Slides adapted from the book and Tomas Olovsson Roadmap 8.1 What is network security? 
8.2 Principles of cryptography 8.3 Message integrity Security protocols and measures: Securing ### Public Key (asymmetric) Cryptography Public-Key Cryptography UNIVERSITA DEGLI STUDI DI PARMA Dipartimento di Ingegneria dell Informazione Public Key (asymmetric) Cryptography Luca Veltri (mail.to: luca.veltri@unipr.it) Course of Network Security, ### Authentication requirement Authentication function MAC Hash function Security of UNIT 3 AUTHENTICATION Authentication requirement Authentication function MAC Hash function Security of hash function and MAC SHA HMAC CMAC Digital signature and authentication protocols DSS Slides Courtesy ### Dr. Jinyuan (Stella) Sun Dept. of Electrical Engineering and Computer Science University of Tennessee Fall 2010 CS 494/594 Computer and Network Security Dr. Jinyuan (Stella) Sun Dept. of Electrical Engineering and Computer Science University of Tennessee Fall 2010 1 Introduction to Cryptography What is cryptography? ### CSE/EE 461 Lecture 23 CSE/EE 461 Lecture 23 Network Security David Wetherall djw@cs.washington.edu Last Time Naming Application Presentation How do we name hosts etc.? Session Transport Network Domain Name System (DNS) Data ### Outline. Computer Science 418. Digital Signatures: Observations. Digital Signatures: Definition. Definition 1 (Digital signature) Digital Signatures Outline Computer Science 418 Digital Signatures Mike Jacobson Department of Computer Science University of Calgary Week 12 1 Digital Signatures 2 Signatures via Public Key Cryptosystems 3 Provable 4 Mike ### CIS433/533 - Computer and Network Security Cryptography CIS433/533 - Computer and Network Security Cryptography Professor Kevin Butler Winter 2011 Computer and Information Science A historical moment Mary Queen of Scots is being held by Queen Elizabeth and ### Chapter 8. Network Security Chapter 8 Network Security Cryptography Introduction to Cryptography Substitution Ciphers Transposition Ciphers One-Time Pads Two Fundamental Cryptographic Principles Need for Security Some people who ### Lecture 9 - Network Security TDTS41-2006 (ht1) Lecture 9 - Network Security TDTS41-2006 (ht1) Prof. Dr. Christoph Schuba Linköpings University/IDA Schuba@IDA.LiU.SE Reading: Office hours: [Hal05] 10.1-10.2.3; 10.2.5-10.7.1; 10.8.1 9-10am on Oct. 4+5, ### Network Security. HIT Shimrit Tzur-David Network Security HIT Shimrit Tzur-David 1 Goals: 2 Network Security Understand principles of network security: cryptography and its many uses beyond confidentiality authentication message integrity key ### Chapter 8 Security. IC322 Fall 2014. Computer Networking: A Top Down Approach. 6 th edition Jim Kurose, Keith Ross Addison-Wesley March 2012 Chapter 8 Security IC322 Fall 2014 Computer Networking: A Top Down Approach 6 th edition Jim Kurose, Keith Ross Addison-Wesley March 2012 All material copyright 1996-2012 J.F Kurose and K.W. Ross, All ### Cryptography. some history. modern secret key cryptography. public key cryptography. cryptography in practice Cryptography some history Caesar cipher, rot13 substitution ciphers, etc. 
Enigma (Turing) modern secret key cryptography DES, AES public key cryptography RSA, digital signatures cryptography in practice ### Cryptographic hash functions and MACs Solved Exercises for Cryptographic Hash Functions and MACs Cryptographic hash functions and MACs Solved Exercises for Cryptographic Hash Functions and MACs Enes Pasalic University of Primorska Koper, 2014 Contents 1 Preface 3 2 Problems 4 2 1 Preface This is a ### Accellion Secure File Transfer Cryptographic Module Security Policy Document Version 1.0. Accellion, Inc. Accellion Secure File Transfer Cryptographic Module Security Policy Document Version 1.0 Accellion, Inc. December 24, 2009 Copyright Accellion, Inc. 2009. May be reproduced only in its original entirety ### Security: Focus of Control. Authentication Security: Focus of Control Three approaches for protection against security threats a) Protection against invalid operations b) Protection against unauthorized invocations c) Protection against unauthorized ### Using etoken for SSL Web Authentication. SSL V3.0 Overview Using etoken for SSL Web Authentication Lesson 12 April 2004 etoken Certification Course SSL V3.0 Overview Secure Sockets Layer protocol, version 3.0 Provides communication privacy over the internet. Prevents ### 7! Cryptographic Techniques! A Brief Introduction 7! Cryptographic Techniques! A Brief Introduction 7.1! Introduction to Cryptography! 7.2! Symmetric Encryption! 7.3! Asymmetric (Public-Key) Encryption! 7.4! Digital Signatures! 7.5! Public Key Infrastructures ### Introduction to Cryptography Introduction to Cryptography Part 3: real world applications Jean-Sébastien Coron January 2007 Public-key encryption BOB ALICE Insecure M E C C D channel M Alice s public-key Alice s private-key Authentication ### Hash Functions. Integrity checks Hash Functions EJ Jung slide 1 Integrity checks Integrity vs. Confidentiality! Integrity: attacker cannot tamper with message! Encryption may not guarantee integrity! Intuition: attacker may able to modify ### , ) I Transport Layer Security Secure Sockets Layer (SSL, ) I Transport Layer Security _ + (TLS) Network Security Products S31213 UNCLASSIFIED Location of SSL -L Protocols TCP Ethernet IP SSL Header Encrypted SSL data= HTTP " Independent ### Chapter 17. Transport-Level Security Chapter 17 Transport-Level Security Web Security Considerations The World Wide Web is fundamentally a client/server application running over the Internet and TCP/IP intranets The following characteristics 1 Introduction to Cryptography and Data Security 1 1.1 Overview of Cryptology (and This Book) 2 1.2 Symmetric Cryptography 4 1.2.1 Basics 4 1.2.2 Simple Symmetric Encryption: The Substitution Cipher... ### SECURE SOCKET LAYER PROTOCOL SIMULATION IN JAVA. A Research Project NAGENDRA KARRI SECURE SOCKET LAYER PROTOCOL SIMULATION IN JAVA A Research Project By NAGENDRA KARRI Submitted to the College of Graduate Studies Oregon State University in partial fulfillment of the requirements for ### Computer System Management: Hosting Servers, Miscellaneous Computer System Management: Hosting Servers, Miscellaneous Amarjeet Singh October 22, 2012 Partly adopted from Computer System Management Slides by Navpreet Singh Logistics Any doubts on project/hypo explanation ### CSCI-E46: Applied Network Security. 
Class 1: Introduction Cryptography Primer 1/26/16 CSCI-E46: APPLIED NETWORK SECURITY, SPRING 2016 1 CSCI-E46: Applied Network Security Class 1: Introduction Cryptography Primer 1/26/16 CSCI-E46: APPLIED NETWORK SECURITY, SPRING 2016 1 Welcome to CSCI-E46 Classroom & Schedule 53 Church Street L01 Wednesdays, ### DRAFT Standard Statement Encryption DRAFT Standard Statement Encryption Title: Encryption Standard Document Number: SS-70-006 Effective Date: x/x/2010 Published by: Department of Information Systems 1. Purpose Sensitive information held ### Safeguarding Data Using Encryption. Matthew Scholl & Andrew Regenscheid Computer Security Division, ITL, NIST Safeguarding Data Using Encryption Matthew Scholl & Andrew Regenscheid Computer Security Division, ITL, NIST What is Cryptography? Cryptography: The discipline that embodies principles, means, and methods ### Announcement. Final exam: Wed, June 9, 9:30-11:18 Scope: materials after RSA (but you need to know RSA) Open books, open notes. Calculators allowed. Announcement Final exam: Wed, June 9, 9:30-11:18 Scope: materials after RSA (but you need to know RSA) Open books, open notes. Calculators allowed. 1 We have learned Symmetric encryption: DES, 3DES, AES, ### Transport Level Security Transport Level Security Overview Raj Jain Washington University in Saint Louis Saint Louis, MO 63130 Jain@cse.wustl.edu Audio/Video recordings of this lecture are available at: http://www.cse.wustl.edu/~jain/cse571-14/ ### Secure Socket Layer (SSL) and Transport Layer Security (TLS) Secure Socket Layer (SSL) and Transport Layer Security (TLS) Raj Jain Washington University in Saint Louis Saint Louis, MO 63130 Jain@cse.wustl.edu Audio/Video recordings of this lecture are available ### Chapter 7 Transport-Level Security Cryptography and Network Security Chapter 7 Transport-Level Security Lectured by Nguyễn Đức Thái Outline Web Security Issues Security Socket Layer (SSL) Transport Layer Security (TLS) HTTPS Secure Shell ### 2. Cryptography 2.4 Digital Signatures DI-FCT-UNL Computer and Network Systems Security Segurança de Sistemas e Redes de Computadores 2010-2011 2. Cryptography 2.4 Digital Signatures 2010, Henrique J. Domingos, DI/FCT/UNL 2.4 Digital Signatures ### Chapter 10. Network Security Chapter 10 Network Security 10.1. Chapter 10: Outline 10.1 INTRODUCTION 10.2 CONFIDENTIALITY 10.3 OTHER ASPECTS OF SECURITY 10.4 INTERNET SECURITY 10.5 FIREWALLS 10.2 Chapter 10: Objective We introduce ### Network Security Essentials Chapter 5 Network Security Essentials Chapter 5 Fourth Edition by William Stallings Lecture slides by Lawrie Brown Chapter 5 Transport-Level Security Use your mentality Wake up to reality From the song, "I've Got ### Common security requirements Basic security tools. Example. Secret-key cryptography Public-key cryptography. 
Online shopping with Amazon 1 Common security requirements Basic security tools Secret-key cryptography Public-key cryptography Example Online shopping with Amazon 2 Alice credit card # is xxxx Internet What could the hacker possibly ### Cryptography and Network Security Cryptography and Network Security Spring 2012 http://users.abo.fi/ipetre/crypto/ Lecture 9: Authentication protocols, digital signatures Ion Petre Department of IT, Åbo Akademi University 1 Overview of ### HASH CODE BASED SECURITY IN CLOUD COMPUTING ABSTRACT HASH CODE BASED SECURITY IN CLOUD COMPUTING Kaleem Ur Rehman M.Tech student (CSE), College of Engineering, TMU Moradabad (India) The Hash functions describe as a phenomenon of information security ### A New Efficient Digital Signature Scheme Algorithm based on Block cipher IOSR Journal of Computer Engineering (IOSRJCE) ISSN: 2278-0661, ISBN: 2278-8727Volume 7, Issue 1 (Nov. - Dec. 2012), PP 47-52 A New Efficient Digital Signature Scheme Algorithm based on Block cipher 1 ### Some solutions commonly used in order to guarantee a certain level of safety and security are: 1. SSL UNICAPT32 1.1 Introduction The following introduction contains large excerpts from the «TCP/IP Tutorial and Technical Overview IBM Redbook. Readers already familiar with SSL may directly go to section ### Properties of Secure Network Communication Properties of Secure Network Communication Secrecy: Only the sender and intended receiver should be able to understand the contents of the transmitted message. Because eavesdroppers may intercept the message, ### Communication Security for Applications Communication Security for Applications Antonio Carzaniga Faculty of Informatics University of Lugano March 10, 2008 c 2008 Antonio Carzaniga 1 Intro to distributed computing: -server computing Transport-layer ### SBClient SSL. Ehab AbuShmais SBClient SSL Ehab AbuShmais Agenda SSL Background U2 SSL Support SBClient SSL 2 What Is SSL SSL (Secure Sockets Layer) Provides a secured channel between two communication endpoints Addresses all three ### CS 348: Computer Networks. - Security; 30 th - 31 st Oct 2012. Instructor: Sridhar Iyer IIT Bombay CS 348: Computer Networks - Security; 30 th - 31 st Oct 2012 Instructor: Sridhar Iyer IIT Bombay Network security Security Plan (RFC 2196) Identify assets Determine threats Perform risk analysis Implement ### 3.2: Transport Layer: SSL/TLS Secure Socket Layer (SSL) Transport Layer Security (TLS) Protocol Chapter 2: Security Techniques Background Chapter 3: Security on Network and Transport Layer Network Layer: IPSec Transport Layer: SSL/TLS Chapter 4: Security on the Application Layer Chapter 5: Security ### How encryption works to provide confidentiality. How hashing works to provide integrity. How digital signatures work to provide authenticity and How encryption works to provide confidentiality. How hashing works to provide integrity. How digital signatures work to provide authenticity and non-repudiation. How to obtain a digital certificate. Installing ### The Misuse of RC4 in Microsoft Word and Excel The Misuse of RC4 in Microsoft Word and Excel Hongjun Wu Institute for Infocomm Research, Singapore hongjun@i2r.a-star.edu.sg Abstract. In this report, we point out a serious security flaw in Microsoft ### CPS 590.5 Computer Security Lecture 9: Introduction to Network Security. 
Xiaowei Yang xwy@cs.duke.edu CPS 590.5 Computer Security Lecture 9: Introduction to Network Security Xiaowei Yang xwy@cs.duke.edu Previous lectures Worm Fast worm design Today Network security Cryptography building blocks Existing ### Connected from everywhere. Cryptelo completely protects your data. Data transmitted to the server. Data sharing (both files and directory structure) Cryptelo Drive Cryptelo Drive is a virtual drive, where your most sensitive data can be stored. Protect documents, contracts, business know-how, or photographs - in short, anything that must be kept safe. ### Network Security. Abusayeed Saifullah. CS 5600 Computer Networks. These slides are adapted from Kurose and Ross 8-1 Network Security Abusayeed Saifullah CS 5600 Computer Networks These slides are adapted from Kurose and Ross 8-1 Goals v understand principles of network security: cryptography and its many uses beyond ### Overview of SSL. Outline. CSC/ECE 574 Computer and Network Security. Reminder: What Layer? Protocols. SSL Architecture OS Appl. CSC/ECE 574 Computer and Network Security Outline I. Overview II. The Record Protocol III. The Handshake and Other Protocols Topic 8.3 /TLS 1 2 Reminder: What Layer? Overview of 3 4 Protocols ### Efficient Framework for Deploying Information in Cloud Virtual Datacenters with Cryptography Algorithms Efficient Framework for Deploying Information in Cloud Virtual Datacenters with Cryptography Algorithms Radhika G #1, K.V.V. Satyanarayana *2, Tejaswi A #3 1,2,3 Dept of CSE, K L University, Vaddeswaram-522502, ### Digital Signatures. Meka N.L.Sneha. Indiana State University. nmeka@sycamores.indstate.edu. October 2015 Digital Signatures Meka N.L.Sneha Indiana State University nmeka@sycamores.indstate.edu October 2015 1 Introduction Digital Signatures are the most trusted way to get documents signed online. A digital ### Programming with cryptography Programming with cryptography Chapter 11: Building Secure Software Lars-Helge Netland larshn@ii.uib.no 10.10.2005 INF329: Utvikling av sikre applikasjoner Overview Intro: The importance of cryptography ### Security and Authentication Primer Security and Authentication Primer Manfred Jantscher and Peter H. Cole Auto-ID Labs White Paper WP-HARDWARE-025 Mr. Manfred Jantscher Visiting Master Student, School of Electrical and Electronics Engineering, ### Secure Key Exchange for Cloud Environment Using Cellular Automata with Triple-DES and Error-Detection Secure Key Exchange for Cloud Environment Using Cellular Automata with Triple-DES and Error-Detection Govinda.K 1, Sathiyamoorthy.E *2, Surbhit Agarwal 3 # SCSE,VIT University Vellore,India 1 kgovinda@vit.ac.in ### Practice Questions. CS161 Computer Security, Fall 2008 Practice Questions CS161 Computer Security, Fall 2008 Name Email address Score % / 100 % Please do not forget to fill up your name, email in the box in the midterm exam you can skip this here. These practice ### Network Security Part II: Standards Network Security Part II: Standards Raj Jain Washington University Saint Louis, MO 63131 Jain@cse.wustl.edu These slides are available on-line at: http://www.cse.wustl.edu/~jain/cse473-05/ 18-1 Overview ### Chapter 8. Cryptography Symmetric-Key Algorithms. 
Digital Signatures Management of Public Keys Communication Security Authentication Protocols Network Security Chapter 8 Cryptography Symmetric-Key Algorithms Public-Key Algorithms Digital Signatures Management of Public Keys Communication Security Authentication Protocols Email Security Web Security ### Message Authentication Codes 2 MAC Message Authentication Codes : and Cryptography Sirindhorn International Institute of Technology Thammasat University Prepared by Steven Gordon on 28 October 2013 css322y13s2l08, Steve/Courses/2013/s2/css322/lectures/mac.tex, ### Network Security CS 5490/6490 Fall 2015 Lecture Notes 8/26/2015 Network Security CS 5490/6490 Fall 2015 Lecture Notes 8/26/2015 Chapter 2: Introduction to Cryptography What is cryptography? It is a process/art of mangling information in such a way so as to make it ### Cryptographic Services Guide Cryptographic Services Guide Contents About Cryptographic Services 5 At a Glance 5 Encryption, Signing and Verifying, and Digital Certificates Can Protect Data from Prying Eyes 5 OS X and ios Provide Encryption ### Common Pitfalls in Cryptography for Software Developers. OWASP AppSec Israel July 2006. The OWASP Foundation http://www.owasp.org/ Common Pitfalls in Cryptography for Software Developers OWASP AppSec Israel July 2006 Shay Zalalichin, CISSP AppSec Division Manager, Comsec Consulting shayz@comsecglobal.com Copyright 2006 - The OWASP ### Encryption, Data Integrity, Digital Certificates, and SSL. Developed by. Jerry Scott. SSL Primer-1-1 Encryption, Data Integrity, Digital Certificates, and SSL Developed by Jerry Scott 2002 SSL Primer-1-1 Ideas Behind Encryption When information is transmitted across intranets or the Internet, others can
9,015
40,271
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.765625
3
CC-MAIN-2018-39
longest
en
0.873218
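The JCA material in the row above maps almost directly onto a few lines of Java. The following is a minimal sketch, not taken from the slides: the message text and the choice of DSA for the key pair are illustrative assumptions, and HmacSHA1 is the standard library construction corresponding to the slides' MAC = digest[message, secret] idea.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.security.Signature;
import javax.crypto.KeyGenerator;
import javax.crypto.Mac;

public class JcaSketch {
    public static void main(String[] args) throws Exception {
        byte[] message = "Alice, this is Bob".getBytes(StandardCharsets.UTF_8);

        // Hashing engine: SHA-1 yields a 20-byte digest, MD5 a 16-byte one
        // (shown because the slides discuss them; neither is fit for new designs)
        byte[] digest = MessageDigest.getInstance("SHA-1").digest(message);

        // Signature engine is modal: initSign(privateKey) -> SIGN state,
        // initVerify(publicKey) -> VERIFY state
        KeyPair pair = KeyPairGenerator.getInstance("DSA").generateKeyPair();
        Signature sig = Signature.getInstance("SHA1withDSA");
        sig.initSign(pair.getPrivate());
        sig.update(message);
        byte[] signature = sig.sign();
        sig.initVerify(pair.getPublic());
        sig.update(message);
        System.out.println("signature valid: " + sig.verify(signature));

        // MAC: HMAC is the standard library form of digest[message, secret]
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(KeyGenerator.getInstance("HmacSHA1").generateKey());
        byte[] tag = mac.doFinal(message);
        System.out.println("digest " + digest.length + " bytes, MAC tag " + tag.length + " bytes");
    }
}
```

Requesting a specific provider, as the slides describe, is the same calls with a second argument, e.g. MessageDigest.getInstance("SHA-1", "SUN").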
http://clay6.com/qa/23674/a-point-charge-q-is-placed-at-the-centre-of-an-imaginary-gaussian-surface-f
1,529,356,299,000,000,000
text/html
crawl-data/CC-MAIN-2018-26/segments/1529267861163.5/warc/CC-MAIN-20180618203134-20180618223134-00355.warc.gz
66,118,053
26,994
# A point charge Q is placed at the centre of an imaginary Gaussian surface. Find the flux through the curved surface. $(A)\;\frac{al}{2 \epsilon_0 \sqrt {\frac{l^2}{4} +R^2}} \\ (B)\;zero \\ (C)\; \frac{Q}{2 \epsilon_0 }\bigg[ 1- \frac{l}{2 \sqrt {R^2+\frac{l^2}{4} }}\bigg] \\ (D)\;\frac{Q}{\epsilon_0 }\bigg[ 1- \frac{l}{2 \sqrt {R^2+\frac{l^2}{4} }}\bigg]$ $\phi =\phi _{curved}+\phi _{circle_1}+\phi _{circle_2}$ => $\phi =\phi _{cs}+2 \phi _{c_1} \big[ \phi _{c_1}=\phi _{c_2}\big]$ => $\phi _{curved}= \phi _{cs}=\phi -2 \phi _{c_1} = \large\frac{Q}{\epsilon_0}$$- 2 \phi _{c_1}$ $\overrightarrow{E} =\large\frac{Q}{4 \pi \epsilon_0} \times \frac{1}{(x^2+l^2/4)}$ $d \phi _{c_1}=\overrightarrow {E} \cdot d \overrightarrow{s} =E \times 2\pi x \, dx \times \cos \theta = \large\frac{Q}{2 \epsilon_0} \times \frac{x \, dx}{(x^2 +l^2/4)} \times \frac{l/2}{\sqrt {x^2 +l^2/4}} = \large\frac{Ql}{4 \epsilon_0} \frac{x \, dx}{(x^2 +l^2/4)^{3/2}}$ => $\phi _{c_1}= \large\frac{Ql}{4 \epsilon_0} \int \limits_0^R \frac{x \, dx}{(x^2 +l^2/4)^{3/2}}$ $\phi_{cs} = \large\frac{Q}{2 \epsilon_0 }\bigg[ 1- \frac{l}{2 \big[R^2+\frac{l^2}{4} \big]^{1/2}}\bigg] = \large \frac{Q}{2 \epsilon_0 }\bigg[ 1- \frac{l}{2 \sqrt {R^2+\frac{l^2}{4} }}\bigg]$ Hence C is the correct answer.
618
1,277
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.9375
4
CC-MAIN-2018-26
latest
en
0.384255
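The solution in the row above jumps from the integral for $\phi_{c_1}$ straight to the closed form. The missing evaluation step is the standard substitution below (my addition; it is consistent with the expression quoted in the solution):

$\int\limits_0^R \frac{x\,dx}{(x^2+l^2/4)^{3/2}} = \bigg[-\frac{1}{\sqrt{x^2+l^2/4}}\bigg]_0^R = \frac{2}{l} - \frac{1}{\sqrt{R^2+l^2/4}}$

so that

$\frac{Ql}{4\epsilon_0}\bigg(\frac{2}{l} - \frac{1}{\sqrt{R^2+l^2/4}}\bigg) = \frac{Q}{2\epsilon_0}\bigg(1 - \frac{l}{2\sqrt{R^2+\frac{l^2}{4}}}\bigg)$.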
http://systry.com/author/stacie/page/4/
1,716,626,165,000,000,000
text/html
crawl-data/CC-MAIN-2024-22/segments/1715971058789.0/warc/CC-MAIN-20240525065824-20240525095824-00714.warc.gz
29,163,327
11,954
# Stacie /Stacie Bender This author has not yet filled in any details. So far Stacie Bender has created 97 blog entries. ## Sine, Cosine, and Tangent Ratios Spring break brought out the creative juices. I spent time reworking guided notes sheets from last year into Vizual Notes this year. I like how they turned out. The Tangent Ratio Sine and Cosine Ratios I love it when I can start the class with a real world problem and help them arrive at a [...] By | 2019-02-07T20:49:25+00:00 March 26th, 2017|Algebra II, Geometry|2 Comments ## Conics! It's conic section time in Pre-Calculus. Learning from last year, I added a section on circles. I polled my students prior to spring break and they didn't recall the equation of a circle in standard form. To dust off the rust and get into completing the square mode, I created some Vizual Notes and a [...] By | 2019-02-07T21:09:16+00:00 March 26th, 2017|PreCalculus|0 Comments ## Similar Polygon Exploration in Geogebra Use Geogebra to create similar polygons, compare the scale factor of a dilation with the side lengths of the image and preimage compare the scale factor with the ratio of the perimeters compare the scale factor with the ratio of the areas https://youtu.be/S2orPH6CqLk A written version of the investigation is below. Similar Polygon Exploration To see [...] By | 2017-11-13T22:01:54+00:00 January 4th, 2017|Geometry|2 Comments ## Crack the Code As a review for the semester 1 final, I created a review centered around deciphering clues. My students are not always good about doing work accurately. Some want to just write down an answer to finish. This several day long activity is designed to promote accuracy. There are several parts.           [...] By | 2017-11-13T22:01:54+00:00 December 7th, 2016|Geometry|0 Comments ## Congruent Triangles Unit Classifying Triangles Again, I'm re-creating the wheel, hoping to make it better. My students responded well to the doodle notes I created for my Transformations Unit. One commented, "I'm really understanding the material now. Honestly, your class was a little boring, but this chapter was much better." It's amazing how turning the tables on students [...] By | 2019-02-07T20:54:39+00:00 November 13th, 2016|Uncategorized|0 Comments ## Sequences, Series, & Probability Based on feedback from my geometry students, I've created some Vizual Notes for my pre-calculus classes as well. Sequences and Series This notes sheet can be used to help explain the the vocabulary involved withe sequences and series - explicit, recursive, summation, factorial, etc. It helps prepare students for Arithmetic and Geometric Sequences. Arithmetic Sequences [...] By | 2019-02-07T20:55:56+00:00 November 13th, 2016|Algebra II, PreCalculus|0 Comments ## Transformations Unit Translations As an informal assessment, I gave my students a Halloween Transformations worksheet to a) keep them off their phones after they finished their chapter tests and b) see how much they remember about or can do regarding transformations with minimal instruction. Most completed the assignment with no issues. Translations Vizual Notes can help students [...] By | 2019-02-07T20:57:38+00:00 October 28th, 2016|Geometry|1 Comment ## A Tale of Three Classes Two weeks ago I wrote a post called Equations from Data. In it, my pre-calculus students first were charged with finding a curve of best fit for global iPod sales data from 2006 to the present. Then they were asked to find an equation for the curve of best fit for global iPhone sales. The goal was to show [...] 
By | 2017-11-13T22:01:55+00:00 October 16th, 2016|edtech|0 Comments ## The Challenges of Using Technology One of my unspoken goals for this year has been to include more explorations in my Geometry classes - specifically Geogebra explorations. This has posed some challenges. The iPad cart I have been checking out has 20 devices and I have had up to 30 students in my classes. Compensation: pair students up. It makes [...] By | 2017-11-13T22:01:55+00:00 October 16th, 2016|edtech, Geometry|0 Comments
1,039
4,082
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.203125
3
CC-MAIN-2024-22
latest
en
0.908017
https://us.metamath.org/mpeuni/rab0.html
1,709,553,411,000,000,000
text/html
crawl-data/CC-MAIN-2024-10/segments/1707947476442.30/warc/CC-MAIN-20240304101406-20240304131406-00611.warc.gz
569,396,074
4,233
Metamath Proof Explorer

Theorem rab0 4341
Description: Any restricted class abstraction restricted to the empty set is empty. (Contributed by NM, 15-Oct-2003.) (Proof shortened by Andrew Salmon, 26-Jun-2011.) (Proof shortened by JJ, 14-Jul-2021.)

Assertion
Ref Expression
rab0 {𝑥 ∈ ∅ ∣ 𝜑} = ∅

Proof of Theorem rab0
Step | Hyp | Ref | Expression
1 | | df-rab 3152 | {𝑥 ∈ ∅ ∣ 𝜑} = {𝑥 ∣ (𝑥 ∈ ∅ ∧ 𝜑)}
2 | | ab0 4337 | ({𝑥 ∣ (𝑥 ∈ ∅ ∧ 𝜑)} = ∅ ↔ ∀𝑥 ¬ (𝑥 ∈ ∅ ∧ 𝜑))
3 | | noel 4300 | ¬ 𝑥 ∈ ∅
4 | 3 | intnanr 488 | ¬ (𝑥 ∈ ∅ ∧ 𝜑)
5 | 2, 4 | mpgbir 1793 | {𝑥 ∣ (𝑥 ∈ ∅ ∧ 𝜑)} = ∅
6 | 1, 5 | eqtri 2849 | {𝑥 ∈ ∅ ∣ 𝜑} = ∅

Syntax hints: ¬ wn 3, ∧ wa 396, = wceq 1530, ∈ wcel 2107, {cab 2804, {crab 3147, ∅c0 4295
This theorem was proved from axioms: ax-mp 5, ax-1 6, ax-2 7, ax-3 8, ax-gen 1789, ax-4 1803, ax-5 1904, ax-6 1963, ax-7 2008, ax-8 2109, ax-9 2117, ax-10 2138, ax-11 2153, ax-12 2169, ax-ext 2798
This theorem depends on definitions: df-bi 208, df-an 397, df-or 844, df-tru 1533, df-ex 1774, df-nf 1778, df-sb 2063, df-clab 2805, df-cleq 2819, df-clel 2898, df-nfc 2968, df-rab 3152, df-dif 3943, df-nul 4296
This theorem is referenced by: rabsnif 4658, fvmptrabfv 6797, supp0 7831, sup00 8922, scott0 9309, psgnfval 18564, pmtrsn 18583, 00lsp 19689, rrgval 19995, uvtx0 27109, vtxdg0e 27189, wwlksn 27548, wspthsn 27559, iswwlksnon 27564, iswspthsnon 27567, clwwlk0on0 27804, satf0 32522, fvmptrab 43376, fvmptrabdm 43377, prprspr2 43531
860
1,674
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.1875
3
CC-MAIN-2024-10
latest
en
0.187355
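Spelled out in conventional set notation, the Metamath proof above is a short two-step argument (a paraphrase I am adding, not Metamath output):

$\{x \in \varnothing \mid \varphi\} = \{x \mid x \in \varnothing \wedge \varphi\}$ by the definition of restricted abstraction (df-rab); since $\neg\, x \in \varnothing$ (noel), also $\neg(x \in \varnothing \wedge \varphi)$ (intnanr), so by ab0 and generalization $\{x \mid x \in \varnothing \wedge \varphi\} = \varnothing$ (mpgbir), and transitivity of equality (eqtri) gives $\{x \in \varnothing \mid \varphi\} = \varnothing$.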
http://www.evi.com/q/how_much_is_14.3_pounds_in_kg
1,408,820,870,000,000,000
text/html
crawl-data/CC-MAIN-2014-35/segments/1408500826343.66/warc/CC-MAIN-20140820021346-00309-ip-10-180-136-8.ec2.internal.warc.gz
293,101,700
13,604
# How much is 14.3 pounds in kg?

• 14.3 pounds is equivalent to 6.49 kilograms (the mass 6.486370891 kilograms)

## Top ways people ask this question:

• how much is 14.3 pounds in kg (89%)
• how many kilograms are in 14.3 pounds (1%)
• 14.3lbs = how many kg (1%)
• convert 14.3 lb in kg (1%)
• 14.3 pounds to kg (1%)
• 14.3 pounds is how many kilograms (1%)
• 14.3 pounds conversion to kg (1%)

## Other ways this question is asked:

• 14.3 lb to kg
• 14.3 lbs to kg
• 14.3lb equal to how much in kilogram
• 14.3lbs into kg
• what is 14.3 lbs in kg
• 14.3 lbs to kg.
• convert 14.3 pounds to kilograms
• how many kilograms are in 14.3 pound
• 14.3 pounds equals how many kilograms
• convert 14.3lb to kg
• how many kilos is 14.3 pounds?
• 14.3 lbs to kilo
• 14.3 lb equal how many kg
• 14.3 lbs. in kilo
• 14.3lb equal to how much in kilogram.
• 14.3lbs equals to how many kgs
• what's 14.3lb in kg
400
1,189
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.53125
3
CC-MAIN-2014-35
longest
en
0.887432
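The two figures in the answer above are just the exact conversion factor at work. A one-liner reproduces them; the factor 0.45359237 kg/lb is the international definition of the pound (my addition, not from the page):

```java
public class LbToKg {
    public static void main(String[] args) {
        final double KG_PER_LB = 0.45359237; // exact by definition
        double lb = 14.3;
        double kg = lb * KG_PER_LB;          // 14.3 * 0.45359237 = 6.486370891
        System.out.printf("%.2f kg (%.9f kg)%n", kg, kg); // 6.49 kg rounded
    }
}
```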
https://www.webqc.org/molecularweightcalculated-190212-66.html
1,566,347,184,000,000,000
text/html
crawl-data/CC-MAIN-2019-35/segments/1566027315695.36/warc/CC-MAIN-20190821001802-20190821023802-00072.warc.gz
1,039,377,267
6,060
#### Chemical Equations Balanced on 02/12/19

Molar mass of SO3 is 80.0632
Molar mass of C3H8 is 44.09562
Molar mass of PH3 is 33.997582
Molar mass of PH3 is 33.997582
Molar mass of PH3 is 33.997582
Molar mass of NaOH is 39,99710928
Molar mass of C2h4o2 is 60.05196
Molar mass of Cu3N2 is 218.6514
Molar mass of H2SO4 is 98.07848
Molar mass of C4h8o8 is 184.10152
Molar mass of NBr5 is 413.5267
Molar mass of b203 is 2194,633
Molar mass of LiC2H3O2 is 65.98502
Molar mass of Mn(OH)2 is 88,952725
Molar mass of Li2O is 29.8814
Molar mass of S8 is 256.52
Molar mass of b2o3 is 69,6202
Molar mass of ClSi(CH3)3 is 108.64206
Molar mass of ClSi(CH3)3 is 108.64206
Molar mass of cr is 51,9961
Molar mass of SO3 is 80.0632
Molar mass of CuSO4 is 159,6086
Molar mass of FeCl2 is 126.751
Molar mass of FeCl2 is 126.751
Molar mass of C17H35CO2H is 284.47724
Molar mass of C8H9NO2 is 151.16256
Molar mass of SO2 is 64,0638
Molar mass of Na2CO3 is 105.98843856
Molar mass of Na2CO3 is 105.98843856
Molar mass of C2h2 is 26.03728
Molar mass of (NH4)2SO4 is 132.13952
Molar mass of H2O is 18,01528
Molar mass of C6h12o6 is 180,15588
Molar mass of FeCO3 is 115.8539
Molar mass of K2HPO4*3H2O is 228.221742
Molar mass of MgSO4*7H2O is 246.47456
Molar mass of MgSO4*7H2O is 246.47456
Molar mass of CCl4 is 153.8227
Molar mass of C(CH3)4 is 72,14878
Molar mass of KCH3CO2 is 98.14232
Molar mass of HNO3 is 63,01284
Molar mass of C6H2Cl5 is 251.34508
Molar mass of mg is 24.305
Molar mass of C3h8 is 44.09562
Molar mass of H2PdCl4 is 250.24788
Molar mass of C6h6 is 78.11184
Molar mass of H2O is 18.01528
Molar mass of P is 30.973762
Molar mass of P is 30.973762
Molar mass of C2h4 is 28.05316
1,753
3,575
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.609375
3
CC-MAIN-2019-35
latest
en
0.574735
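Each entry in the table above is a weighted sum of standard atomic masses over the parsed formula. A minimal check for the first entry (SO3); the atomic masses here are standard values I am supplying, consistent with the table, not data scraped from the page:

```java
public class MolarMass {
    public static void main(String[] args) {
        double s = 32.065;   // g/mol, standard atomic mass of sulfur
        double o = 15.9994;  // g/mol, standard atomic mass of oxygen
        double so3 = s + 3 * o; // one sulfur plus three oxygens
        System.out.println("Molar mass of SO3 is " + so3); // ~80.0632, matching the table
    }
}
```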
https://en.algorithmica.org/hpc/architecture/functions/
1,721,252,780,000,000,000
text/html
crawl-data/CC-MAIN-2024-30/segments/1720763514809.11/warc/CC-MAIN-20240717212939-20240718002939-00236.warc.gz
205,611,696
8,695
Functions and Recursion - Algorithmica

To "call a function" in assembly, you need to jump to its beginning and then jump back. But then two important problems arise:

1. What if the caller stores data in the same registers as the callee?
2. Where is "back"?

Both of these concerns can be solved by having a dedicated location in memory where we can write all the information we need to return from the function before calling it. This location is called the stack.

# The Stack

The hardware stack works the same way software stacks do and is similarly implemented as just two pointers:

• The base pointer marks the start of the stack and is conventionally stored in rbp.
• The stack pointer marks the last element of the stack and is conventionally stored in rsp.

When you need to call a function, you push all your local variables onto the stack (which you can also do in other circumstances; e.g., when you run out of registers), push the current instruction pointer, and then jump to the beginning of the function. When exiting from a function, you look at the pointer stored on top of the stack, jump there, and then carefully read all the variables stored on the stack back into their registers.

You can implement all that with the usual memory operations and jumps, but because of how frequently it is used, there are 4 special instructions for doing this:

• push writes data at the stack pointer and decrements it.
• pop reads data from the stack pointer and increments it.
• call puts the address of the following instruction on top of the stack and jumps to a label.
• ret reads the return address from the top of the stack and jumps to it.

You would call them "syntactic sugar" if they weren't actual hardware instructions; they are just fused equivalents of these two-instruction snippets:

    ; "push rax"
    sub rsp, 8
    mov QWORD PTR[rsp], rax

    ; "pop rax"
    mov rax, QWORD PTR[rsp]
    add rsp, 8

    ; "call func"
    push rip ; <- instruction pointer (although accessing it like that is probably illegal)
    jmp func

    ; "ret"
    pop rcx ; <- choose any unused register
    jmp rcx

The memory region between rbp and rsp is called a stack frame, and this is where local variables of functions are typically stored. It is pre-allocated at the start of the program, and if you push more data on the stack than its capacity (8MB by default on Linux), you encounter a stack overflow error. Because modern operating systems don't actually give you memory pages until you read or write to their address space, you can freely specify a very large stack size, which acts more like a limit on how much stack memory can be used, and not a fixed amount every program has to use.

# Calling Conventions

The people who develop compilers and operating systems eventually came up with conventions on how to write and call functions. These conventions enable some important software engineering marvels such as splitting compilation into separate units, reusing already-compiled libraries, and even writing them in different programming languages. Consider the following example in C:

    int square(int x) {
        return x * x;
    }

    int distance(int x, int y) {
        return square(x) + square(y);
    }

By convention, a function should take its arguments in rdi, rsi, rdx, rcx, r8, r9 (and the rest in the stack if those weren't enough), put the return value into rax, and then return.
Thus, square, being a simple one-argument function, can be implemented like this:

    square:             ; x = edi, ret = eax
        imul edi, edi
        mov  eax, edi
        ret

Each time we call it from distance, we just need to go through some trouble preserving its local variables:

    distance:           ; x = rdi/edi, y = rsi/esi, ret = rax/eax
        push rdi
        push rsi
        call square     ; eax = square(x)
        pop  rsi
        pop  rdi
        mov  ebx, eax   ; save x^2
        mov  rdi, rsi   ; move new x=y
        push rdi
        push rsi
        call square     ; eax = square(x=y)
        pop  rsi
        pop  rdi
        add  eax, ebx   ; x^2 + y^2
        ret

There are a lot more nuances, but we won't go into detail here because this book is about performance, and the best way to deal with function calls is actually to avoid making them in the first place.

# Inlining

Moving data to and from the stack creates noticeable overhead for small functions like these. The reason you have to do this is that, in general, you don't know whether the callee is modifying the registers where you store your local variables. But when you have access to the code of square, you can solve this problem by stashing the data in registers that you know won't be modified.

    distance:
        call square
        mov  ebx, eax
        mov  edi, esi
        call square
        add  eax, ebx
        ret

This is better, but we are still implicitly accessing stack memory: you need to push and pop the instruction pointer on each function call. In simple cases like this, we can inline function calls by stitching the callee's code into the caller and resolving conflicts over registers. In our example:

    distance:
        imul edi, edi   ; edi = x^2
        imul esi, esi   ; esi = y^2
        add  edi, esi
        mov  eax, edi   ; there is no "add eax, edi, esi", so we need a separate mov
        ret

This is fairly close to what optimizing compilers produce out of this snippet; only they use the lea trick to make the resulting machine code sequence a few bytes smaller:

    distance:
        imul edi, edi        ; edi = x^2
        imul esi, esi        ; esi = y^2
        lea  eax, [rdi+rsi]  ; eax = x^2 + y^2
        ret

In situations like these, function inlining is clearly beneficial, and compilers mostly do it automatically, but there are cases when it's not, and we will talk about them in a bit.

# Tail Call Elimination

Inlining is straightforward to do when the callee doesn't make any other function calls, or at least if these calls are not recursive. Let's move on to a more complex example. Consider this recursive computation of a factorial:

    int factorial(int n) {
        if (n == 0)
            return 1;
        return factorial(n - 1) * n;
    }

Equivalent assembly:

    ; n = edi, ret = eax
    factorial:
        test edi, edi   ; test if a value is zero
        jne  nonzero    ; (the machine code of "cmp rax, 0" would be one byte longer)
        mov  eax, 1     ; return 1
        ret
    nonzero:
        push edi        ; save n to use later in multiplication
        sub  edi, 1
        call factorial  ; call f(n - 1)
        pop  edi
        imul eax, edi
        ret

If the function is recursive, it is still often possible to make it "call-less" by restructuring it. This is the case when the function is tail recursive, that is, it returns right after making a recursive call. Since no actions are required after the call, there is also no need for storing anything on the stack, and a recursive call can be safely replaced with a jump to the beginning, effectively turning the function into a loop.
To make our factorial function tail-recursive, we can pass a "current product" argument to it:

    int factorial(int n, int p = 1) {
        if (n == 0)
            return p;
        return factorial(n - 1, p * n);
    }

Then this function can be easily folded into a loop:

    ; assuming n > 0
    factorial:
        mov  eax, 1
    loop:
        imul eax, edi
        sub  edi, 1
        jne  loop
        ret

The primary reason why recursion can be slow is that it needs to read and write data to the stack, while iterative and tail-recursive algorithms do not. This concept is very important in functional programming, where there are no loops and all you can use are functions. Without tail call elimination, functional programs would require way more time and memory to execute.
1,771
7,388
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.59375
3
CC-MAIN-2024-30
latest
en
0.914161
https://sigmatricks.com/finding-angle-right-triangle/
1,719,003,688,000,000,000
text/html
crawl-data/CC-MAIN-2024-26/segments/1718198862157.88/warc/CC-MAIN-20240621191840-20240621221840-00653.warc.gz
454,968,481
8,851
Finding an Angle in a Right Angled Triangle

We can find an unknown angle in a right-angled triangle, as long as we know the lengths of two of its sides. Use Sine, Cosine or Tangent!

How to Find an Angle in a Right Angled Triangle

1. Find the names of the two sides we know:
• Opposite is the side opposite the angle
• Adjacent is the side next to the angle
• Hypotenuse is the longest side

2. Find which one of Sine, Cosine or Tangent to use:
SOH... Sine: sin(θ) = Opposite / Hypotenuse
CAH... Cosine: cos(θ) = Adjacent / Hypotenuse
TOA... Tangent: tan(θ) = Opposite / Adjacent

3. Put the values into the equation.

4. Solve that equation to find the value of θ.

Example

sin(θ) = Opposite / Hypotenuse = 3 / 6 = 0.5

sin(θ) = 0.5, so θ = sin⁻¹(0.5) = 30°
229
713
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.96875
4
CC-MAIN-2024-26
latest
en
0.824838
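The worked example above (opposite 3, hypotenuse 6) can be checked in a couple of lines; Math.asin returns radians, so the result has to be converted to degrees. This is a sketch I am adding, not code from the page:

```java
public class AngleFromSides {
    public static void main(String[] args) {
        double opposite = 3.0, hypotenuse = 6.0;
        // theta = sin^-1(opposite / hypotenuse), converted from radians to degrees
        double theta = Math.toDegrees(Math.asin(opposite / hypotenuse));
        System.out.println(theta); // approximately 30.0 degrees
    }
}
```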
https://www.codevelop.art/tag/competitive-programming
1,708,945,218,000,000,000
text/html
crawl-data/CC-MAIN-2024-10/segments/1707947474659.73/warc/CC-MAIN-20240226094435-20240226124435-00894.warc.gz
708,060,534
6,586
Category: Competitive Programming Posts of Category: Competitive Programming TOP TAGS : 1. ## Find kth Smallest and Largest Element in an Array in C++ Find kth Smallest and Largest Element in an Array in C++ Hello everyone, in this post we are going to go through a very popular and recently asked coding question.  Finding the kth smallest and largest ele...Learn More 2. ## Count Trailing Zeros in Factorial of Number Count Trailing Zeros in Factorial of Number Here you will learn about how to count trailing zeros in factorial of number. One simple approach to count trailing zeros is first find factorial of number and then c...Learn More 3. ## Factorial of Large Number in C and C++ Here you will get program to find factorial of large number in C and C++. Factorial of big numbers contain so many digits. For example factorial of 100 has almost 158 digits. So there is no data type available ...Learn More 4. ## Two semesters of pure Data Structures and Algorithms A journey of grind in Codeforces, Codechef and a fair combination of LeetCode and Geeks for Geeks. Data Structures and Algorithms happen to be an integral part of any Computer Science and Engineering student a...Learn More 5. ## Problem Solving(Segment Trees — part 2) photo from unsplash Hello everyone, hope you people are safe and healthy. In my previous blog we have already started with the segment trees and now, I will discuss two interesting problems in this and see how...Learn More 6. ## Getting Started with Competitive Programming 👨🏻‍💻 March 24, 2020. Almost all of the colleges in India are shut due to the COVID-19 pandemic. We know you might be bored to death at home. If you are someone who has heard about “Competitive Programming” either ...Learn More
391
1,747
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.640625
3
CC-MAIN-2024-10
latest
en
0.893883
https://www.coursehero.com/file/p7krsh/1514-What-is-the-difference-between-how-the-largest-candidate-rule-works-and/
1,542,824,242,000,000,000
text/html
crawl-data/CC-MAIN-2018-47/segments/1542039749562.99/warc/CC-MAIN-20181121173523-20181121195523-00462.warc.gz
810,879,380
144,841
63874-Ch15

15.14 What is the difference between how the largest candidate rule works and how the Kilbridge and Wester method works?
Answer: In the largest candidate rule, the algorithm begins with the work elements listed in descending order of their time values, whereas in the Kilbridge and Wester method, the algorithm operates on the work elements listed according to their precedence order in the precedence diagram.

15.15 In a mixed-model assembly line, what is the difference between variable-rate launching and fixed-rate launching?
Answer: In variable-rate launching, the time interval between the launching of the current base part and the next is set equal to the cycle time of the current unit. Since different models have different work content times and thus different task times per station, their launch time intervals vary. In fixed-rate launching, the time interval between two consecutive launches is constant. The time interval in fixed-rate launching is an average based on the product mix and production rates of models on the line.

15.16 What are storage buffers and why are they sometimes used on a manual assembly line?
Answer: A storage buffer is a location in the production line where work units are temporarily stored. As identified in the text, the reasons to include one or more storage buffers in a production line include: (1) to accumulate work units between two stages of the line when their production rates are different; (2) to smooth production between stations with large task time variations; and (3) to permit continued operation of certain sections of the line when other sections are temporarily down for service or repair.

PROBLEMS

Single Model Assembly Lines

15.1 A product whose work content time = 47.5 min is to be assembled on a manual production line. The required production rate is 30 units per hour. From previous experience, it is estimated that the manning level will be 1.25, proportion uptime = 0.95, and repositioning time = 6 sec. Determine (a) cycle time, and (b) ideal minimum number of workers required on the line. (c) If the ideal number in part (b) could be achieved, how many workstations would be needed?

Solution: (a) Tc = 60(0.95)/30 = 1.9 min
(b) w = Minimum Integer >= 47.5/1.9 = 25 workers
(c) n = 25/1.25 = 20 workstations

15.2 A manual assembly line has 17 workstations with one operator per station. Work content time to assemble the product = 28.0 min. Production rate of the line = 30 units per hour. The proportion uptime = 0.94, and repositioning time = 6 sec. Determine the balance delay.

Solution: Tc = 60(0.94)/30 = 1.88 min, Ts = 1.88 - 0.1 = 1.78 min
w = n = 17 workers and 17 stations
Eb = 28.0/(17 x 1.78) = 0.9253, d = 1 - 0.9253 = 0.0747 = 7.47%

15.3 A manual assembly line must be designed for a product with annual demand = 100,000 units. The line will operate 50 wks/year, 5 shifts/wk, and 7.5 hr/shift. Work units will be attached to a continuously moving conveyor. Work content time = 42.0 min. Assume line efficiency = 0.97, balancing efficiency = 0.92, and repositioning time = 6 sec. Determine (a) hourly production rate to meet demand, and (b) number of workers required.
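As a numeric cross-check of Problem 15.1, here is a small Java sketch of the three formulas used in the solution (variable names are ours, not the textbook's; the small epsilon guards the ceiling against floating-point round-up):

```java
public class AssemblyLine {
    public static void main(String[] args) {
        double workContent = 47.5;  // Twc, min
        double rate = 30.0;         // Rp, units/hr
        double uptime = 0.95;       // proportion uptime E
        double manning = 1.25;      // manning level M

        double tc = 60.0 * uptime / rate;                        // (a) cycle time = 1.9 min
        int workers = (int) Math.ceil(workContent / tc - 1e-9);  // (b) minimum integer >= Twc/Tc = 25
        int stations = (int) Math.round(workers / manning);      // (c) n = w/M = 20

        System.out.printf("Tc = %.2f min, w = %d, n = %d%n", tc, workers, stations);
    }
}
```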
1,081
4,489
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.546875
4
CC-MAIN-2018-47
latest
en
0.939188
heydocsjmmht.netlify.app
1,709,348,421,000,000,000
text/html
crawl-data/CC-MAIN-2024-10/segments/1707947475727.3/warc/CC-MAIN-20240302020802-20240302050802-00759.warc.gz
294,789,695
11,539
# Linear packing factor manual

## Foundations of Materials Science and Engineering Solution Manual. 21. CHAPTER 3 3.20 Calculate the atomic packing factor for the FCC structure. By definition Planar. Intercepts. Reciprocals of Intercepts. Planar. Intercepts. Reciprocals. From: Handbook of Fillers (Fourth Edition), 2016 Random close packing density vs. particle size ratio for particles with size ratio close to one. Indirect measurement is based on the modeling of the linear evolution of the pressure drop,

## Packing gland live insertion/removal up to 600 psig (41.4 barg) must use a retractor (removable or welded on). (+$657.00)

29 Jul 2014 Tutorial illustrating how to calculate linear densities, planar densities & atomic packing factors in an example (FCC) lattice. Video lecture for

ties such as particle density, particle size distribution and eigen-packing are necessary input parameters in "Linear Packing Density Model of Grain Mixtures".

7 Mar 2016 Instructors' Solution Manual THE SCIENCE AND ENGINEERING OF 3–68 Determine the repeat distance, linear density, and packing fraction

TOYOPEARL. ToYoPeARL® Instruction Manual. Table of Contents. I. Packing. 2. 1. Preparation according to the above procedures, and operated at linear velocities of 50

asymmetry factors, please repeat the packing procedure. If column

recommendation in the 4C manual for choosing Mu-value 0.07 for Danish material, can be applicable for Swedish Key words: concrete mix design, particle packing, packing density, Modified Toufar, 4C. Linear Packing Density Model.

In crystallography, atomic packing factor (APF), packing efficiency or packing fraction is the fraction of volume in a crystal structure that is occupied by constituent

In 1994, by maximising the packing density of the cementitious materials quantitative measure, the consistence of the paste has to be determined by manual.

7.9.1 Residue Density Fit; 7.9.2 Rotamer Analysis; 7.9.3 Temperature Factor Variance (Section Display Manager) allow the production of a packing diagram. You can specify the residues that you want to refine without using a linear or

In crystallography, atomic packing factor (APF), packing efficiency or packing fraction is the sum of the sphere DOE Fundamentals Handbook, Volume 1 and 2.

3 Oct 2014 A statistically significant drop in the cone packing density was observed A simple linear regression was applied to analyze the variation in cone Some have a manual addition to the automated software and a few have

No part of this manual may be reproduced, stored in a retrieval system, or transmitted, in any Intensity scaling factor for the database entry's peaks which gives the best agreement with the unknown The minimum value for a linear lattice parameter has been fixed to 2.5 angstroms. atomic weight and packing density [37].

The definition of particle packing density αt is the solid volume of Schwanda, 1966], Linear-Mixture Packing Model [Yu and Standish, 1987; 1996] and the Idorn, G.M. (1995) Europack V1.1 User Manual Europack (G.M. Idorn Consult A/S).

Supplements to the text include the Instructor's Solutions Manual that provides complete solutions to Repeat Distance, Linear Density, and Packing Fraction.

law, for example the Poisson's linear point process.
For small packing fractions volume of an inscribed sphere, and pf is the packing fraction. In relation 1.26, authors as indicated in the User Instructions included with a complete download.

6 Aug 2014 2.9 A (001) slice illustrating electron-density distribution in MgB2. The present PDF file, VESTA Manual.pdf, needs to share the same folder with a binary executable file to satisfy linear constraints imposed on them [52, 53].

In this work, we develop a DNN prediction model for the packing density of small organic We employ a linear fit between the calculated and experimental values to of these tasks were performed by ChemHTPS without manual intervention.

13 Jan 2020 Consequently, a previously developed linear packing model is modified so and maximum dry packing density when incorporating fines cohesive Preparation of Sediment Manual of the Committee on Sedimentation of the

2 Jul 2014 unit cell and (b) the packing factor in the unit cell. Solution: 18 The Science and Engineering of Materials Instructor's Solution Manual (b) There Only the [110] is close packed; it has a linear packing fraction of 1.

Today, you have more choices of columns and packing materials to suit an ever (especially its viscosity), and flow rate or average linear velocity. The separation factor is a measure of the time or distance between the maxima of two peaks.

## Instructions 28-9958-80 AC Core beads. Capto™ Core Packing Factor 1.15 in water at 20ºC. approximated as the bed height (cm) divided by the linear flow.

Barcode printing errors and poor label quality require manual intervention, slowing down production and presenting challenges to vendors and partners once product leaves the plant.

China Liquid Filling Machine manufacturers - Select 2020 high quality Liquid Filling Machine products in best price from certified Chinese Machine manufacturers, Filling Machine suppliers, wholesalers and factory on Made-in-China.com. Vacuum Packing Machine Factory, Custom Vacuum Packing Machine… (https://made-in-china.com/factory/vacuum-packing-machine.html) Looking for vacuum packing machine factory direct sale? You can buy factory price vacuum packing machine from a great list of reliable China vacuum packing machine manufacturers, suppliers, traders or plants verified by a third-party…

The densest known regular sphere-packing in two, three and four dimensions uses the cell centers of one of these tessellations as sphere centers.

Choose from 60 top Production Line stock illustrations from iStock. Find high-quality royalty-free vector images that you won't find anywhere else.

Penglai Industrial Corporation, as a famous manufacturer supplying various machines like filling, capping and labeling as well as packing machines ever since the new century, is willing to offer you help in the pharma, cosmetic and food making process.

Power Transmission - Linear Bearings, Housings & Blocks products such as Linear Plain Bearings available online from the world's largest high service distributor of Mechanical Products & Tools.
#### A liquid distributor is designed to wet the packing bed evenly and initiate uniform curve is typically linear in the concentration ranges usually encountered in air is calculated from the following factors, presented in Section 1 of this manual

Foundations of Materials Science and Engineering Solution Manual. 10.5 What two main factors affect the packing of ions in ionic solids? 10.11 Calculate the linear densities in ions per nanometer in the [110] and [111] directions for.
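Several of the excerpts above cite the atomic packing factor; as an illustration of the quoted definition, here is a short Java computation of the APF for the FCC case mentioned in the solution-manual excerpt (standard crystallography, assumed rather than taken from any of the manuals):

```java
public class PackingFactor {
    public static void main(String[] args) {
        double r = 1.0;                         // atom radius (any units)
        double a = 2.0 * Math.sqrt(2.0) * r;    // FCC lattice parameter: a = 2*sqrt(2)*r
        double atomVolume = 4.0 / 3.0 * Math.PI * Math.pow(r, 3);
        double apf = 4.0 * atomVolume / Math.pow(a, 3); // 4 atoms per FCC cell
        System.out.printf("FCC APF = %.4f%n", apf);     // ~0.7405 = pi/(3*sqrt(2))
    }
}
```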
1,578
7,340
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.84375
3
CC-MAIN-2024-10
longest
en
0.787908
https://completesuccess.in/index.php/2017/09/15/quiz-272/
1,702,266,066,000,000,000
text/html
crawl-data/CC-MAIN-2023-50/segments/1700679103464.86/warc/CC-MAIN-20231211013452-20231211043452-00623.warc.gz
221,074,670
30,519
# Quiz – 272

### Quant Quiz

Directions: What should come in place of the question mark (?) in the following questions:

Q1. 4 + 4.44 + 0.4 + 44.04 + 444 = ? 1) 486.48 2) 496.88 3) 486.88 4) 496.84 5) None of these

Q2. (?)^2 + (65)^2 = (160)^2 – (90)^2 – 7191 1) 72 2) 68 3) 6084 4) 78 5) 82

Q3. 0.07 % of 1250 – 0.02 % of 650 = ? 1) 0.745 2) 0.875 3) 0.545 4) 0.695 5) None of these

Q4. sqroot(6.25) / 0.5 = ?/10 1) 5 2) 500 3) 50 4) 2.5 5) None of these

Q5. 1/8 * (223 + ?) = 73 1) 361 2) 371 3) 341 4) 391 5) None of these

Q6. cuberoot(12167) * sqroot(16384) = ? * 11.5 1) 204 2) 234 3) 286 4) 256 5) None of these

Q7. [18 * 14 – 6 * 8] / [488 / 4 – 20] = ? 1) 3 2) 2.5 3) 4 4) 8 5) None of these

Q8. sqroot(7225) * 1/5 + (45)^2 = ? 1) 2058 2) 2042 3) 2040 4) 2038 5) None of these

Q9. 25.6 % of 250 + sqroot(?) = 119 1) 3125 2) 3025 3) 55 4) 65 5) 4225

Q10. 7428 * 3/4 * 2/9 * ? = 619 1) 0.2 2) 0.8 3) 0.5 4) 2.4 5) 1.5

Answers: 1. 2  2. 4  3. 1  4. 3  5. 1  6. 4  7. 5  8. 2  9. 2  10. 3

### Reasoning Quiz

Directions (Q. 1-5): Each of the questions below consists of a question and three statements numbered I, II and III given below it. You have to decide which of the statements provide data sufficient to answer the question. Choose your answer accordingly.

Q1. Ram, Mukesh, Raju, Raheem, Rohit and Rajesh are sitting around a circular table facing outside. Who is sitting opposite Ram? I. Mukesh and Raju are sitting opposite each other. II. Rohit may sit either on the immediate right of Mukesh or on the immediate left of Raju. III. Raheem can't sit opposite Rohit and Ram can't sit opposite Rohit. 1) Only I and II 2) Only II and III 3) Only I and II or III 4) All I, II and III 5) Data inadequate

Q2. How is Mayank related to Seema? I. Rajeev has two daughters. One of them is Reema, who is married to Mayank. II. Seema is the mother of Rinki, the younger sister of Reema. III. Rajeev is Seema's husband. 1) Only I and II 2) Only I and III 3) Only I and either II or III 4) Any two of the three 5) All are necessary

Q3. Who among A, B, C, D and E was the first to reach the station? I. B reached earlier than E. A and C were not the first to reach. II. A reached earlier than both C and E but could not reach earlier than D, who was at the station before B reached. III. C didn't reach just after A. 1) Only I and II 2) Only I and II or III 3) Only II and III 4) All I, II and III 5) None

Q4. If A, B, C, D and E are sitting in a row facing south, who among them is in the middle? I. E is at the left end of the row. II. D sits between A and C. III. Neither A nor C sits at an extreme end. 1) Only I and II 2) Only II and III 3) Any two of the three 4) All I, II and III 5) Data inadequate

Q5. On which day of the week did Anika arrive? I. Her sister Teena correctly remembers that she did not arrive on Wednesday. II. Her friend Meena correctly remembers that she arrived before Friday. III. Her mother correctly remembers that she arrived before Friday but after Tuesday. 1) Only I and II 2) Only II and III 3) Only I and III 4) All are correct 5) Data inadequate

Directions (Q. 6-10): Read the following information carefully and answer the questions that follow:

Dinesh, Mukesh, Rahul, Geeta, Rashmi, Sunil, Naveen and Sonam are eight students pursuing doctorates in eight different subjects, viz Statistics, Physics, Economics, English, Management, Chemistry, Zoology and Botany. They are debating on a topic and sitting around a circular table. The student of Physics is two places away from Geeta, who is neither the student of Zoology nor of English. Dinesh is third to the left of the student of Statistics and opposite the student of Management. The student of Management is not in front of Rahul and Sonam. Rahul studies neither Physics nor Chemistry. Sonam doesn't study Botany. Geeta is to the immediate right of the person who is opposite the student of Chemistry. Rashmi is opposite either the student of Economics or English, but she is near to the student of Physics. Naveen sits immediately to the right of the student of English, who is third to the left of the student of Botany. The student of English is two places to the right of the student of Chemistry. Mukesh is the immediate neighbour of the student of Chemistry, and Naveen is one place away from Sunil.

Q6. Who studies Management? 1) Geeta 2) Sunil 3) Rahul 4) Sonam 5) Can't be determined

Q7. How many students sit between Dinesh and Geeta? 1) None 2) One 3) Two 4) Three 5) Can't be determined

Q8. Which of the following statements is true about the arrangement? 1) The student of Zoology is the neighbour of the student who is opposite the student of Management. 2) The student of Management and the student of Economics are adjacent to each other. 3) Sonam is opposite the person who is immediately to the left of Geeta. 4) The student of Economics is opposite the student of Statistics. 5) None of these

Q9. Which of the following combinations is true? 1) Mukesh-Zoology 2) Naveen-Management 3) Sunil-Physics 4) English-Rahul 5) Sonam-Statistics

Q10. Which of the following statements is correct? 1) There is only one student between the students of English and Economics. 2) Sonam is the student of Chemistry, but Geeta can never be the student of Management. 3) The student of Botany is opposite Rahul. 4) Naveen is third to the left of the student of Economics. 5) None of these

Answers:
1. 4
2. 3
3. 5 (D > B and D > A > C, E)
4. 2
From I: Right _ _ _ _ E Left
From III: A and C do not sit at the ends of the row
From I and II: B's position is not clear, so I and II together are not sufficient.
From II and III: two arrangements are possible, _ A D C _ and _ C D A _. In both arrangements D sits at the middle position; therefore II and III together are sufficient.
From I and III we can't say who sits at the middle position, as the exact positions of A, B, C and D are not clear; so I and III even together are not sufficient.
5. 3
6. 1
7. 4
8. 3
9. 3
10. 4

### English Quiz

Directions (1-10): In each question below, a sentence is given with a part of it printed in bold type. That part may contain a grammatical error.
Each sentence is followed by phrases (a), (b), (c) and (d). Find out which phrase should replace the phrase given in bold to correct the error, if there is any, and to make the sentence grammatically meaningful and correct. If the sentence is correct as it is and no correction is required, mark (e) as the answer.

Q1. Fishing and swimming are two different activities, independence of one another. (a) independent of the other (b) independence of the other (c) independent of each other (d) interdependence on each other (e) No correction required

Q2. An early action on our suggestion, preferably before the elections are announced, will be appreciative. (a) would be appreciate (b) would have been appreciate (c) would have been appreciated (d) will be appreciated (e) No correction required

Q3. He is the man whose advice is difficult in following. (a) advice is not easy in following (b) advice is difficult to follow (c) advice has difficult to follow (d) advice has difficulty to follow (e) No correction required

Q4. He told me that he only had a little money. (a) tells me that he only has a little (b) told me that only he has a little (c) only told me that he has little (d) told me that he had only a little (e) No correction required

Q5. You must ensure that I get my cheque encash before Saturday. (a) my cheque cashed (b) cash my cheque (c) my cheque cash (d) encash my cheque (e) No correction required

Q6. He persevered and succeeded to the face of all obstacles (a) in facing all the (b) to all the face of (c) by the face of all (d) at the face of the all (e) No correction required

Q7. The quality of services provided by them has not been effectively monitored. (a) has not being effective in monitoring (b) have not been effectively monitored (c) has not being effectively monitored (d) is not being effective in monitoring (e) No correction required

Q8. We appreciate your resourcefulness in effectively handling considerable difficult exercises. (a) considerable difficulty (b) considerably difficult (c) considered difficulty (d) considerably and difficulty (e) No correction required

Q9. The Chairman approved the recommendations of the committee with partiality modifications. (a) by partially modified (b) with partial modifications (c) with partial modifies (d) by partially modifying (e) No correction required

Q10. The possible market where the product can sold depends upon several considerations including the tastes, likes, dislikes, etc. of the inhabitants. (a) produce can sold (b) product can sale (c) product can be sold (d) product can be selling (e) No correction required

Answers: 1. c  2. d  3. b  4. d  5. a  6. a  7. e  8. b  9. b  10. c

### Computer Quiz

Q1. Which of the following is a Web browser? a) Paint b) PowerPoint c) Firefox d) Word e) All are Web browsers

Q2. Which of the following terms is associated with a fixed-length contiguous block of virtual memory?
a) A page             b) Memory page             c) Virtual page             d) Only (a) and (b)             e) All of the above Q3. The fourth generation computers use which technology for both CPU and memory that allows millions of transistors on a single chip? a) Vacuum Tubes             b) VLSI Technology             c) Cloud Computing             d) Generic Algorithm             e) None of the above Q4. In PowerPoint, what is the function of Alt+H? a) Open the Transitions tab             b) Open the Home tab             c) Open the Insert tab             d) Open the Review tab e) Open the Tell me box Q5. What type of Internet company provides pay-per-use software? a) Software leasing b) Software developers c) Software-as-a-service (Saas)             d) Application service provider (ASP) e) None of the above Q6. What is SQL? a) Language used to communicate with database             b) Language used for object oriented programming c) Language used to program system software d) Language used to hack into other systems e) Language used for programming devices Q7. Changing cardinality in a database is____? a) A common database design task b) A rare database design task, but does occur c) A database design task that never occurs d) Is impossible to do, so a new database must be constructed and the data moved into it             e) Not a database design task Q8. Who is the father of computers? a) John von Neumann             b) Albert Einstein             c) Charles Babbage             d) Joseph Eckert             e) None of the above Q9. What is the shortcut key to move one word to the left in MS Word? a) Tab + Left Arrow                       b) Alt + Left Arrow             c) Shift + Left Arrow             d) Ctrl + Left Arrow             e) None of the above Q10. Who is said to be the first computer programmer? a) Ada Lovelace             b) Charles Babbage             c) John Mauchly             d) Douglas Engelbart             e) None of the above
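As a quick check of Q6 from the Quant Quiz above: cuberoot(12167) = 23 and sqrt(16384) = 128, so ? = 2944/11.5 = 256, which is option 4. A one-line Java verification:

```java
public class QuantQ6 {
    public static void main(String[] args) {
        double lhs = Math.cbrt(12167) * Math.sqrt(16384); // 23 * 128 = 2944
        System.out.println(lhs / 11.5);                   // prints 256.0 -> option 4
    }
}
```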
3,413
12,591
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.953125
3
CC-MAIN-2023-50
latest
en
0.434363
https://www.plati.com/itm/mathematical-methods-of-operations-research-in-economic/1816376
1,540,216,545,000,000,000
text/html
crawl-data/CC-MAIN-2018-43/segments/1539583515088.88/warc/CC-MAIN-20181022134402-20181022155902-00423.warc.gz
1,041,684,122
14,260
# Mathematical Methods of Operations Research in Economics

Affiliates: 0,3 $ (how to earn)

Sold: 56, last one 08.05.2018

Refunds: 0

Content: 41017143510647.xls 47,5 kB

Loyalty discount! If the total amount of your purchases from the seller Kirill Zakharov is more than $500, the discount is 30%; if more than $10, the discount is 5% (show all discounts). If you want to know your discount rate, please provide your email:

# Seller

Kirill Zakharov: information about the seller and his items

Seller will give you a gift certificate in the amount of 6 RUB for a positive review of the product purchased.

# Description

Answers to the test of the Open Law Institute on the subject "Mathematical Methods of Operations Research in Economics", in a file for the Excel program Test Client. Answer format: for each question, the number of the correct answer is given, along with the correct answer itself written out. These are the latest versions of the OYUI test answers and guarantee a 100% result on the test, for a grade of 5.

The questions in the test:

In the problem of dynamic programming, Xk denotes
The number of inequalities in the system of constraints of the dual problem
The geometric meaning of the simplex method to solve the problem to the maximum is a successive transition from one vertex of the restrictions to
The most used method for solving the transport problem is the method
The problem of the distribution of funds between enterprises
The area of feasible solutions of a linear programming problem is
If Fmax is the optimal solution of the direct problem, and Zmin of the dual, then
The problem of the distribution of funds between enterprises: the profit fk (x) of the k-th enterprise
The simplex method was first proposed
In the first stage of the simplex method is
Suppose that in the problem of distribution of funds between enterprises xk is the funds allocated to the k-th enterprise; sk is the amount of money that is distributed among the remaining n - k enterprises. The equations of state have the form
The problem of the distribution of funds between enterprises applies programming techniques
The transport problem is a problem of programming
Dynamic programming is applied to transactions
The problem of the distribution of funds between enterprises: it is required to determine what amount of money should be allocated to each enterprise to
The problem of the distribution of funds between enterprises: the function fk (xk) is defined
The economic-mathematical model of the transportation problem has limitations in the form of a system
In the left column of the simplex table are recorded
The first line of the simplex table contains
The level line is the line along which the objective function
etc.

Payment is made at any time with immediate delivery of the answers through the instant-purchase service, which guarantees the security of the transaction.

Payment methods:
- Yandex Money
- Webmoney
- Qiwi
- Is your mobile operator MTS, Beeline or MegaFon? You can easily pay for the answers to the OYUI tests from a personal mobile phone account. In the drop-down "Buy" menu, select the desired operator, click "Checkout", check the size of your commission in the statement (maximum of 10% of the price) and follow the instructions.

After payment you will receive a letter in the mail with a link to download the answers to the OYUI tests.

Discounts up to 30% for regular customers. Have a question? Ask: contact details can be found on the seller's card, and there is an "ask a question" function in real time; we respond quickly! With us you are sure to pass the tests quickly and perfectly! :) Thank you for contacting us.
To see all the answers to the OYUI tests, click the link in the product description "Kirill Zakharov: information about the seller and his items."

# Feedback 3

"Everything is great, as always; the test was passed." 2017-10-23

Period: 1 month / 3 months / 12 months
0 / 0 / 1
0 / 0 / 0

In order to counter copyright infringement and violations of property rights, we ask you to immediately inform us at support@plati.market of any such violations and to provide us with reliable information confirming your copyrights or rights of ownership. The email must contain your contact information (name, phone number, etc.)
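One recurring topic in the question list above is the dynamic-programming problem of distributing funds between enterprises, with state equation s(k+1) = s(k) - x(k). A hypothetical Java sketch of that recursion follows; the profit table is invented illustration data, not taken from the test:

```java
public class FundsDP {
    public static void main(String[] args) {
        // profit[k][x] = profit of enterprise k when allocated x money units (made-up data)
        int[][] profit = {
            {0, 3, 4, 5},   // enterprise 1
            {0, 2, 5, 6},   // enterprise 2
            {0, 1, 3, 7}    // enterprise 3
        };
        int total = 3;      // money units to distribute
        int n = profit.length;
        // best[k][s] = max profit from enterprises k..n-1 with s units left;
        // state equation: s_{k+1} = s_k - x_k
        int[][] best = new int[n + 1][total + 1];
        for (int k = n - 1; k >= 0; k--) {
            for (int s = 0; s <= total; s++) {
                for (int x = 0; x <= s; x++) {
                    best[k][s] = Math.max(best[k][s],
                                          profit[k][x] + best[k + 1][s - x]);
                }
            }
        }
        System.out.println("max total profit = " + best[0][total]); // prints 8
    }
}
```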
928
4,269
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.96875
3
CC-MAIN-2018-43
latest
en
0.901862
http://m.wikihow.com/Remember-the-Trigonometric-Table
1,506,035,385,000,000,000
text/html
crawl-data/CC-MAIN-2017-39/segments/1505818687938.15/warc/CC-MAIN-20170921224617-20170922004617-00215.warc.gz
214,381,764
31,063
# How to Remember the Trigonometric Table

Did you ever have any trouble remembering the sine or tangent of an angle? This article explains how you can easily find the basic trigonometric numbers of the most common angles.

## Steps

1. Create a table. In the first row, write down the trigonometric ratios (sin, cos, tan, cot). In the first column, write down the angles (0°, 30°, 45°, 60°, 90°). Leave the other entries blank.
2. Fill in the sine column. We will fill in the blank entries in the sine column using the expression √x/2. Once the sine column is filled, we'll be able to fill all other columns effortlessly!
• For the 1st entry in the sine column (that is, sin 0°), set x = 0 and plug it into the expression √x/2. Thus, sin 0° = √0/2 = 0/2 = 0
• For the 2nd entry in the sine column (that is, sin 30°), set x = 1 and plug it into the expression √x/2. Thus, sin 30° = √1/2 = 1/2
• For the 3rd entry in the sine column (that is, sin 45°), set x = 2 and plug it into the expression √x/2. Thus, sin 45° = √2/2 = 1/√2
• For the 4th entry in the sine column (that is, sin 60°), set x = 3 and plug it into the expression √x/2. Thus, sin 60° = √3/2.
• For the 5th entry in the sine column (that is, sin 90°), set x = 4 and plug it into the expression √x/2. Thus, sin 90° = √4/2 = 2/2 = 1.
3. Fill in the cosine column. Simply copy the entries in the sine column in reverse order into the cosine column. This is valid because sin x° = cos (90-x)° for any x.
4. Fill in the tangent column. We know that tan = sin / cos. So, for every angle take its sin value and divide it by the cos value to get the corresponding tan value. For example, tan 30° = sin 30° / cos 30° = (√1/2) / (√3/2) = 1/√3
5. Fill in the cotangent column. Simply copy the entries in the tangent column in reverse order into the cot column. This is valid because tan x° = sin x° / cos x° = cos (90-x)° / sin (90-x)° = cot (90-x)° for any x.

## Community Q&A

• How do I write sec and cosec values?
Values of cosec, sec and cot can be found by taking the reciprocal of sin, cos and tan respectively for the given angle.

• Why is tan 90 degrees not defined?
The sine of 90° equals 1, and the cosine of 90° equals zero. The tangent of any angle is equal to its sine divided by its cosine. Thus, the tangent of 90° would be 1 divided by zero, and division by zero is undefined. That makes the tangent of 90° undefined.

• How do I fill in cosec and sec values?
You can take the reciprocal of sin to find cosec (e.g., cosec 0° = 1/sin 0° = 1/0, i.e., not defined) and the reciprocal of cos to find sec.

• Where can I find the cos when I know the sin and the location of theta?
Draw the right triangle with the theta value and the two sides given by sin. They should be given to you in the form of a fraction, sin = opposite/hypotenuse. Then use the Pythagorean theorem to solve for the third side and do cos = adjacent/hypotenuse.

• How do I use cos in trigonometry to find an angle?
Calculate the ratio between the adjacent side and the hypotenuse, then look up that ratio in a cosine table, which will tell you the angle.

• How can all the trigonometry identities be learned the fastest?
It's a simple matter of memorization.

• How can I remember angles exceeding 90 degrees?
Graph the equations sin(x), cos(x), tan(x), csc(x), sec(x), and cot(x). Use the x coordinates 0, 90, 180, 270, and 360 to see how each trigonometric function flows on the graph.

• How do I use tan in trig to find an angle?
tan(x) = sin(x)/cos(x)

• Where are sec and cosec?
The author of this article chose not to display those functions. They're easily found, however: the secant of an angle is the reciprocal of its cosine, and the cosecant of an angle is the reciprocal of its sine.

• When perpendicular and hypotenuse are not equal, then why is sine 90 equal to 1?
The sine of any angle (in a right triangle) equals the side opposite the angle divided by the hypotenuse. In the case of a 90° angle, the opposite side is the hypotenuse, so the sine is the hypotenuse divided by itself, which is 1.

• What are the identities?
• How do I solve an inverse trigonometric function?

## Tips

• Do not leave irrational numbers in the denominator. For example, tan 30° = 1/√3. Don't leave it that way. Instead, write it as √3/3.

## Warnings

• You can't divide by 0! tan 90° = ±∞ and cot 0° = ±∞, but ∞ isn't considered an actual number, so don't write it. Write "not defined" or "n/a" (not applicable) instead.
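The whole mnemonic fits in a few lines of code. Here is a Java sketch (ours, not wikiHow's) that prints the table using the √x/2 pattern from the Steps, writing "not defined" where the Warnings require it:

```java
public class TrigTable {
    public static void main(String[] args) {
        int[] angles = {0, 30, 45, 60, 90};
        for (int x = 0; x <= 4; x++) {
            double sin = Math.sqrt(x) / 2.0;       // the sqrt(x)/2 pattern
            double cos = Math.sqrt(4 - x) / 2.0;   // sine column reversed
            String tan = (cos == 0.0) ? "not defined"
                                      : String.format("%.4f", sin / cos);
            System.out.printf("%3d deg: sin=%.4f cos=%.4f tan=%s%n",
                              angles[x], sin, cos, tan);
        }
    }
}
```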
1,391
4,689
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.78125
5
CC-MAIN-2017-39
latest
en
0.766337
https://www.chegg.com/homework-help/precalculus-7th-edition-chapter-7.3-problem-57e-solution-9780618643448
1,529,928,905,000,000,000
text/html
crawl-data/CC-MAIN-2018-26/segments/1529267867666.97/warc/CC-MAIN-20180625111632-20180625131632-00292.warc.gz
769,373,612
15,455
# Precalculus (7th Edition)

Solutions for Chapter 7.3, Problem 57E

Step 1 of 5

Let x, y, and z be the amounts of fertilizer brands X, Y and Z respectively that are needed to obtain the desired mixture.

We are given that brand X contains equal parts of fertilizer B and fertilizer C. Therefore, in the optimal mixture brand X has 0 units of fertilizer A, x units of fertilizer B and x units of fertilizer C.

Brand Y contains one part of fertilizer A and two parts of fertilizer B. Therefore, in the optimal mixture brand Y has y units of fertilizer A, 2y units of fertilizer B and 0 units of fertilizer C.

Brand Z contains two parts of fertilizer A, five parts of fertilizer B and two parts of fertilizer C. Therefore, in the optimal mixture brand Z has 2z units of fertilizer A, 5z units of fertilizer B and 2z units of fertilizer C.

Corresponding Textbook: Precalculus | 7th Edition, ISBN-13: 9780618643448
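The excerpt stops before the system of equations is solved. Purely as a hypothetical completion, suppose the desired mixture were 5 units of A, 13 of B and 4 of C (invented numbers, not the book's); then y + 2z = 5, x + 2y + 5z = 13, x + 2z = 4, which the following Java sketch solves by Gauss-Jordan elimination, giving x = 2, y = 3, z = 1:

```java
public class FertilizerMix {
    public static void main(String[] args) {
        // rows: A, B, C requirements; columns: coefficients of x, y, z | rhs
        // (the right-hand sides 5, 13, 4 are invented illustration data)
        double[][] m = {
            {0, 1, 2, 5},    // A:      y + 2z = 5
            {1, 2, 5, 13},   // B: x + 2y + 5z = 13
            {1, 0, 2, 4}     // C: x      + 2z = 4
        };
        int n = 3;
        // Gauss-Jordan elimination with partial pivoting
        for (int col = 0; col < n; col++) {
            int pivot = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(m[r][col]) > Math.abs(m[pivot][col])) pivot = r;
            double[] tmp = m[col]; m[col] = m[pivot]; m[pivot] = tmp;
            for (int r = 0; r < n; r++) {
                if (r == col) continue;
                double f = m[r][col] / m[col][col];
                for (int c = col; c <= n; c++) m[r][c] -= f * m[col][c];
            }
        }
        System.out.printf("x = %.1f, y = %.1f, z = %.1f%n",
            m[0][3] / m[0][0], m[1][3] / m[1][1], m[2][3] / m[2][2]);
    }
}
```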
392
1,380
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.1875
3
CC-MAIN-2018-26
latest
en
0.745782
https://groups.yahoo.com/neo/groups/primenumbers/conversations/topics/25353?xm=1&o=1&l=1
1,503,111,157,000,000,000
text/html
crawl-data/CC-MAIN-2017-34/segments/1502886105291.88/warc/CC-MAIN-20170819012514-20170819032514-00103.warc.gz
786,136,888
25,774
## RE: Yet another factoring puzzle

Message 1 of 23, Sep 1, 2013

Aurelius surely did NOT mean this to be a square:

> 16*F(n) = 5^(2*n)-18*5^n+1 =
> (5^n-2*5^[(n+1)/2]+1) * (5^n-2*5^[(n+1)/2]+1)

--- In primenumbers@yahoogroups.com, <d.broadhurst@...> wrote:
--- In primenumbers@yahoogroups.com, "mikeoakes2" <mikeoakes2@...> wrote:

> Aurelius [or somebody] he say:
> 16*F(n) = 5^(2*n)-18*5^n+1 =
> (5^n-2*5^[(n+1)/2]+1) * (5^n-2*5^[(n+1)/2]+1)

Wiki, it say:

> There is evidence dating this algorithm as far back as
> the Ur III dynasty.

David

The second factor should be 5^n PLUS 2*5 etc. Another schoolboy howler.

Message 2 of 23, Sep 1, 2013

> > (5^n-2*5^[(n+1)/2]+1) * (5^n-2*5^[(n+1)/2]+1)
> The second factor should be 5^n PLUS 2*5 etc.

Obviously.

> Another schoolboy howler.

No. Just someone typing good maths even faster than his agile brain was working.

David

Message 3 of 23, Sep 1, 2013

Hi David,

> > Exercise 9: Factorize F(265) completely.

F(265) factorization: 2^2 239 739 3001 65482831 256219297361 1693591423953203161 30357718855490049445651878883131113101 102797863053051886296311988529254933989146480741 3877828174415305750004694470712933786753853898797149341 269073781314228860816753896980501633755080784585248485757620731 9064688734866670980979961151235275322157501870878158907210451074982503913520844038924855331866069982604479273540797473039

F(263): one composite factor remaining (in progress).

JL

Message 4 of 23, Sep 1, 2013

"j_chrtn" <j_chrtn@...> wrote:

> > Exercise 9: Factorize F(265) completely.
> F(265) factorization :
> 2^2
> 239
> 739
> 3001
> 65482831
> 256219297361
> 1693591423953203161
> 30357718855490049445651878883131113101
> 102797863053051886296311988529254933989146480741
> 3877828174415305750004694470712933786753853898797149341
> 269073781314228860816753896980501633755080784585248485757620731
> 9064688734866670980979961151235275322157501870878158907210451074982503913520844038924855331866069982604479273540797473039

Congrats. When I did it, I was running ECM and SNFS in parallel and pulled the plug on SNFS when ECM found a p48.

> F(263): one composite factor remaining (in progress).

Here I used GNFS on the C130.

David

Message 5 of 23, Sep 1, 2013

> Congrats. When I did it, I was running ECM and SNFS in parallel
> and pulled the plug on SNFS when ECM found a p48.
> > F(263): one composite factor remaining (in progress).
> Here I used GNFS on the C130.
> David

I first removed small factors with the factorize program from the libgmp demos (= trial divisions + Pollard's rho). Then I found all other factors with ECM (using P-1 mode for some of them) except for C118 = 3877828174415305750004694470712933786753853898797149341 * 269073781314228860816753896980501633755080784585248485757620731, for which I directly chose cado-nfs. For the remaining factor C130 of F(263), I've chosen both ECM and cado-nfs. Still waiting for the result...

JL

Message 6 of 23, Sep 4, 2013

> > F(263): one composite factor remaining (in progress).
> Here I used GNFS on the C130.
> David

C130 factorization completed this morning (cado-nfs winner vs ecm). Finally:

F(263) = 2^2 11 941 88079 40541279 53849801 3709997079374701 23529341871144986702279 7270487490315018281073601513510602536818804246566820732218199 790942341954447264420872400154902667291367695120485038995898872524619 711892421814353474455471503465724397364909744377767780766071778400352308618205366660863738451363497318680099295967052261874590114183928845236941340734473602381665606275060651

JL

Message 7 of 23, Sep 4, 2013

"j_chrtn" <j_chrtn@...> wrote:

> F(263) =

Congrats, again, J-L. This link should show all the factorizations from n=261 to n=290: http://preview.tinyurl.com/mx3cdoe

David

Message 8 of 23, Sep 16, 2013

> Definition: Let F(n) = ((5^n-9)/4)^2-5 for integer n > 0.
>
> Exercise 1: For even n > 2, prove that F(n) is composite.
>
> Exercise 2: For odd n > 1, prove that F(n)/4 is composite.
>
> Exercise 3: For k > 1, prove that F(3*k) has at least 4 odd
> prime divisors.
>
> Exercise 4: Factorize F(6) = 15241211 completely, by hand.
>
> Exercise 5: Find the complete factorization of F(n) for at
> least one odd integer n > 250.
>
> Exercise 6: Find the complete factorization of F(n) for at
> least one even integer n > 600.
>
> Exercise 7: Factorize F(263) completely.
>
> Exercise 9: Factorize F(265) completely.

As you can see, Exercise 8 was censored :-)

As far as I can tell, no-one (apart from the setter) yet solved Exercise 6, which can be done in less than 2 minutes, using OpenPFGW. What is remarkable is that it can be solved so quickly. Heuristically, that was not to be expected.

Thanks to Bernardo Boncompagni, Mike Oakes and Jean-Louis Charton, for disposing neatly of the other exercises, and to Ben Buhrow, for his excellent package, Yafu.

David

Message 9 of 23, Sep 16, 2013

> As you can see, Exercise 8 was censored :-)
> As far as I can tell, no-one (apart from the setter)
> yet solved Exercise 6, which can be done in less
> than 2 minutes, using OpenPFGW. What is remarkable
> is that it can be solved so quickly. Heuristically,
> that was not to be expected.

For sure one can find this solution quickly using openpfgw. But the smart guy now knowing that you (or others) have filled in factordb with many numbers of this form just has to type

http://factordb.com/index.php?query=%28%285^n-9%29%2F4%29^2-5&use=n&perpage=20&format=1&sent=1&PR=1&PRP=1&C=1&CF=1&U=1&FF=1&VP=1&EV=1&OD=1&VC=1&n=600

to find the solution. :-)

J-L

PS: I really did the F(263) and F(265) factorizations completely. I did not check factordb.

Message 10 of 23, Sep 16, 2013

Jean-Louis Charton wrote:

> For sure one can find this solution quickly using openpfgw.
> But the smart guy now knowing that you (or others) have filled
> in factordb with many numbers of this form just has to type...

Indeed, it was I who added http://factordb.com/index.php?id=1100000000464478896 with factors p404 and p414.

Jean-Louis gets full marks for exploiting factordb, instead of doing what I intended, along the lines of

$ more abc
ABC2 25^$a-4*5^$a-1&25^$a+4*5^$a-1
a: from 301 to 304
$ pfgw -f -d -e20000000 abc
25^301-4*5^301-1 has factors: 2^2*7229
25^302-4*5^302-1 has factors: 2^2*3173791
25^303-4*5^303-1 has factors: 2^2*1931*934579
25^304-4*5^304-1 has factors: 2^2*11*29*1289*1759*9511*27851
(25^304-4*5^304-1)/(2^2*11*29*1289*1759*9511*27851) is 3-PRP! (0.0057s+0.2797s)
25^304+4*5^304-1 has factors: 2^2*1439*17390951
(25^304+4*5^304-1)/(2^2*1439*17390951) is 3-PRP! (0.0058s+0.3055s)

But now, J-L, are you able to explain my comment:

> solved so quickly. Heuristically, that was not to be expected.

Can you quantify my surprise?

David

Message 11 of 23, Sep 18, 2013

> But now, J-L, are you able to explain my comment:
> > solved so quickly. Heuristically, that was not to be expected.
> Can you quantify my surprise?

Well, I would say that for n >= 600, both algebraic factors are more than 415 digits, and trial factoring them with pfgw up to 20000000 with the -d option may remove a 20000000-smooth composite factor of, say, 10 or 15 digits. This leaves 2 factors of more than 400 digits. The "probability" for one such factor to be prime is roughly 1/ln(10^400), that is, about 0.001. So for both factors to be prime the probability is about 0.000001. Am I right?

Amicalement,
J-L

Message 12 of 23, Sep 19, 2013

Jean-Louis Charton wrote:

> So for both factors to be prime the probability is about 0.000001.

We seek complete factorization of both of the cofactors 25^k-4*5^k-1 and 25^k+4*5^k-1 for some k > 300, where each has more than 400 decimal digits. We had better avoid the case k = 0 mod 3, where each cofactor has an algebraic factorization.

Suppose that we sieve out primes to depth d and hope for what is left to yield a pair of PRPs as here, with k = 304:

(25^304-4*5^304-1)/(2^2*11*29*1289*1759*9511*27851) is 3-PRP!
(25^304+4*5^304-1)/(2^2*1439*17390951) is 3-PRP!

The probability of success for a single value of k coprime to 3 is of order

(exp(Euler)*log(p)/log(25))^2/k^2

Setting p = 2*10^7 and summing over /all/ k > 300, the probability that Exercise 6 has /no/ solution is

exp(-2/3*(exp(Euler)*log(2*10^7)/log(25))^2/301) =~ 83%

In fact it has a solution almost immediately, at k = 304.

David

Message 13 of 23, Sep 25, 2013

> > Exercise 6: Find the complete factorization of F(n) for at
> > least one even integer n > 600.
> As far as I can tell, no-one (apart from the setter)
> yet solved Exercise 6, which can be done in less
> than 2 minutes, using OpenPFGW. What is remarkable
> is that it can be solved so quickly. Heuristically,
> that was not to be expected.

But is that more or less remarkable than the expectation of any one of Phil Taylor's darts landing in the region around where it actually landed? You only chose that target after the arrow had landed, I'm sure.

How many mathematical diversions have you looked at via the medium of numerical computation? How many of them would you expect to be remarkably easier than expected to solve? Probably a non-zero answer. Don't be surprised that one particular example was one.

Knowing what he's trying to say, even if he's not getting it across clearly,
Phil

--
() ASCII ribbon campaign () Hopeless ribbon campaign
/\ against HTML mail /\ against gratuitous bloodshed

[stolen with permission from Daniel B. Cristofani]

Message 14 of 23, Sep 27, 2013

> You only chose that target after
> the arrow had landed, I'm sure.

It happened thus:

1) I determined to factorize F(n) = ((5^n-9)/4)^2-5 for n <= 300, completely. As later shown in "factordb", I succeeded.

2) Meanwhile I ran OpenPFGW on n in [301,600], hoping for a quick outlier and found none.

3) I estimated the probability of an easily discoverable complete factorization for n > 600 and found it to be small.

4) Recalling how I had once been caught out before by a "probably no more" heuristic, I set a lone process running on n in [601, 10000] so as not to be caught out again by Jens.

5) When I later looked at pfgw.log, it had found a hit at n=608.

So yes, Phil, you are quite correct that the puzzle was set after this finding. However the heuristic that I gave was made prior to my discovery, else I would not have said that I was surprised.

The point that you are making (I think) is that I do such experiments often and only notice when the result is unexpected. I don't tell folk about all the boring times when a negative heuristic is borne out by a null result. That is the selection effect.

David (guilty of not boring folk with what is routine)
4,162
12,673
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.046875
3
CC-MAIN-2017-34
latest
en
0.730623
https://stackoverflow.com/questions/1873832/how-do-i-compare-two-integers
1,656,181,341,000,000,000
text/html
crawl-data/CC-MAIN-2022-27/segments/1656103036077.8/warc/CC-MAIN-20220625160220-20220625190220-00328.warc.gz
602,311,576
65,963
# How do I compare two Integers? [duplicate] I have to compare two `Integer` objects (not `int`). What is the canonical way to compare them? ``````Integer x = ... Integer y = ... `````` I can think of this: ``````if (x == y) `````` The `==` operator only compares references, so this will only work for lower integer values. But perhaps auto-boxing kicks in...? ``````if (x.equals(y)) `````` This looks like an expensive operation. Are there any hash codes calculated this way? ``````if (x.intValue() == y.intValue()) `````` A little bit verbose... EDIT: Thank you for your responses. Although I know what to do now, the facts are distributed on all of the existing answers (even the deleted ones :)) and I don't really know, which one to accept. So I'll accept the best answer, which refers to all three comparison possibilities, or at least the first two. • You shouldn't use Integer x = ... in the first place, use int x = ... instead. Dec 9, 2009 at 13:24 • That was only an example to show the type of x and y. Actually those values come from a List<Integer> where a can't use int. Dec 10, 2009 at 7:55 • Shall I compare thee to a summer's day? Dec 10, 2009 at 18:32 • @starblue: The primitive wrapper classes exist for a (very good) reason. "You shouldn't use Integer x = ... in the first place" sounds misguided at best. Oct 3, 2015 at 22:06 • The wrappers exist for putting integer objects into data structures, not for variables containing a single integer. Oct 4, 2015 at 9:53 This is what the equals method does: ``````public boolean equals(Object obj) { if (obj instanceof Integer) { return value == ((Integer)obj).intValue(); } return false; } `````` As you can see, there's no hash code calculation, but there are a few other operations taking place there. Although `x.intValue() == y.intValue()` might be slightly faster, you're getting into micro-optimization territory there. Plus the compiler might optimize the `equals()` call anyway, though I don't know that for certain. I generally would use the primitive `int`, but if I had to use `Integer`, I would stick with `equals()`. • While this works for non-null values, it will fail if the calling object is itself null. I've yet to find an elegant way to compare "possibly-null" objects. – tbm Aug 5, 2016 at 14:32 Use the `equals` method. Why are you so worried that it's expensive? • Not in absolute terms, but considering how inexpensive an integer comparison on assembly level is, I thought it can only get much worse. Dec 9, 2009 at 13:29 • Don't concern yourself with micro-optimization. Dec 9, 2009 at 13:34 • So what exactly do you thing Integer.equals() does? I put my bet on "a.value==b.value" Dec 9, 2009 at 13:35 • @ammoQ: Essentially it does that, but there seems to be an if-clause with instanceof operation, a cast of b to Integer, and call to its intValue() method. So it definitely is more expensive than a primitive integer comparison. Dec 9, 2009 at 13:57 • @Joonas: But that sounds like something the JIT compiler in the JVM would take care of pretty easy anyway? Dec 9, 2009 at 15:11 ``````if (x.equals(y)) `````` This looks like an expensive operation. Are there any hash codes calculated this way? It is not an expensive operation and no hash codes are calculated. Java does not magically calculate hash codes, `equals(...)` is just a method call, not different from any other method call. 
The JVM will most likely even optimize the method call away (inlining the comparison that takes place inside the method), so this call is not much more expensive than using `==` on two primitive `int` values.

Note: Don't prematurely apply micro-optimizations; your assumptions like "this must be slow" are most likely wrong or don't matter, because the code isn't a performance bottleneck.

• equals is an `instanceof` check, a `cast` and a call to `intValue()`, before the real `==`. Sure not a reason to do some premature optimization. Dec 9, 2009 at 15:03

Minor note: since Java 1.7 the Integer class has a static `compare(int, int)` method, so you can just call `Integer.compare(x, y)` and be done with it (questions about optimization aside). Of course that code is incompatible with versions of Java before 1.7, so I would recommend using `x.compareTo(y)` instead, which is compatible back to 1.2.

I would go with x.equals(y) because that's a consistent way to check equality for all classes. As far as performance goes, equals is actually more expensive because it ends up calling intValue().

EDIT: You should avoid autoboxing in most cases. It can get really confusing, especially if the author doesn't know what he is doing. You can try this code and you will be surprised by the result:

```
Integer a = 128;
Integer b = 128;
System.out.println(a==b);
```

"equals" is it. To be on the safe side, you should test for null-ness:

```
x == y || (x != null && x.equals(y))
```

The x == y tests for null == null, which IMHO should be true. The code will be inlined by the JIT if it is called often enough, so performance considerations should not matter. Of course, avoiding "Integer" in favor of plain "int" is the best way, if you can. Also, the null-check is needed to guarantee that the equality test is symmetric -- x.equals(y) should be the same as y.equals(x), but isn't if one of them is null.

• It's almost always better to just let a NullPointerException throw if a reference is null. Dec 10, 2009 at 17:39
• But you surely would not use "Integer" instead of "int" if there was no valid situation where the value is null? – mfx Dec 10, 2009 at 18:23

Compare two Integers and print them in ascending or descending order. All you have to do is implement the Comparator interface, override its compare method, and compare the values as below:

```
@Override
public int compare(Integer o1, Integer o2) {
    if (ascending) {
        return o1.intValue() - o2.intValue();
    } else {
        return o2.intValue() - o1.intValue();
    }
}
```

• note that this does not work if the subtraction overflows. Mar 6, 2019 at 1:25

The Integer class implements `Comparable<Integer>`, so you could try:

```
x.compareTo(y) == 0
```

Also, if rather than equality you are looking to order these integers, then `x.compareTo(y) < 0` will tell you if x is less than y, and `x.compareTo(y) > 0` will tell you if x is greater than y. Of course, it would be wise, in these examples, to ensure that x is non-null before making these calls.

I just encountered this in my code and it took me a while to figure it out. I was doing an intersection of two sorted lists and was only getting small numbers in my output. I could get it to work by using `(x - y == 0)` instead of `(x == y)` during comparison.
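Since Java 7 there is also java.util.Objects.equals, which folds in the null checks discussed above; a short sketch (the cache remark in the comments describes the default JVM behaviour, which implementations may extend):

```java
import java.util.Objects;

public class CompareIntegers {
    public static void main(String[] args) {
        Integer x = 1000;
        Integer y = 1000;
        System.out.println(x == y);                // typically false: distinct objects outside the -128..127 cache
        System.out.println(x.equals(y));           // true, but throws if x is null
        System.out.println(Objects.equals(x, y));  // true, and null-safe for both arguments
    }
}
```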
1,704
6,708
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.03125
3
CC-MAIN-2022-27
latest
en
0.913251
http://forum.allaboutcircuits.com/threads/capacitance-in-a-series-circuit.103548/
1,477,137,725,000,000,000
text/html
crawl-data/CC-MAIN-2016-44/segments/1476988718957.31/warc/CC-MAIN-20161020183838-00171-ip-10-171-6-4.ec2.internal.warc.gz
87,441,096
15,673
# Capacitance in a series circuit

Discussion in 'Homework Help' started by kevin monroe, Nov 14, 2014.

1. ### kevin monroe Thread Starter New Member Nov 7, 2014 11 0

My electronics book says "Because the current throughout a series circuit is the same at any point, capacitors connected in series have the same number of coulombs (C) of charge (Q)." Is this statement true without qualification? It seems like only capacitors of equal capacitance would have equal charge. Wouldn't the charge vary with the capacitance of the capacitor? Thanks for any help.

2. ### ISB123 Well-Known Member May 21, 2014 1,239 524

It is true; they will all store an equal charge even when the capacitances are different.

3. ### kevin monroe Thread Starter New Member Nov 7, 2014 11 0

Thanks! That's weird.

4. ### crutschow Expert Mar 14, 2008 12,515 3,064

Not really. Since Q = CV, the voltage across different-size capacitors in series will be different, so they each have the same charge. For example, if you apply 3V to a 1uF cap in series with a 2uF cap, then the 1uF will end up with 2V across it and the 2uF will have 1V across it, giving the same charge transferred to each capacitor.

5. ### ISB123 Well-Known Member May 21, 2014 1,239 524

6. ### shteii01 AAC Fanatic! Feb 19, 2010 3,294 482

If you want weird, try mechanical engineering.

7. ### joeyd999 AAC Fanatic! Jun 6, 2011 2,604 2,465

Or, similarly, astrology.

8. ### Papabravo Expert Feb 24, 2006 10,020 1,756

Financial Engineering and the Stochastic Calculus gets my vote for weird.

9. ### kevin monroe Thread Starter New Member Nov 7, 2014 11 0

Thanks a lot!

10. ### kevin monroe Thread Starter New Member Nov 7, 2014 11 0

Thanks for the 1 hour effort!!

11. ### WBahn Moderator Mar 31, 2012 17,446 4,698

It's not true without qualification, because it is possible for the capacitors to start out with different amounts of charge on them and this difference will be preserved. Also, for real capacitors (particularly larger-value ones and certain types of electrolytics) there will be a leakage current that will tend to discharge the caps at different rates. For this reason, putting capacitors in series in real circuits is generally discouraged. But on paper what your book says is correct in principle.

12. ### kevin monroe Thread Starter New Member Nov 7, 2014 11 0

thx

13. ### ISB123 Well-Known Member May 21, 2014 1,239 524

Capacitors in series will withstand more voltage.

14. ### WBahn Moderator Mar 31, 2012 17,446 4,698

And if you rely on this, sooner or later you will be in for a big surprise as, due to leakage mismatch, one of the caps will take on way more voltage than it is rated for.
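As a quick numerical check of crutschow's example (a 1 uF and a 2 uF capacitor in series across 3 V), the following Java sketch, my own addition, computes the equivalent capacitance, the common charge Q = CV, and the individual voltages. It assumes ideal capacitors that start out uncharged, matching WBahn's qualification above.

```java
public class SeriesCapacitors {
    public static void main(String[] args) {
        double c1 = 1e-6;   // 1 uF
        double c2 = 2e-6;   // 2 uF
        double v  = 3.0;    // applied voltage in volts

        // Equivalent capacitance of two capacitors in series.
        double cEq = 1.0 / (1.0 / c1 + 1.0 / c2);

        // Starting from uncharged caps, both carry the same charge Q = Ceq * V.
        double q = cEq * v;

        // The common charge divides the voltage inversely to capacitance.
        double v1 = q / c1;
        double v2 = q / c2;

        System.out.printf("Ceq = %.3e F, Q = %.3e C%n", cEq, q);
        System.out.printf("V1 = %.2f V, V2 = %.2f V%n", v1, v2);
    }
}
```

Running it reproduces the 2 V / 1 V split from the thread: the smaller capacitor takes the larger share of the applied voltage.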
731
2,578
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.34375
3
CC-MAIN-2016-44
longest
en
0.877731
https://gmatclub.com/forum/abel-can-complete-a-work-in-10-days-ben-in-12-days-and-carl-86272.html?fl=similar
1,511,108,378,000,000,000
text/html
crawl-data/CC-MAIN-2017-47/segments/1510934805687.20/warc/CC-MAIN-20171119153219-20171119173219-00189.warc.gz
611,550,619
55,149
Abel can complete a work in 10 days, Ben in 12 days and Carl

Author Message TAGS: Hide Tags

Manager Joined: 12 Oct 2009 Posts: 113 Kudos [?]: 70 [2], given: 3

Abel can complete a work in 10 days, Ben in 12 days and Carl [#permalink] Show Tags 03 Nov 2009, 12:21 2 KUDOS 17 This post was BOOKMARKED 00:00 Difficulty: 95% (hard) Question Stats: 58% (02:30) correct 42% (02:42) wrong based on 509 sessions HideShow timer Statistics

Abel can complete a work in 10 days, Ben in 12 days and Carla in 15 days. All of them began the work together, but Abel had to leave after 2 days and Ben 3 days before the completion of the work. How long did the work last?

A. 6
B. 7
C. 8
D. 9
E. 10

[Reveal] Spoiler: OA

Kudos [?]: 70 [2], given: 3

VP Joined: 05 Mar 2008 Posts: 1467 Kudos [?]: 307 [3], given: 31

Show Tags 03 Nov 2009, 12:27 3 KUDOS 2 This post was BOOKMARKED

Abel can complete a work in 10 days, Ben in 12 days and Carla in 15 days. All of them began the work together, but Abel had to leave after 2 days and Ben 3 days before the completion of the work. How long did the work last?

A. 6
B. 7
C. 8
D. 9
E. 10

B. 7

1/10 + 1/12 + 1/15 = 15/60 of the work done each day.
60/60 - 15/60 - 15/60 = 30/60 (Abel then leaves)
Ben and Carla working together finish 9/60 each day. Carla alone finishes 4/60 each day, so I figured the amount Carla finishes alone will be a multiple of 4.
30/60 - 9/60 = 21/60 (3 days total)
21/60 - 9/60 = 12/60 (4 days total; and coincidentally a multiple of 4), so assume Carla works alone after this
12/60 - 4/60 = 8/60 (5 days)
8/60 - 4/60 = 4/60 (6 days)
4/60 - 4/60 = 0 (7 days)

Kudos [?]: 307 [3], given: 31

Director Joined: 01 Apr 2008 Posts: 872 Kudos [?]: 860 [1], given: 18 Name: Ronak Amin Schools: IIM Lucknow (IPMX) - Class of 2014

Show Tags 03 Nov 2009, 22:27 1 KUDOS 1 This post was BOOKMARKED

(1/A + 1/B + 1/C)*2 + (1/B + 1/C)*(N-2-3) + (1/C)*3 = 1

Solve for N, we get N = 7

Kudos [?]: 860 [1], given: 18

Economist GMAT Tutor Instructor Joined: 01 Oct 2013 Posts: 69 Kudos [?]: 48 [3], given: 7

Re: Abel can complete a work in 10 days, Ben in 12 days and Carl [#permalink] Show Tags 19 Oct 2013, 09:35 3 KUDOS Expert's post 1 This post was BOOKMARKED

asterixmatrix wrote: Abel can complete a work in 10 days, Ben in 12 days and Carla in 15 days. All of them began the work together, but Abel had to leave after 2 days and Ben 3 days before the completion of the work. How long did the work last? A. 6 B. 7 C. 8 D. 9 E. 10

Not to sound like a broken record from some of my earlier posts, but, worst case, you could always plug in answer choices for this problem. Start with C and you get 2/10 of a job from Abel (which, notice, will always be the case), (8-3)/12 from Ben, and 8/15 from Carla. Convert to the LCM and add these up, and you get 12/60+25/60+32/60. Too much! Do the same with B. Abel stays at 2/10, Ben is now 4/12, and Carla is 7/15. So, 12/60+20/60+28/60 = 60/60.
This approach could take longer in some circumstances, but it's always a good default strategy when you have answer choices like these and no idea how to proceed.
_________________
Economist GMAT Tutor
http://econgm.at/econgmat
(866) 292-0660

Kudos [?]: 48 [3], given: 7

Math Expert Joined: 02 Sep 2009 Posts: 42248 Kudos [?]: 132696 [7], given: 12335

Re: Abel can complete a work in 10 days, Ben in 12 days and Carl [#permalink] Show Tags 03 Nov 2013, 12:37 7 KUDOS Expert's post 16 This post was BOOKMARKED

asterixmatrix wrote: Abel can complete a work in 10 days, Ben in 12 days and Carla in 15 days. All of them began the work together, but Abel had to leave after 2 days and Ben 3 days before the completion of the work. How long did the work last? A. 6 B. 7 C. 8 D. 9 E. 10

Responding to a pm.

First 2 days all three of them worked together, thus they did 2*(1/10 + 1/12 + 1/15) = 1/2 of the work.
Last 3 days only Carla worked, thus she did 3/15 = 1/5 of the work.

1 - 1/2 - 1/5 = 3/10 of the work was done by Ben and Carla together: (time)*(combined rate) = (job done) --> t*(1/12 + 1/15) = 3/10 --> t = 2 days.

So, we have that Ben and Carla worked together for 2 days. Total days = 2 + 3 + 2 = 7.

Hope it's clear.
_________________

Kudos [?]: 132696 [7], given: 12335

Director Joined: 03 Aug 2012 Posts: 899 Kudos [?]: 911 [3], given: 322 Concentration: General Management, General Management GMAT 1: 630 Q47 V29 GMAT 2: 680 Q50 V32 GPA: 3.7 WE: Information Technology (Investment Banking)

Re: Abel can complete a work in 10 days, Ben in 12 days and Carl [#permalink] Show Tags 18 Mar 2014, 07:30 3 KUDOS

Let the work be completed in time 't'. Then again "Rate * Time = Work":

Rate(A) = 1/10
Rate(B) = 1/12
Rate(C) = 1/15

Since A worked for 2 days, work done by A = 2/10
Since B worked until 3 days before the work was completed, work done by B = (t-3)/12
Since C worked for the full number of days, work done by C = t/15

Adding them gives the total work, which is 1 unit.

2/10 + (t-3)/12 + t/15 = 1

Hence t = 7
_________________
Rgds,
TGC!
_____________________________________________________________________
I Assisted You => KUDOS Please
_____________________________________________________________________________

Kudos [?]: 911 [3], given: 322

Current Student Joined: 06 Sep 2013 Posts: 1972 Kudos [?]: 741 [0], given: 355 Concentration: Finance

Re: Abel can complete a work in 10 days, Ben in 12 days and Carl [#permalink] Show Tags 08 May 2014, 14:49

asterixmatrix wrote: Abel can complete a work in 10 days, Ben in 12 days and Carla in 15 days.
All of them began the work together, but Abel had to leave after 2 days and Ben 3 days before the completion of the work. How long did the work last? A. 6 B. 7 C. 8 D. 9 E. 10

Abel, in the 2 days that he worked, completed 1/5 of the job; 4/5 remains. Then, if Ben had to leave 3 days before the completion, this means that Carla had to work alone for these 3 days, in which she completed 1/5 of the job. Together, Ben and Carla completed the remaining part of the job in (1/12 + 1/15)(t) = 3/5, so 3/20 (t) = 3/5 ---> t = 4. Therefore, these 4 days worked plus the 3 days that Carla had to work by herself add up to 7 days.

Kudos [?]: 741 [0], given: 355

Manager Joined: 23 May 2013 Posts: 189 Kudos [?]: 114 [0], given: 42 Location: United States Concentration: Technology, Healthcare Schools: Stanford '19 (M) GMAT 1: 760 Q49 V45 GPA: 3.5

Abel can complete a work in 10 days, Ben in 12 days and Carl [#permalink] Show Tags 06 Apr 2015, 07:53

Bunuel wrote: asterixmatrix wrote: Abel can complete a work in 10 days, Ben in 12 days and Carla in 15 days. All of them began the work together, but Abel had to leave after 2 days and Ben 3 days before the completion of the work. How long did the work last? A. 6 B. 7 C. 8 D. 9 E. 10

Responding to a pm. First 2 days all three of them worked together, thus they did 2*(1/10 + 1/12 + 1/15) = 1/2 of the work. Last 3 days only Carla worked, thus she did 3/15 = 1/5 of the work. 1 - 1/2 - 1/5 = 3/10 of the work was done by Ben and Carla: (time)*(combined rate)=(job done) --> t*(1/12 + 1/15) = 3/10 --> t = 2 days. So, we have that Ben and Carla worked together for 2 days. Total days = 2 + 3 + 2 = 7. Hope it's clear.

Bunuel's answer is the quickest way to think about the structure of the solution. Just want to add a small tip, because right off the bat you notice you're adding 3 fractions. Instead of doing the usual cross-multiply trick to add fractions, I would find the LCM immediately by sketching out a quick Venn diagram:

10 = 2*5
12 = 2*2*3
15 = 3*5

2 is common, 3 is common, and 5 is common; the leftover is just one 2. Therefore, the LCM is 2*3*5*2 = 60. Rewrite all of the fractions with a denominator of 60 and this problem can be solved in under 2 minutes.

All working together = 6/60 + 5/60 + 4/60 = 15/60 for 2 days = 30/60.
B and C working together = 5/60 + 4/60 = 9/60 for (x) days.
C working alone = 4/60 for 3 days = 12/60.

30 + 9x + 12 = 60; 42 + 9x = 60; 9x = 18; x = 2.

Therefore, total number of days = x + 2 + 3 = 7 days.

Kudos [?]: 114 [0], given: 42

Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 7736 Kudos [?]: 17797 [5], given: 235 Location: Pune, India

Re: Abel can complete a work in 10 days, Ben in 12 days and Carl [#permalink] Show Tags 06 Apr 2015, 21:07 5 KUDOS Expert's post 1 This post was BOOKMARKED

asterixmatrix wrote: Abel can complete a work in 10 days, Ben in 12 days and Carla in 15 days. All of them began the work together, but Abel had to leave after 2 days and Ben 3 days before the completion of the work. How long did the work last? A. 6 B. 7 C. 8 D. 9 E. 10

Another option is to convert it to units of work if you don't want to work with fractions. Say, the work involves 60 units (LCM of 10, 12 and 15). So Abel does 60/10 = 6 units a day, Ben does 60/12 = 5 units a day and Carla does 60/15 = 4 units a day. Together, they do 6+5+4 = 15 units a day. In 2 days, they complete 15*2 = 30 units and are left with 30 units. Then only Ben and Carla are working, doing 5+4 = 9 units a day. The last 3 days Carla works alone and does 4*3 = 12 units of the 30 units, so Ben and Carla together do 30 - 12 = 18 units. Hence Ben and Carla work together in the middle at the rate of 9 units per day for 18/9 = 2 days. The work lasts for 2 + 2 + 3 = 7 days.
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Get started with Veritas Prep GMAT On Demand for \$199
Veritas Prep Reviews

Kudos [?]: 17797 [5], given: 235

Director Joined: 07 Aug 2011 Posts: 579 Kudos [?]: 546 [1], given: 75 GMAT 1: 630 Q49 V27

Re: Abel can complete a work in 10 days, Ben in 12 days and Carl [#permalink] Show Tags 06 Apr 2015, 21:43 1 KUDOS

asterixmatrix wrote: Abel can complete a work in 10 days, Ben in 12 days and Carla in 15 days. All of them began the work together, but Abel had to leave after 2 days and Ben 3 days before the completion of the work. How long did the work last? A. 6 B. 7 C. 8 D. 9 E. 10

speed of A = 1.2 × (speed of B) = 1.5 × (speed of C).
combined speed of B and C = $$\frac{1}{B} + \frac{1}{C} = \frac{B+C}{BC} = \frac{3}{20}$$

work done together by B and C = $$1-\left(\frac{1}{5} + \frac{1}{6} + \frac{2}{15} + \frac{3}{15}\right) = \frac{9}{30}$$

so time taken by B and C together (in A's absence) = $$\frac{9/30}{3/20}$$ = 2 days
_________________
Thanks,
Lucky
_______________________________________________________
Kindly press the Kudos button to appreciate my post !!

Kudos [?]: 546 [1], given: 75

Senior Manager Joined: 10 Mar 2013 Posts: 268 Kudos [?]: 125 [1], given: 2405 GMAT 1: 620 Q44 V31 GMAT 2: 690 Q47 V37 GMAT 3: 610 Q47 V28 GMAT 4: 700 Q50 V34 GMAT 5: 700 Q49 V36 GMAT 6: 690 Q48 V35 GMAT 7: 750 Q49 V42 GMAT 8: 730 Q50 V39

Re: Abel can complete a work in 10 days, Ben in 12 days and Carl [#permalink] Show Tags 19 Nov 2015, 07:17 1 KUDOS

Amount of work Carla completed + Amount of work Abel completed + Amount of work Ben completed = Total amount of work completed

t/15 + 2/10 + (t-3)/12 = 1
t = 7

The hardest part of this problem (at least for me) was interpreting the amount of time Ben spent working.

Kudos [?]: 125 [1], given: 2405

Director Joined: 07 Dec 2014 Posts: 836 Kudos [?]: 265 [1], given: 15

Re: Abel can complete a work in 10 days, Ben in 12 days and Carl [#permalink] Show Tags 19 Nov 2015, 14:47 1 KUDOS

let d = total days
2(15/60) + (d-5)(9/60) + 3(4/60) = 1
d = 7

Kudos [?]: 265 [1], given: 15

Senior Manager Joined: 03 Apr 2013 Posts: 276 Kudos [?]: 44 [1], given: 854

Re: Abel can complete a work in 10 days, Ben in 12 days and Carl [#permalink] Show Tags 19 Jul 2016, 05:15 1 KUDOS

asterixmatrix wrote: Abel can complete a work in 10 days, Ben in 12 days and Carla in 15 days. All of them began the work together, but Abel had to leave after 2 days and Ben 3 days before the completion of the work. How long did the work last? A. 6 B. 7 C. 8 D. 9 E. 10

Let the complete work be 60 units. Per day, A does 60/10 = 6 units, B does 5 units, and C does 4 units.

The timeline of the work is:
ABC for the first 2 days. Work done = 2(6+5+4) = 30 units
BC for some days; let it be x. Work done = x(5+4) = 9x
C alone for the last three days. Work done = 3(4) = 12 units

So 30 + 9x + 12 = 60, x = 2 days, and total days = 2 + 2 + 3 = 7 days. (B)
_________________
Spread some love..Like = +1 Kudos

Kudos [?]: 44 [1], given: 854

Intern Joined: 29 Dec 2015 Posts: 10 Kudos [?]: 1 [1], given: 0

Re: Abel can complete a work in 10 days, Ben in 12 days and Carl [#permalink] Show Tags 12 Aug 2016, 13:09 1 KUDOS

An easy way to solve the problem: assume T is the time taken for all three, working as described, to complete the work.

GENERAL CONCEPT: the share of the work a person completes is (time worked)/(time to complete the job alone). Working the full time, Abel's share would be T/10, Ben's share T/12, and Carla's share T/15.

Given in the problem: because Abel leaves 2 days after the start of the work, his share is 2/10. Ben leaves 3 days before the completion of the work, so his share is (T-3)/12. Carla stays until completion, so her share is T/15. Together they complete 1 task, so the equation is

2/10 + (T-3)/12 + T/15 = 1

Hence T = 7

Kudos [?]: 1 [1], given: 0

Director Joined: 17 Dec 2012 Posts: 623 Kudos [?]: 534 [1], given: 16 Location: India

Abel can complete a work in 10 days, Ben in 12 days and Carl [#permalink] Show Tags 06 Jun 2017, 19:14 1 KUDOS Expert's post

asterixmatrix wrote: Abel can complete a work in 10 days, Ben in 12 days and Carla in 15 days. All of them began the work together, but Abel had to leave after 2 days and Ben 3 days before the completion of the work. How long did the work last? A. 6 B. 7 C. 8 D. 9 E. 10

1.
(time worked by Abel)/(time taken by Abel working alone) + (time worked by Ben)/(time taken by Ben working alone) + (time worked by Carla)/(time taken by Carla working alone) = 1

2. 2/10 + (x-3)/12 + x/15 = 1

So x = 7.
_________________
Srinivasan Vaidyaraman
Sravna
http://www.sravnatestprep.com/regularcourse.php
Standardized Approaches

Kudos [?]: 534 [1], given: 16

Intern Joined: 11 Apr 2017 Posts: 35 Kudos [?]: 7 [0], given: 182 Schools: Kelley '20

Re: Abel can complete a work in 10 days, Ben in 12 days and Carl [#permalink] Show Tags 15 Oct 2017, 08:47

Let the work be completed in time 't'. Then again "Rate * Time = Work":

Rate(A) = 1/10
Rate(B) = 1/12
Rate(C) = 1/15

Since A worked for 2 days, work done by A = 2/10
Since B worked until 3 days before the work was completed, work done by B = (t-3)/12
Since C worked for the full number of days, work done by C = t/15

Adding them gives the total work, which is 1 unit.

2/10 + (t-3)/12 + t/15 = 1

Hence t = 7

Kudos [?]: 7 [0], given: 182

Re: Abel can complete a work in 10 days, Ben in 12 days and Carl [#permalink] 15 Oct 2017, 08:47
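The unit-based solutions above (60 units total; Abel 6, Ben 5, Carla 4 units per day) lend themselves to a direct numerical check. The following Java sketch is my own addition; it evaluates gracie's day-count equation, in whole units, for each answer choice:

```java
public class SixtyUnitCheck {
    public static void main(String[] args) {
        // Total job = 60 units (LCM of 10, 12, 15).
        // First 2 days: all three work (6 + 5 + 4 = 15 units/day).
        // Last 3 days: Carla alone (4 units/day).
        // The (total - 5) days in between: Ben + Carla (9 units/day).
        for (int total = 6; total <= 10; total++) {
            int units = 2 * 15 + (total - 5) * 9 + 3 * 4;
            System.out.println(total + " days -> " + units + " units"
                    + (units == 60 ? "  <- exactly completes the job" : ""));
        }
    }
}
```

Only a 7-day schedule lands exactly on 60 units, matching answer B.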
5,281
15,190
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.3125
4
CC-MAIN-2017-47
latest
en
0.913482
https://franklin.dyer.me/entries/2017-1-28.html
1,545,036,647,000,000,000
text/html
crawl-data/CC-MAIN-2018-51/segments/1544376828448.76/warc/CC-MAIN-20181217065106-20181217091106-00023.warc.gz
608,152,079
7,365
## How to Graph a Donut

2017 Jan 28

What is the equation of the graph of a torus in 3D coordinate space?

Suppose one wants to graph a torus with major radius $$R$$ (the distance from the center of the torus to the center of one of the circles forming its body) and minor radius $$r$$ (the radius of one of the small circles forming the body of the torus) in three-dimensional coordinate space. If one vertically sliced it in half down the middle, it would look something like this:

If one sliced it horizontally down the middle, it would look like this:

Each horizontal plane cross section consists of two circles (except for those made by the tangent planes at the top and bottom), one inside of the other. Let the difference between their radii at height $$z$$ be the width $$w$$. The graph of a circle has the equation $\sqrt{x^2+y^2}=a$ where $$a$$ is the radius of the circle. However, we want two circles straddling a middle circle of radius $$R$$ with a radius difference of $$w$$, so the new equation is $\sqrt{x^2+y^2}=R\pm \frac{1}{2}w$ or $\mid \sqrt{x^2+y^2}-R\mid=\frac{1}{2}w$

Now I need to express $$w$$ as a function of $$z$$, the height. Again, we observe a vertical cross-section of the torus: From the picture, we can see that $(\frac{1}{2}w)^2+z^2=r^2$ by the Pythagorean theorem. By solving for $$w$$, we obtain $w=2\sqrt{r^2-z^2}$

Now we can plug this in for $$w$$ in our formula for a horizontal donut cross-section: $\mid \sqrt{x^2+y^2}-R\mid=\frac{1}{2}w$ $\mid \sqrt{x^2+y^2}-R\mid=\sqrt{r^2-z^2}$ $(\sqrt{x^2+y^2}-R)^2=r^2-z^2$

And this is our formula for the graph of a torus. Essentially what we did was figure out what one "layer" of the donut looks like at some height $$z$$ and then stack up infinitely many layers for each value of $$z$$ to make a smooth donut. Here is our hard-earned donut:

Yum!
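As a sanity check on the final formula, points taken from the standard parametric form of a torus should satisfy the derived implicit equation. The Java sketch below is my own addition; the radii R = 3 and r = 1 are arbitrary choices, not values from the post.

```java
public class TorusEquationCheck {
    public static void main(String[] args) {
        double R = 3.0, r = 1.0; // arbitrary major and minor radii, R > r

        double maxError = 0.0;
        for (double theta = 0; theta < 2 * Math.PI; theta += 0.1) {
            for (double phi = 0; phi < 2 * Math.PI; phi += 0.1) {
                // Standard parametric point on the torus surface.
                double x = (R + r * Math.cos(theta)) * Math.cos(phi);
                double y = (R + r * Math.cos(theta)) * Math.sin(phi);
                double z = r * Math.sin(theta);

                // Implicit equation derived above: (sqrt(x^2+y^2) - R)^2 = r^2 - z^2.
                double lhs = Math.pow(Math.sqrt(x * x + y * y) - R, 2);
                double rhs = r * r - z * z;
                maxError = Math.max(maxError, Math.abs(lhs - rhs));
            }
        }
        System.out.println("max |lhs - rhs| over sampled points: " + maxError);
    }
}
```

The printed maximum error is on the order of floating-point round-off, as expected: on the surface, sqrt(x^2+y^2) reduces to R + r*cos(theta), so both sides equal r^2*cos^2(theta).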
540
1,839
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.625
5
CC-MAIN-2018-51
latest
en
0.889814
http://www.64funsolutions.ca/puzzle/puzzle-week-283
1,603,293,829,000,000,000
text/html
crawl-data/CC-MAIN-2020-45/segments/1603107876768.45/warc/CC-MAIN-20201021151342-20201021181342-00342.warc.gz
129,271,944
10,046
# Puzzle of the week #283 Level: 4-Rook Chess Diagram: `[Event "Puzzle #283"][Date "2015.10.12"][Result "0-1"]1. e4 e6 2. d4 d5 3. exd5 exd5 4. Bd3 c5 5. Nf3 Nc6 6. Qe2+ Be7 7. dxc5 Nf6 8. h3 O-O 9. O-O Bxc5 10. c3 Re8 11. Qc2 Qd6 12. Nbd2 {Black to move}` a) Name the opening and variation b) What should black do next? c) Write the best line you can think of Total available points for this puzzle is 25. The answers will be published next time together with puzzle #284. Puzzle #282 solution: Game Leosson - Sundsbo, Reykjavik Open 2013. Terry's answers: b) I believe that Black should try to get rid of the e5 pawn for it sits in the center c) Both Cody and Terry showed different ways to get rid of the e5-pawn. In the game black played differently and still won. `[Event "Puzzle #282"][Date "2015.10.05"][Result "0-1"][SetUp "1"][FEN "3b4/1p1k1ppp/p3p3/3pP3/1Pr2PP1/P3B3/1R2K2P/8 b - - 0 27"]27... g5 ( {In the game} 27... Be7 28. a4 {Black won the game anyway}) 28. fxg5 Bc7 29. Rd2 Rxg4 30. Kf3 Re4` Correct solutions: Cody, Terry - 20 points Coco - 15 points Andrew, Benjamin, Bradley - 8 points Deryk, Yakov - 7 points Jalen - 6 points Aaron - 3 points Standings: Cody - 55 points Coco - 50 points Terry - 35 points Jalen - 26 points Yakov, Deryk - 22 points Andrew - 18 points Benjamin - 16 points Aaron - 10 points Comment: French defence exchange
521
1,370
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.53125
3
CC-MAIN-2020-45
latest
en
0.70692
https://www.studypool.com/discuss/449848/algebra-questions-need-help-63?free
1,481,106,956,000,000,000
text/html
crawl-data/CC-MAIN-2016-50/segments/1480698542060.60/warc/CC-MAIN-20161202170902-00341-ip-10-31-129-80.ec2.internal.warc.gz
1,018,530,382
14,151
##### Algebra questions need help

Algebra

algebra 1.docx  I will put up the third box next.

Mar 28th, 2015

1. Inconsistent, no solution. y1 = (-2/3)x - 4; 3y2 = -2x - 36, so y2 = (-2/3)x - 12. As k1 = k2 = -2/3, y1 is parallel to y2; the two lines never intersect, thus there is no solution and the system is inconsistent.

2. Consistent independent, a unique solution (-2, 2). As can be seen from the graph, the two lines intersect at one unique point, thus they are consistent independent and have a unique solution.

3. When the two lines have the same equation, they are called consistent dependent, and the system has infinitely many solutions.

Mar 28th, 2015
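The three outcomes in the answers above (parallel, intersecting, coincident) can be told apart mechanically from the coefficients of a1x + b1y = c1 and a2x + b2y = c2. The Java sketch below is my own illustration of that classification, not part of the original assignment; the 1e-12 tolerance is an arbitrary choice for floating-point comparison.

```java
public class LinearSystemClassifier {
    // Classifies the system a1*x + b1*y = c1, a2*x + b2*y = c2.
    static String classify(double a1, double b1, double c1,
                           double a2, double b2, double c2) {
        // Nonzero determinant means the slopes differ: one intersection point.
        double det = a1 * b2 - a2 * b1;
        if (Math.abs(det) > 1e-12) {
            return "consistent independent (one intersection point)";
        }
        // Slopes are equal; coincident only if one equation is a multiple of the other.
        boolean coincident = Math.abs(a1 * c2 - a2 * c1) < 1e-12
                          && Math.abs(b1 * c2 - b2 * c1) < 1e-12;
        return coincident
                ? "consistent dependent (same line, infinitely many solutions)"
                : "inconsistent (parallel lines, no solution)";
    }

    public static void main(String[] args) {
        // y = (-2/3)x - 4  ->  (2/3)x + y = -4
        // y = (-2/3)x - 12 ->  (2/3)x + y = -12
        System.out.println(classify(2.0 / 3, 1, -4, 2.0 / 3, 1, -12));
    }
}
```

For the first system in the tutor's answer, the determinant is zero but the constant terms are not proportional, so the sketch prints the inconsistent (parallel-lines) case.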
214
723
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.171875
3
CC-MAIN-2016-50
longest
en
0.901178
https://worldwidescience.org/topicpages/h/hierarchical+linear+regression.html
1,532,307,460,000,000,000
text/html
crawl-data/CC-MAIN-2018-30/segments/1531676594675.66/warc/CC-MAIN-20180722233159-20180723013159-00253.warc.gz
814,794,517
255,877
#### Sample records for hierarchical linear regression 1. Linear regression CERN Document Server Olive, David J 2017-01-01 This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans... 2. Applied linear regression CERN Document Server Weisberg, Sanford 2013-01-01 Praise for the Third Edition ""...this is an excellent book which could easily be used as a course text...""-International Statistical Institute The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus 3. Recursive Algorithm For Linear Regression Science.gov (United States) Varanasi, S. V. 1988-01-01 Order of model determined easily. Linear-regression algorithhm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations, facilitates search for minimum order of linear-regression model fitting set of data satisfactory. 4. Hierarchical regression analysis in structural Equation Modeling NARCIS (Netherlands) de Jong, P.F. 1999-01-01 In a hierarchical or fixed-order regression analysis, the independent variables are entered into the regression equation in a prespecified order. Such an analysis is often performed when the extra amount of variance accounted for in a dependent variable by a specific independent variable is the main 5. Multiple linear regression analysis Science.gov (United States) Edwards, T. R. 1980-01-01 Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200. 6. 
Linear Regression Analysis CERN Document Server Seber, George A F 2012-01-01 Concise, mathematically clear, and comprehensive treatment of the subject.* Expanded coverage of diagnostics and methods of model fitting.* Requires no specialized knowledge beyond a good grasp of matrix algebra and some acquaintance with straight-line regression and simple analysis of variance models.* More than 200 problems throughout the book plus outline solutions for the exercises.* This revision has been extensively class-tested. 7. Advanced statistics: linear regression, part I: simple linear regression. Science.gov (United States) Marill, Keith A 2004-01-01 Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables. 8. Mental and physical health correlates among family caregivers of patients with newly-diagnosed incurable cancer: a hierarchical linear regression analysis. Science.gov (United States) Shaffer, Kelly M; Jacobs, Jamie M; Nipp, Ryan D; Carr, Alaina; Jackson, Vicki A; Park, Elyse R; Pirl, William F; El-Jawahri, Areej; Gallagher, Emily R; Greer, Joseph A; Temel, Jennifer S 2017-03-01 Caregiver, relational, and patient factors have been associated with the health of family members and friends providing care to patients with early-stage cancer. Little research has examined whether findings extend to family caregivers of patients with incurable cancer, who experience unique and substantial caregiving burdens. We examined correlates of mental and physical health among caregivers of patients with newly-diagnosed incurable lung or non-colorectal gastrointestinal cancer. At baseline for a trial of early palliative care, caregivers of participating patients (N = 275) reported their mental and physical health (Medical Outcome Survey-Short Form-36); patients reported their quality of life (Functional Assessment of Cancer Therapy-General). Analyses used hierarchical linear regression with two-tailed significance tests. Caregivers' mental health was worse than the U.S. national population (M = 44.31, p caregiver, relational, and patient factors simultaneously revealed that younger (B = 0.31, p = .001), spousal caregivers (B = -8.70, p = .003), who cared for patients reporting low emotional well-being (B = 0.51, p = .01) reported worse mental health; older (B = -0.17, p = .01) caregivers with low educational attainment (B = 4.36, p family caregivers of patients with incurable cancer, caregiver demographics, relational factors, and patient-specific factors were all related to caregiver mental health, while caregiver demographics were primarily associated with caregiver physical health. 
These findings help identify characteristics of family caregivers at highest risk of poor mental and physical health who may benefit from greater supportive care. 9. Linear regression in astronomy. II Science.gov (United States) Feigelson, Eric D.; Babu, Gutti J. 1992-01-01 A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations. 10. Advanced statistics: linear regression, part II: multiple linear regression. Science.gov (United States) Marill, Keith A 2004-01-01 The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed. 11. Correlation and simple linear regression. Science.gov (United States) Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G 2003-06-01 In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. 
In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies. 12. Linear regression in astronomy. I Science.gov (United States) Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh 1990-01-01 Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression. 13. Multicollinearity in hierarchical linear models. Science.gov (United States) Yu, Han; Jiang, Shanhe; Land, Kenneth C 2015-09-01 This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model. Copyright © 2015 Elsevier Inc. All rights reserved. 14. Entrepreneurial intention modeling using hierarchical multiple regression Directory of Open Access Journals (Sweden) Marina Jeger 2014-12-01 Full Text Available The goal of this study is to identify the contribution of effectuation dimensions to the predictive power of the entrepreneurial intention model over and above that which can be accounted for by other predictors selected and confirmed in previous studies. As is often the case in social and behavioral studies, some variables are likely to be highly correlated with each other. 
Therefore, the relative amount of variance in the criterion variable explained by each of the predictors depends on several factors such as the order of variable entry and sample specifics. The results show the modest predictive power of two dimensions of effectuation prior to the introduction of the theory of planned behavior elements. The article highlights the main advantages of applying hierarchical regression in social sciences as well as in the specific context of entrepreneurial intention formation, and addresses some of the potential pitfalls that this type of analysis entails. 15. Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis Science.gov (United States) Luo, Wen; Azen, Razia 2013-01-01 Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in… 16. Quantum algorithm for linear regression Science.gov (United States) Wang, Guoming 2017-07-01 We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Differently from previous algorithms which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in the classical form. So by running it once, one completely determines the fitted model and then can use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly( log2(N ) ,d ,κ ,1 /ɛ ) , where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ɛ is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place. 17. Regularized Label Relaxation Linear Regression. Science.gov (United States) Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu 2018-04-01 Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. 
The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, which are, respectively, based on -norm and -norm loss functions are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of the classification accuracy and running time. 18. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research Science.gov (United States) Ker, H. W. 2014-01-01 Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, compare the data analytic results from three regression… 19. Stepwise versus Hierarchical Regression: Pros and Cons Science.gov (United States) Lewis, Mitzi 2007-01-01 Multiple regression is commonly used in social and behavioral data analysis. In multiple regression contexts, researchers are very often interested in determining the "best" predictors in the analysis. This focus may stem from a need to identify those predictors that are supportive of theory. Alternatively, the researcher may simply be interested… 20. Aspects of robust linear regression NARCIS (Netherlands) Davies, P.L. 1993-01-01 Section 1 of the paper contains a general discussion of robustness. In Section 2 the influence function of the Hampel-Rousseeuw least median of squares estimator is derived. Linearly invariant weak metrics are constructed in Section 3. It is shown in Section 4 that \$S\$-estimators satisfy an exact 1. Hierarchical Neural Regression Models for Customer Churn Prediction Directory of Open Access Journals (Sweden) 2013-01-01 Full Text Available As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain in competition with competitors. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models by combining four different data mining techniques for churn prediction, which are backpropagation artificial neural networks (ANN, self-organizing maps (SOM, alpha-cut fuzzy c-means (α-FCM, and Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of the models aims to cluster data in two churner and nonchurner groups and also filter out unrepresentative data or outliers. Then, the clustered data as the outputs are used to assign customers to churner and nonchurner groups by the second technique. Finally, the correctly classified data are used to create Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Types I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model significantly performs better than the two other hierarchical models. 2. [From clinical judgment to linear regression model. 
Science.gov (United States) Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O 2013-01-01 When we think about mathematical models, such as the linear regression model, we think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated in another way, the regression is used to predict a measure based on the knowledge of at least one other variable. Linear regression has as its first objective to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant and is equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs in "Y" when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R^2) indicates the importance of the independent variables in the outcome. 3. Determination of regression laws: Linear and nonlinear International Nuclear Information System (INIS) Onishchenko, A.M. 1994-01-01 A detailed mathematical determination of regression laws is presented in the article. Particular emphasis is placed on determining the laws of X_j on X_l to account for source nuclei decay and detector errors in nuclear physics instrumentation. Both linear and nonlinear relations are presented. Linearization of 19 functions is tabulated, including graph, relation, variable substitution, obtained linear function, and remarks. 6 refs., 1 tab 4. Discriminative Elastic-Net Regularized Linear Regression. Science.gov (United States) Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen 2017-03-01 In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to its weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations to make the final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification.
Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods can be available at http://www.yongxu.org/lunwen.html. 5. Piecewise linear regression splines with hyperbolic covariates International Nuclear Information System (INIS) Cologne, John B.; Sposto, Richard 1992-09-01 Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response model of Griffiths and Miller and Watts and Bacon to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author) 6. Removing Malmquist bias from linear regressions Science.gov (United States) Verter, Frances 1993-01-01 Malmquist bias is present in all astronomical surveys where sources are observed above an apparent brightness threshold. Those sources which can be detected at progressively larger distances are progressively more limited to the intrinsically luminous portion of the true distribution. This bias does not distort any of the measurements, but distorts the sample composition. We have developed the first treatment to correct for Malmquist bias in linear regressions of astronomical data. A demonstration of the corrected linear regression that is computed in four steps is presented. 7. Scale of association: hierarchical linear models and the measurement of ecological systems Science.gov (United States) Sean M. McMahon; Jeffrey M. Diez 2007-01-01 A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured... 8. Finite Algorithms for Robust Linear Regression DEFF Research Database (Denmark) 1990-01-01 The Huber M-estimator for robust linear regression is analyzed. Newton type methods for solution of the problem are defined and analyzed, and finite convergence is proved. Numerical experiments with a large number of test problems demonstrate efficiency and indicate that this kind of approach may... 9. Multiple Linear Regression: A Realistic Reflector. Science.gov (United States) Nutt, A. T.; Batsell, R. R. Examples of the use of Multiple Linear Regression (MLR) techniques are presented. 
This is done to show how MLR aids data processing and decision-making by providing the decision-maker with freedom in phrasing questions and by accurately reflecting the data on hand. A brief overview of the rationale underlying MLR is given, some basic definitions… 10. Controlling attribute effect in linear regression KAUST Repository Calders, Toon; Karim, Asim A.; Kamiran, Faisal; Ali, Wasif Mohammad; Zhang, Xiangliang 2013-01-01 In data mining we often have to learn from biased data, because, for instance, data comes from different batches or there was a gender or racial bias in the collection of social data. In some applications it may be necessary to explicitly control this bias in the models we learn from the data. This paper is the first to study learning linear regression models under constraints that control the biasing effect of a given attribute such as gender or batch number. We show how propensity modeling can be used for factoring out the part of the bias that can be justified by externally provided explanatory attributes. Then we analytically derive linear models that minimize squared error while controlling the bias by imposing constraints on the mean outcome or residuals of the models. Experiments with discrimination-aware crime prediction and batch effect normalization tasks show that the proposed techniques are successful in controlling attribute effects in linear regression models. © 2013 IEEE. 11. Controlling attribute effect in linear regression KAUST Repository Calders, Toon 2013-12-01 In data mining we often have to learn from biased data, because, for instance, data comes from different batches or there was a gender or racial bias in the collection of social data. In some applications it may be necessary to explicitly control this bias in the models we learn from the data. This paper is the first to study learning linear regression models under constraints that control the biasing effect of a given attribute such as gender or batch number. We show how propensity modeling can be used for factoring out the part of the bias that can be justified by externally provided explanatory attributes. Then we analytically derive linear models that minimize squared error while controlling the bias by imposing constraints on the mean outcome or residuals of the models. Experiments with discrimination-aware crime prediction and batch effect normalization tasks show that the proposed techniques are successful in controlling attribute effects in linear regression models. © 2013 IEEE. 12. Post-processing through linear regression Science.gov (United States) van Schaeybroeck, B.; Vannitsem, S. 2011-03-01 Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. 
These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors play an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread should be preferred. 13. Post-processing through linear regression Directory of Open Access Journals (Sweden) B. Van Schaeybroeck 2011-03-01 Full Text Available Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors play an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread should be preferred. 14. Linear regression and the normality assumption. Science.gov (United States) Schmidt, Amand F; Finan, Chris 2017-12-16 Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings, such transformations are often unnecessary, and worse, may bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage; i.e., the number of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10) violations of this normality assumption often do not noticeably impact results.
Contrary to this, assumptions on the parametric model, absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large-sample settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results, and, worse, may bias estimates due to the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved. 15. Neutrosophic Correlation and Simple Linear Regression Directory of Open Access Journals (Sweden) A. A. Salama 2014-09-01 Full Text Available Since the world is full of indeterminacy, the neutrosophics found their place in contemporary research. The fundamental concepts of the neutrosophic set were introduced by Smarandache. Recently, Salama et al. introduced the concept of the correlation coefficient of neutrosophic data. In this paper, we introduce and study the concepts of correlation and correlation coefficient of neutrosophic data in probability spaces and study some of their properties. Also, we introduce and study the neutrosophic simple linear regression model. Possible applications to data processing are touched upon. 16. Computational Tools for Probing Interactions in Multiple Linear Regression, Multilevel Modeling, and Latent Curve Analysis Science.gov (United States) Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J. 2006-01-01 Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the… 17. Integrating Linear Programming and Analytical Hierarchical ... African Journals Online (AJOL) The study area is about 28,000 ha of the Keleibar-Chai Watershed, located in eastern Azerbaijan, Iran. Socio-economic information was collected through a two-stage survey of 19 villages, including 300 samples. Thematic maps have also summarized ecological factors, including physical and economic data. A comprehensive Linear ... 18. Linear regression crash prediction models : issues and proposed solutions. Science.gov (United States) 2010-05-01 The paper develops a linear regression model approach that can be applied to crash data to predict vehicle crashes. The proposed approach involves novel data aggregation to satisfy linear regression assumptions; namely error structure normality ... 19. Analyzing thresholds and efficiency with hierarchical Bayesian logistic regression. Science.gov (United States) Houpt, Joseph W; Bittner, Jennifer L 2018-05-10 Ideal observer analysis is a fundamental tool used widely in vision science for analyzing the efficiency with which a cognitive or perceptual system uses available information. The performance of an ideal observer provides a formal measure of the amount of information in a given experiment. The ratio of human to ideal performance is then used to compute efficiency, a construct that can be directly compared across experimental conditions while controlling for the differences due to the stimuli and/or task specific demands. In previous research using ideal observer analysis, the effects of varying experimental conditions on efficiency have been tested using ANOVAs and pairwise comparisons.
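A stripped-down version of the efficiency computation that entry 19 builds on: fit a logistic psychometric function by maximum likelihood and take a human-to-ideal threshold ratio. This is a sketch with hypothetical numbers, not the paper's hierarchical Bayesian model; the squared threshold ratio used for efficiency here is one common convention, assumed for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Stimulus intensities and hypothetical human accuracies (20 trials each).
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
k_human = np.array([11, 14, 17, 19, 20])     # correct responses out of 20
n_trials = 20

def neg_log_lik(params, k):
    a, b = params
    # Logistic psychometric function in log-intensity.
    p = 1.0 / (1.0 + np.exp(-(a + b * np.log(x))))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n_trials - k) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, 1.0], args=(k_human,))
a, b = fit.x
thresh_human = np.exp(-a / b)                # intensity at the 50% point

thresh_ideal = 0.8                           # hypothetical ideal-observer threshold
efficiency = (thresh_ideal / thresh_human) ** 2   # assumed convention
print(thresh_human, efficiency)
```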
In this work, we present a model that combines Bayesian estimates of psychometric functions with hierarchical logistic regression for inference about both unadjusted human performance metrics and efficiencies. Our approach improves upon the existing methods by constraining the statistical analysis using a standard model connecting stimulus intensity to human observer accuracy and by accounting for variability in the estimates of human and ideal observer performance scores. This allows for both individual and group level inferences. Copyright © 2018 Elsevier Ltd. All rights reserved. 20. Suppression Situations in Multiple Linear Regression Science.gov (United States) Shieh, Gwowen 2006-01-01 This article proposes alternative expressions for the two most prevalent definitions of suppression without resorting to standardized regression modeling. The formulation provides a simple basis for the examination of their relationship. For the two-predictor regression, the author demonstrates that the previous results in the literature are… 1. Two Paradoxes in Linear Regression Analysis Science.gov (United States) FENG, Ge; PENG, Jing; TU, Dongke; ZHENG, Julia Z.; FENG, Changyong 2016-01-01 Summary Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection. PMID:28638214 2. Fuzzy multiple linear regression: A computational approach Science.gov (United States) Juang, C. H.; Huang, X. H.; Fleming, J. W. 1992-01-01 This paper presents a new computational approach for performing fuzzy regression. In contrast to Bardossy's approach, the new approach, while dealing with fuzzy variables, closely follows the conventional regression technique. In this approach, treatment of fuzzy input is more 'computational' than 'symbolic.' The following sections first outline the formulation of the new approach, then deal with the implementation and computational scheme, and this is followed by examples to illustrate the new procedure. 3. Linear Regression Based Real-Time Filtering Directory of Open Access Journals (Sweden) Misel Batmend 2013-01-01 Full Text Available This paper introduces a real-time filtering method based on a linear least-squares fitted line. The method can be used when the filtered signal is linear. This constraint narrows the band of potential applications. Its advantage over the Kalman filter is that it is computationally less expensive. The paper further deals with the application of the introduced method to filtering data used to evaluate the position of engraved material with respect to the engraving machine. The filter was implemented in the CNC engraving machine control system. Experiments showing its performance are included. 4. Augmenting Data with Published Results in Bayesian Linear Regression Science.gov (United States) de Leeuw, Christiaan; Klugkist, Irene 2012-01-01 In most research, linear regression analyses are performed without taking into account published results (i.e., reported summary statistics) of similar previous studies.
Although the prior density in Bayesian linear regression could accommodate such prior knowledge, formal models for doing so are absent from the literature. The goal of this… 5. A test for the parameters of multiple linear regression models ... African Journals Online (AJOL) A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust to violations of the classical F-test's assumptions of homogeneity of variances and absence of serial correlation. Under certain null and ... 6. Who Will Win?: Predicting the Presidential Election Using Linear Regression Science.gov (United States) Lamb, John H. 2007-01-01 This article outlines a linear regression activity that engages learners, uses technology, and fosters cooperation. Students generated least-squares linear regression equations using TI-83 Plus[TM] graphing calculators, Microsoft[C] Excel, and paper-and-pencil calculations using derived normal equations to predict the 2004 presidential election.… 7. Use of probabilistic weights to enhance linear regression myoelectric control. Science.gov (United States) Smith, Lauren H; Kuiken, Todd A; Hargrove, Levi J 2015-12-01 Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts' law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks (p < 0.05) by preventing extraneous movement at additional DOFs. Similar results were seen in experiments with two transradial amputees. Though goodness-of-fit evaluations suggested that the EMG feature distributions showed some deviations from the Gaussian, equal-covariance assumptions used in this experiment, the assumptions were sufficiently met to provide improved performance compared to linear regression control. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs. 8. Hierarchical Matching and Regression with Application to Photometric Redshift Estimation Science.gov (United States) Murtagh, Fionn 2017-06-01 This work emphasizes that heterogeneity, diversity, discontinuity, and discreteness in data are to be exploited in classification and regression problems. A global a priori model may not be desirable. For data analytics in cosmology, this is motivated by the variety of cosmological objects such as elliptical, spiral, active, and merging galaxies at a wide range of redshifts. Our aim is matching and similarity-based analytics that takes account of discrete relationships in the data. The information structure of the data is represented by a hierarchy or tree where the branch structure, rather than just the proximity, is important. The representation is related to p-adic number theory.
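The weighting scheme in entry 7 above amounts to multiplying a linear decoder's output by the posterior probability that the user intends to move; a toy one-DOF sketch with hypothetical Gaussian class models and EMG feature values (all numbers invented for illustration):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical 2-D EMG feature means for three classes at one DOF:
# no movement, flexion, extension. Equal covariance is assumed.
means = {"rest": [0.0, 0.0], "flex": [2.0, 0.5], "ext": [-2.0, 0.5]}
cov = np.eye(2) * 0.5
priors = {c: 1.0 / 3.0 for c in means}

w = np.array([0.9, 0.1])      # hypothetical linear regression weights
feat = np.array([1.6, 0.4])   # incoming feature vector

# Posterior probability that the user intends any movement (flex or ext).
lik = {c: multivariate_normal.pdf(feat, means[c], cov) * priors[c] for c in means}
total = sum(lik.values())
p_move = (lik["flex"] + lik["ext"]) / total

raw = w @ feat                # plain linear regression output
weighted = p_move * raw       # probability-weighted output
print(raw, weighted)
```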
The clustering or binning of the data values, related to the precision of the measurements, has a central role in this methodology. If used for regression, our approach is a method of cluster-wise regression, generalizing nearest neighbour regression. Both to exemplify this analytics approach, and to demonstrate computational benefits, we address the well-known photometric redshift or `photo-z' problem, seeking to match Sloan Digital Sky Survey (SDSS) spectroscopic and photometric redshifts. 9. Distributed Monitoring of the R2 Statistic for Linear Regression Data.gov (United States) National Aeronautics and Space Administration — The problem of monitoring a multivariate linear regression model is relevant in studying the evolving relationship between a set of input variables (features) and... 10. Identification of Influential Points in a Linear Regression Model Directory of Open Access Journals (Sweden) Jan Grosz 2011-03-01 Full Text Available The article deals with the detection and identification of influential points in the linear regression model. Three methods of detection of outliers and leverage points are described. These procedures can also be used for one-sample (independent) datasets. This paper briefly describes theoretical aspects of several robust methods as well. Robust statistics is a powerful tool to increase the reliability and accuracy of statistical modelling and data analysis. A simulation model of the simple linear regression is presented. 11. Learning a Nonnegative Sparse Graph for Linear Regression. Science.gov (United States) Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung 2015-09-01 Previous graph-based semisupervised learning (G-SSL) methods have the following drawbacks: 1) they usually predefine the graph structure and then use it to perform label prediction, which cannot guarantee an overall optimum and 2) they only focus on the label prediction or the graph structure construction but are not competent in handling new samples. To this end, a novel nonnegative sparse graph (NNSG) learning method was first proposed. Then, both the label prediction and projection learning were integrated into linear regression. Finally, the linear regression and graph structure learning were unified within the same framework to overcome these two drawbacks. Therefore, a novel method, named learning a NNSG for linear regression was presented, in which the linear regression and graph learning were simultaneously performed to guarantee an overall optimum. In the learning process, the label information can be accurately propagated via the graph structure so that the linear regression can learn a discriminative projection to better fit sample labels and accurately classify new samples. An effective algorithm was designed to solve the corresponding optimization problem with fast convergence. Furthermore, NNSG provides a unified perceptiveness for a number of graph-based learning methods and linear regression methods. The experimental results showed that NNSG can obtain very high classification accuracy and greatly outperforms conventional G-SSL methods, especially some conventional graph construction methods. 12. Teaching the Concept of Breakdown Point in Simple Linear Regression.
Science.gov (United States) Chan, Wai-Sum 2001-01-01 Most introductory textbooks on simple linear regression analysis mention the fact that extreme data points have a great influence on ordinary least-squares regression estimation; however, not many textbooks provide a rigorous mathematical explanation of this phenomenon. Suggests a way to fill this gap by teaching students the concept of breakdown… 13. Testing hypotheses for differences between linear regression lines Science.gov (United States) Stanley J. Zarnoch 2009-01-01 Five hypotheses are identified for testing differences between simple linear regression lines. The distinctions between these hypotheses are based on a priori assumptions and illustrated with full and reduced models. The contrast approach is presented as an easy and complete method for testing for overall differences between the regressions and for making pairwise... 14. Evaluation of Linear Regression Simultaneous Myoelectric Control Using Intramuscular EMG. Science.gov (United States) Smith, Lauren H; Kuiken, Todd A; Hargrove, Levi J 2016-04-01 The objective of this study was to evaluate the ability of linear regression models to decode patterns of muscle coactivation from intramuscular electromyogram (EMG) and provide simultaneous myoelectric control of a virtual 3-DOF wrist/hand system. Performance was compared to the simultaneous control of conventional myoelectric prosthesis methods using intramuscular EMG (parallel dual-site control), an approach that requires users to independently modulate individual muscles in the residual limb, which can be challenging for amputees. Linear regression control was evaluated in eight able-bodied subjects during a virtual Fitts' law task and was compared to performance of eight subjects using parallel dual-site control. An offline analysis also evaluated how different types of training data affected prediction accuracy of linear regression control. The two control systems demonstrated similar overall performance; however, the linear regression method demonstrated improved performance for targets requiring use of all three DOFs, whereas parallel dual-site control demonstrated improved performance for targets that required use of only one DOF. Subjects using linear regression control could more easily activate multiple DOFs simultaneously, but often experienced unintended movements when trying to isolate individual DOFs. Offline analyses also suggested that the method used to train linear regression systems may influence controllability. Linear regression myoelectric control using intramuscular EMG provided an alternative to parallel dual-site control for 3-DOF simultaneous control at the wrist and hand. The two methods demonstrated different strengths in controllability, highlighting the tradeoff between providing simultaneous control and the ability to isolate individual DOFs when desired. 15. Estimating monotonic rates from biological data using local linear regression. Science.gov (United States) Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R 2017-03-01 Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities.
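The local linear regression approach described in entry 15 replaces ad hoc truncation with an objective scan over candidate windows; a simplified Python sketch (window chosen by best local R² on a synthetic trace; this is a stand-in, not the package's actual selection criteria):

```python
import numpy as np

def local_linear_rate(t, y, width=20):
    """Return (slope, R^2) of the best-fitting local window (highest R^2)."""
    best = (None, -np.inf)
    for i in range(len(t) - width):
        ts, ys = t[i:i + width], y[i:i + width]
        b, a = np.polyfit(ts, ys, 1)            # local slope, intercept
        resid = ys - (a + b * ts)
        r2 = 1 - resid.var() / ys.var()
        if r2 > best[1]:
            best = (b, r2)
    return best

# Noisy, saturating oxygen-consumption-like trace (synthetic).
rng = np.random.default_rng(2)
t = np.linspace(0, 60, 300)
y = 100 - 40 * (1 - np.exp(-t / 30)) + rng.normal(0, 1.0, t.size)
print(local_linear_rate(t, y))
```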
Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd. 16. Comparison of Classical Linear Regression and Orthogonal Regression According to the Sum of Squares Perpendicular Distances OpenAIRE KELEŞ, Taliha; ALTUN, Murat 2016-01-01 Regression analysis is a statistical technique for investigating and modeling the relationship between variables. The purpose of this study was the trivial presentation of the equation for orthogonal regression (OR) and the comparison of classical linear regression (CLR) and OR techniques with respect to the sum of squared perpendicular distances. For that purpose, the analyses were shown by an example. It was found that the sum of squared perpendicular distances of OR is smaller. Thus, it wa... 17. Use of probabilistic weights to enhance linear regression myoelectric control Science.gov (United States) Smith, Lauren H.; Kuiken, Todd A.; Hargrove, Levi J. 2015-12-01 Objective. Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Approach. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts’ law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Main results. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks (p < 0.05) by preventing extraneous movement at additional DOFs. Similar results were seen in experiments with two transradial amputees. Though goodness-of-fit evaluations suggested that the EMG feature distributions showed some deviations from the Gaussian, equal-covariance assumptions used in this experiment, the assumptions were sufficiently met to provide improved performance compared to linear regression control. Significance. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs. 18. Biostatistics Series Module 6: Correlation and Linear Regression.
Science.gov (United States) Hazra, Avijit; Gogtay, Nithya 2016-01-01 Correlation and linear regression are the most commonly used techniques for quantifying the association between two numeric variables. Correlation quantifies the strength of the linear relationship between paired variables, expressing this as a correlation coefficient. If both variables x and y are normally distributed, we calculate Pearson's correlation coefficient (r). If the normality assumption is not met for one or both variables in a correlation analysis, a rank correlation coefficient, such as Spearman's rho (ρ), may be calculated. A hypothesis test of correlation tests whether the linear relationship between the two variables holds in the underlying population, in which case it returns a P value; a confidence interval for the population correlation coefficient can also be calculated for an idea of the correlation in the population. The value r² denotes the proportion of the variability of the dependent variable y that can be attributed to its linear relation with the independent variable x and is called the coefficient of determination. Linear regression is a technique that attempts to link two correlated variables x and y in the form of a mathematical equation (y = a + bx), such that given the value of one variable the other may be predicted. In general, the method of least squares is applied to obtain the equation of the regression line. Correlation and linear regression analysis are based on certain assumptions pertaining to the data sets. If these assumptions are not met, misleading conclusions may be drawn. The first assumption is that of a linear relationship between the two variables. A scatter plot is essential before embarking on any correlation-regression analysis to show that this is indeed the case. Outliers or clustering within data sets can distort the correlation coefficient value. Finally, it is vital to remember that though strong correlation can be a pointer toward causation, the two are not synonymous. 19. Linear regression methods according to objective functions OpenAIRE Yasemin Sisman; Sebahattin Bektas 2012-01-01 The aim of the study is to explain the parameter estimation methods and the regression analysis. The simple linear regression methods, grouped according to the objective function, are introduced. The numerical solution is achieved for the simple linear regression methods according to the objective functions of the Least Squares and the Least Absolute Value adjustment methods. The success of the applied methods is analyzed using their objective function values. 20. Optimal choice of basis functions in the linear regression analysis International Nuclear Information System (INIS) Khotinskij, A.M. 1988-01-01 The problem of the optimal choice of basis functions in linear regression analysis is investigated. A step algorithm with an estimate of its efficiency, which holds true for a finite number of measurements, is suggested. Conditions providing a probability of correct choice close to 1 are formulated. Application of the step algorithm to the analysis of decay curves is substantiated. 8 refs 1. Common pitfalls in statistical analysis: Linear regression analysis Directory of Open Access Journals (Sweden) Rakesh Aggarwal 2017-01-01 Full Text Available In a previous article in this series, we explained correlation analysis, which describes the strength of the relationship between two continuous variables. In this article, we deal with linear regression analysis, which predicts the value of one continuous variable from another.
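The quantities entry 18 walks through (r, ρ, r², and the least-squares line y = a + bx) take only a few lines to compute; a sketch with toy paired data:

```python
import numpy as np
from scipy import stats

x = np.array([2.0, 4.0, 5.0, 7.0, 9.0])    # e.g., years since diagnosis
y = np.array([6.1, 6.6, 7.0, 7.4, 8.1])    # e.g., a measured outcome

r, p_value = stats.pearsonr(x, y)           # Pearson correlation and its P value
rho, _ = stats.spearmanr(x, y)              # rank correlation alternative
b, a = np.polyfit(x, y, 1)                  # least-squares line y = a + b*x

print(f"r = {r:.3f}, r^2 = {r**2:.3f}, P = {p_value:.4f}")
print(f"rho = {rho:.3f}, fitted line: y = {a:.2f} + {b:.3f} x")
```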
We also discuss the assumptions and pitfalls associated with this analysis. 2. How Robust Is Linear Regression with Dummy Variables? Science.gov (United States) Blankmeyer, Eric 2006-01-01 Researchers in education and the social sciences make extensive use of linear regression models in which the dependent variable is continuous-valued while the explanatory variables are a combination of continuous-valued regressors and dummy variables. The dummies partition the sample into groups, some of which may contain only a few observations.… 3. On the null distribution of Bayes factors in linear regression Science.gov (United States) We show that under the null, 2 log(Bayes factor) is asymptotically distributed as a weighted sum of chi-squared random variables with a shifted mean. This claim holds for Bayesian multi-linear regression with a family of conjugate priors, namely, the normal-inverse-gamma prior, the g-prior, and... 4. Fitting program for linear regressions according to Mahon (1996) Energy Technology Data Exchange (ETDEWEB) 2018-01-09 This program takes the user's input data and fits a linear regression to it using the prescription presented by Mahon (1996). Compared to the commonly used York fit, this method has the correct prescription for measurement error propagation. This software should facilitate the proper fitting of measurements with a simple interface. 5. Data Transformations for Inference with Linear Regression: Clarifications and Recommendations Science.gov (United States) Pek, Jolynn; Wong, Octavia; Wong, C. M. 2017-01-01 Data transformations have been promoted as a popular and easy-to-implement remedy to address the assumption of normally distributed errors (in the population) in linear regression. However, the application of data transformations introduces non-ignorable complexities which should be fully appreciated before their implementation. This paper adds to… 6. Hierarchical and Non-Hierarchical Linear and Non-Linear Clustering Methods to “Shakespeare Authorship Question” Directory of Open Access Journals (Sweden) Refat Aljumily 2015-09-01 Full Text Available A few literary scholars have long claimed that Shakespeare did not write some of his best plays (history plays and tragedies) and proposed at one time or another various suspect authorship candidates. Most modern-day scholars of Shakespeare have rejected this claim, arguing that his name appearing on the plays and poems as the author is strong evidence that Shakespeare wrote them. This has led to a long-running scholarly debate. Stylometry is a fast-growing field often used to attribute authorship to anonymous or disputed texts. Stylometric attempts to resolve this literary puzzle have raised interesting questions over the past few years. The following paper contributes to “the Shakespeare authorship question” by using a mathematically-based methodology to examine the hypothesis that Shakespeare wrote all the disputed plays traditionally attributed to him. More specifically, the mathematically based methodology used here is based on Mean Proximity, as a linear hierarchical clustering method, and on Principal Components Analysis, as a non-hierarchical linear clustering method. It is also based, for the first time in the domain, on Self-Organizing Map U-Matrix and Voronoi Map, as non-linear clustering methods to cover the possibility that our data contains significant non-linearities.
Vector Space Model (VSM) is used to convert texts into vectors in a high-dimensional space, the aim of which is to compare the degrees of similarity within and between limited samples of text (the disputed plays). The comparison covers the various works and plays assumed to have been written by Shakespeare and by possible alternative authors, notably Sir Francis Bacon, Christopher Marlowe, John Fletcher, and Thomas Kyd, where “similarity” is defined in terms of a correlation/distance coefficient measure based on the frequency-of-usage profiles of function words, word bi-grams, and character triple-grams. The claim that Shakespeare authored all the disputed… 7. Linear regression and sensitivity analysis in nuclear reactor design International Nuclear Information System (INIS) Kumar, Akansha; Tsvetkov, Pavel V.; McClarren, Ryan G. 2015-01-01 Highlights: • Presented a benchmark for the applicability of linear regression to complex systems. • Applied linear regression to a nuclear reactor power system. • Performed neutronics, thermal–hydraulics, and energy conversion using Brayton’s cycle for the design of a GCFBR. • Performed detailed sensitivity analysis for a set of parameters in a nuclear reactor power system. • Modeled and developed reactor design using MCNP, regression using R, and thermal–hydraulics in Java. - Abstract: The paper presents a general strategy applicable for sensitivity analysis (SA) and uncertainty quantification analysis (UA) of parameters related to a nuclear reactor design. This work also validates the use of linear regression (LR) for predictive analysis in a nuclear reactor design. The analysis helps to determine the parameters on which an LR model can be fit for predictive analysis. For those parameters, a regression surface is created based on trial data and predictions are made using this surface. A general strategy of SA to determine and identify the influential parameters that affect the operation of the reactor is mentioned. Identification of design parameters and validation of the linearity assumption for the application of LR to reactor design, based on a set of tests, is performed. The testing methods used to determine the behavior of the parameters can be used as a general strategy for UA and SA of nuclear reactor models and thermal-hydraulics calculations. A design of a gas-cooled fast breeder reactor (GCFBR), with thermal–hydraulics and energy transfer, has been used for the demonstration of this method. MCNP6 is used to simulate the GCFBR design and perform the necessary criticality calculations. Java is used to build and run input samples and to extract data from the output files of MCNP6, and R is used to perform regression analysis, multivariate variance analysis, and analysis of the collinearity of the data 8. Direction of Effects in Multiple Linear Regression Models. Science.gov (United States) Wiedermann, Wolfgang; von Eye, Alexander 2015-01-01 Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models.
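A bare-bones version of the residual-based direction-of-dependence check in entry 8: fit both candidate directions and compare the skewness of the residuals. This is a sketch on synthetic data; the formal inference procedures listed just below are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# True data-generating direction: x -> y, with a skewed predictor.
x = rng.exponential(1.0, 1000)
y = 0.8 * x + rng.normal(0, 0.5, 1000)

def residual_skew(pred, out):
    b, a = np.polyfit(pred, out, 1)          # simple regression out ~ pred
    return stats.skew(out - (a + b * pred))

print("skew of e(y|x):", residual_skew(x, y))  # near 0 in the true direction
print("skew of e(x|y):", residual_skew(y, x))  # clearly skewed in the wrong one
```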
It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon the direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed. 9. SPLINE LINEAR REGRESSION USED FOR EVALUATING FINANCIAL ASSETS Directory of Open Access Journals (Sweden) Liviu GEAMBAŞU 2010-12-01 Full Text Available One of the most important preoccupations of financial market participants was, and still is, the problem of determining more precisely the trend of financial asset prices. To solve this problem, many scientific papers have been written and many mathematical and statistical models developed in order to better determine the trend of financial asset prices. While simple linear models were until recently widely used because they are easy to apply, the financial crises that have affected the world economy since 2008 highlight the need to adapt mathematical models to the variation of the economy. The spline linear regression is a model that is simple to use yet adapted to the realities of economic life. This type of regression keeps the regression function continuous but splits the studied data into intervals with homogeneous characteristics. The characteristics of each interval are highlighted, as well as the evolution of the market over all the intervals, resulting in reduced standard errors. The first objective of the article is the theoretical presentation of the spline linear regression, with reference to national and international scientific papers on the subject. The second objective is applying the theoretical model to data from the Bucharest Stock Exchange 10. Simple and multiple linear regression: sample size considerations. Science.gov (United States) Hanley, James A 2016-11-01 The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved. 11.
Implementing fuzzy polynomial interpolation (FPI) and fuzzy linear regression (LFR) Directory of Open Access Journals (Sweden) Maria Cristina Floreno 1996-05-01 Full Text Available This paper presents some preliminary results arising within a general framework concerning the development of software tools for fuzzy arithmetic. The program is in a preliminary stage. What has already been implemented consists of a set of routines for elementary operations, optimized function evaluation, interpolation and regression. Some of these have been applied to real problems. This paper describes a prototype of a library in C++ for polynomial interpolation of fuzzifying functions, a set of routines in FORTRAN for fuzzy linear regression and a program with a graphical user interface allowing the use of such routines. 12. Stochastic development regression on non-linear manifolds DEFF Research Database (Denmark) Kühnel, Line; Sommer, Stefan Horst 2017-01-01 We introduce a regression model for data on non-linear manifolds. The model describes the relation between a set of manifold valued observations, such as shapes of anatomical objects, and Euclidean explanatory variables. The approach is based on stochastic development of Euclidean diffusion processes to the manifold. Defining the data distribution as the transition distribution of the mapped stochastic process, parameters of the model, the non-linear analogue of design matrix and intercept, are found via maximum likelihood. The model is intrinsically related to the geometry encoded... 13. Neighborhood social capital and crime victimization: comparison of spatial regression analysis and hierarchical regression analysis. Science.gov (United States) Takagi, Daisuke; Ikeda, Ken'ichi; Kawachi, Ichiro 2012-11-01 Crime is an important determinant of public health outcomes, including quality of life, mental well-being, and health behavior. A body of research has documented the association between community social capital and crime victimization. The association between social capital and crime victimization has been examined at multiple levels of spatial aggregation, ranging from entire countries, to states, metropolitan areas, counties, and neighborhoods. In multilevel analysis, the spatial boundaries at level 2 are most often drawn from administrative boundaries (e.g., Census tracts in the U.S.). One problem with adopting administrative definitions of neighborhoods is that it ignores spatial spillover. We conducted a study of social capital and crime victimization in one ward of Tokyo city, using a spatial Durbin model with an inverse-distance weighting matrix that assigned each respondent a unique level of "exposure" to social capital based on all other residents' perceptions. The study is based on a postal questionnaire sent to 20-69-year-old residents of Arakawa Ward, Tokyo. The response rate was 43.7%. We examined the contextual influence of generalized trust, perceptions of reciprocity, two types of social network variables, as well as two principal components of social capital (constructed from the above four variables). Our outcome measure was self-reported crime victimization in the last five years. In the spatial Durbin model, we found that neighborhood generalized trust, reciprocity, supportive networks and two principal components of social capital were each inversely associated with crime victimization.
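The inverse-distance "exposure" used in entry 13 can be written down directly: each respondent's contextual social capital is a distance-weighted average of all other respondents' scores. A toy sketch with hypothetical coordinates and trust scores:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
xy = rng.uniform(0, 10, size=(n, 2))       # respondent coordinates
trust = rng.normal(3.0, 0.5, size=n)       # individual trust scores

# Inverse-distance weight matrix with a zero diagonal, rows normalized.
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
W = np.zeros_like(d)
np.divide(1.0, d, out=W, where=d > 0)
W /= W.sum(axis=1, keepdims=True)

exposure = W @ trust                       # each person's weighted "exposure"
print(exposure)
```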
By contrast, a multilevel regression performed with the same data (using administrative neighborhood boundaries) found generally null associations between neighborhood social capital and crime. Spatial regression methods may be more appropriate for investigating the contextual influence of social capital in homogeneous cultural settings such as Japan. 14. Computer software for linear and nonlinear regression in organic NMR International Nuclear Information System (INIS) Canto, Eduardo Leite do; Rittner, Roberto 1991-01-01 Calculations involving two-variable linear regressions require specific procedures generally not familiar to chemists. To meet the need for fast and efficient handling of NMR data, a self-explanatory and PC-portable software package has been developed, which allows the user to produce and use diskette-recorded tables containing chemical shifts or any other substituent physical-chemical measurements and constants (σT, σoR, Es, ...) 15. Multicollinearity in applied economics research and the Bayesian linear regression OpenAIRE EISENSTAT, Eric 2016-01-01 This article revisits the popular issue of collinearity amongst explanatory variables in the context of a multiple linear regression analysis, particularly in empirical studies within social science related fields. Some important interpretations and explanations are highlighted from the econometrics literature with respect to the effects of multicollinearity on statistical inference, as well as the general shortcomings of the once fervent search for methods intended to detect and mitigate thes... 16. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models CERN Document Server Faraway, Julian J 2005-01-01 Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ... 17. Establishment of regression dependences. Linear and nonlinear dependences International Nuclear Information System (INIS) Onishchenko, A.M. 1994-01-01 The main problems of determining linear and 19 types of nonlinear regression dependences are discussed in full. It is taken into account that total dispersions are the sum of measurement dispersions and the dispersions of parameter variation themselves. Approaches to determining all dispersions are described. It is shown that the least-squares fit gives inconsistent estimates for industrial objects and processes. Correction methods that take into account comparable measurement errors in both variables make it possible to obtain consistent estimates of the regression equation parameters.
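One standard correction of the kind entry 17 alludes to is Deming regression, which remains consistent when both variables carry comparable measurement error; a sketch (delta, the ratio of error variances, is assumed known here, and this is an illustration rather than the paper's exact technique):

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Deming regression slope/intercept; delta = var(err_y) / var(err_x)."""
    mx, my = x.mean(), y.mean()
    sxx = np.sum((x - mx) ** 2)
    syy = np.sum((y - my) ** 2)
    sxy = np.sum((x - mx) * (y - my))
    b = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
                                     + 4 * delta * sxy ** 2)) / (2 * sxy)
    return b, my - b * mx

rng = np.random.default_rng(5)
true_x = rng.uniform(0, 10, 500)
x = true_x + rng.normal(0, 1.0, 500)       # predictor measured with error
y = 2.0 * true_x + 1.0 + rng.normal(0, 1.0, 500)
print(deming(x, y))                        # near (2.0, 1.0); OLS would attenuate
```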
The conditions under which applying the correction technique is worthwhile are given. The technique for determining nonlinear regression dependences, taking into account the form of the dependence and comparable errors in both variables, is described. 6 refs., 1 tab 18. Return-Volatility Relationship: Insights from Linear and Non-Linear Quantile Regression NARCIS (Netherlands) D.E. Allen (David); A.K. Singh (Abhay); R.J. Powell (Robert); M.J. McAleer (Michael); J. Taylor (James); L. Thomas (Lyn) 2013-01-01 The purpose of this paper is to examine the asymmetric relationship between price and implied volatility and the associated extreme quantile dependence using linear and non-linear quantile regression approaches. Our goal in this paper is to demonstrate that the relationship between the… 19. Using the Ridge Regression Procedures to Estimate the Multiple Linear Regression Coefficients Science.gov (United States) Gorgees, Hazim Mansoor; Mahdi, Fatimah Assim 2018-05-01 This article is concerned with comparing the performance of different types of ordinary ridge regression estimators that have already been proposed to estimate the regression parameters when near-exact linear relationships among the explanatory variables are present. For this situation we employ data obtained from the Tagi gas filling company during the period 2008-2010. The main result we reached is that the method based on the condition number performs better than the other stated methods, since it has a smaller mean square error (MSE). 20. On macroeconomic values investigation using fuzzy linear regression analysis Directory of Open Access Journals (Sweden) Richard Pospíšil 2017-06-01 Full Text Available The theoretical background for abstract formalization of the vague phenomenon of complex systems is the fuzzy set theory. In the paper, vague data are defined as specialized fuzzy sets (fuzzy numbers), and a fuzzy linear regression model is described as a fuzzy function with fuzzy numbers as vague parameters. To identify the fuzzy coefficients of the model, the genetic algorithm is used. The linear approximation of the vague function together with its possibility area is analytically and graphically expressed. A suitable application is performed in the tasks of time series fuzzy regression analysis. The time-trend and seasonal cycles, including their possibility areas, are calculated and expressed. The examples are presented from the field of economics, namely the development over time of unemployment, agricultural production, and construction between 2009 and 2011 in the Czech Republic. The results are shown in the form of fuzzy regression models of the time series variables. For the period 2009-2011, the analysis assumptions about the seasonal behaviour of the variables and the relationship between them were confirmed; in 2010, the system behaved more fuzzily and the relationships between the variables were vaguer, which has many causes, from differing elasticities of demand, through state interventions, to globalization and transnational impacts. 1. BRGLM, Interactive Linear Regression Analysis by Least Square Fit International Nuclear Information System (INIS) Ringland, J.T.; Bohrer, R.E.; Sherman, M.E. 1985-01-01 1 - Description of program or function: BRGLM is an interactive program written to fit general linear regression models by least squares and to provide a variety of statistical diagnostic information about the fit. Stepwise and all-subsets regression can be carried out also.
There are facilities for interactive data management (e.g. setting missing value flags, data transformations) and tools for constructing design matrices for the more commonly-used models such as factorials, cubic splines, and auto-regressions. 2 - Method of solution: The least squares computations are based on the orthogonal (QR) decomposition of the design matrix obtained using the modified Gram-Schmidt algorithm. 3 - Restrictions on the complexity of the problem: The current release of BRGLM allows maxima of 1000 observations, 99 variables, and 3000 words of main memory workspace. For a problem with N observations and P variables, the number of words of main memory storage required is MAX(N*(P+6), N*P+P*P+3*N, 3*P*P+6*N). Any linear model may be fit, although the in-memory workspace will have to be increased for larger problems. 2. TYPE Ia SUPERNOVA COLORS AND EJECTA VELOCITIES: HIERARCHICAL BAYESIAN REGRESSION WITH NON-GAUSSIAN DISTRIBUTIONS International Nuclear Information System (INIS) Mandel, Kaisey S.; Kirshner, Robert P.; Foley, Ryan J. 2014-01-01 We investigate the statistical dependence of the peak intrinsic colors of Type Ia supernovae (SNe Ia) on their expansion velocities at maximum light, measured from the Si II λ6355 spectral feature. We construct a new hierarchical Bayesian regression model, accounting for the random effects of intrinsic scatter, measurement error, and reddening by host galaxy dust, and implement a Gibbs sampler and deviance information criteria to estimate the correlation. The method is applied to the apparent colors from BVRI light curves and Si II velocity data for 79 nearby SNe Ia. The apparent color distributions of high-velocity (HV) and normal velocity (NV) supernovae exhibit significant discrepancies for B – V and B – R, but not other colors. Hence, they are likely due to intrinsic color differences originating in the B band, rather than dust reddening. The mean intrinsic B – V and B – R color differences between HV and NV groups are 0.06 ± 0.02 and 0.09 ± 0.02 mag, respectively. A linear model finds significant slopes of –0.021 ± 0.006 and –0.030 ± 0.009 mag (10³ km s⁻¹)⁻¹ for intrinsic B – V and B – R colors versus velocity, respectively. Because the ejecta velocity distribution is skewed toward high velocities, these effects imply non-Gaussian intrinsic color distributions with skewness up to +0.3. Accounting for the intrinsic-color-velocity correlation results in corrections to A_V extinction estimates as large as –0.12 mag for HV SNe Ia and +0.06 mag for NV events. Velocity measurements from SN Ia spectra have the potential to diminish systematic errors from the confounding of intrinsic colors and dust reddening affecting supernova distances. 3. A comparison of random forest regression and multiple linear regression for prediction in neuroscience. Science.gov (United States) Smith, Paul F; Ganesh, Siva; Liu, Ping 2013-10-30 Regression is a common statistical tool for prediction in neuroscience. However, linear regression is by far the most common form of regression used, with regression trees receiving comparatively little attention.
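A comparison of the kind described in entry 3 is straightforward to set up with scikit-learn; a sketch on synthetic data carrying a linear signal, where MLR should likewise come out ahead (hypothetical features, not the neurochemical data):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(120, 8))                          # 8 predictors
y = X @ rng.normal(size=8) + rng.normal(0, 0.5, 120)   # linear signal + noise

for name, model in [("MLR", LinearRegression()),
                    ("RFR", RandomForestRegressor(n_estimators=200,
                                                  random_state=0))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
```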
In this study, the results of conventional multiple linear regression (MLR) were compared with those of random forest regression (RFR), in the prediction of the concentrations of 9 neurochemicals in the vestibular nucleus complex and cerebellum that are part of the l-arginine biochemical pathway (agmatine, putrescine, spermidine, spermine, l-arginine, l-ornithine, l-citrulline, glutamate and γ-aminobutyric acid (GABA)). The R² values for the MLRs were higher than the proportion of variance explained values for the RFRs: 6/9 of them were ≥ 0.70 compared to 4/9 for RFRs. Even the variables that had the lowest R² values for the MLRs, e.g. ornithine (0.50) and glutamate (0.61), had much lower proportion of variance explained values for the RFRs (0.27 and 0.49, respectively). The RSE values for the MLRs were lower than those for the RFRs in all but two cases. In general, MLRs seemed to be superior to the RFRs in terms of predictive value and error. In the case of this data set, MLR appeared to be superior to RFR in terms of its explanatory value and error. This result suggests that MLR may have advantages over RFR for prediction in neuroscience with this kind of data set, but that RFR can still have good predictive value in some cases. Copyright © 2013 Elsevier B.V. All rights reserved. 4. Relative Importance for Linear Regression in R: The Package relaimpo Directory of Open Access Journals (Sweden) Ulrike Gromping 2006-09-01 Full Text Available Relative importance is a topic that has seen a lot of interest in recent years, particularly in applied work. The R package relaimpo implements six different metrics for assessing relative importance of regressors in the linear model, two of which are recommended: averaging over orderings of regressors and a newly proposed metric (Feldman 2005) called pmvd. Apart from delivering the metrics themselves, relaimpo also provides (exploratory) bootstrap confidence intervals. This paper offers a brief tutorial introduction to the package. The methods and relaimpo’s functionality are illustrated using the data set swiss that is generally available in R. The paper targets readers who have a basic understanding of multiple linear regression. For the background of more advanced aspects, references are provided. 5. Stochastic development regression on non-linear manifolds DEFF Research Database (Denmark) Kühnel, Line; Sommer, Stefan Horst 2017-01-01 We introduce a regression model for data on non-linear manifolds. The model describes the relation between a set of manifold valued observations, such as shapes of anatomical objects, and Euclidean explanatory variables. The approach is based on stochastic development of Euclidean diffusion processes to the manifold. Defining the data distribution as the transition distribution of the mapped stochastic process, parameters of the model, the non-linear analogue of design matrix and intercept, are found via maximum likelihood. The model is intrinsically related to the geometry encoded in the connection of the manifold. We propose an estimation procedure which applies the Laplace approximation of the likelihood function. A simulation study of the performance of the model is performed and the model is applied to a real dataset of Corpus Callosum shapes. 6. Multivariate sparse group lasso for the multivariate multiple linear regression with an arbitrary group structure.
Science.gov (United States) Li, Yanming; Nan, Bin; Zhu, Ji 2015-06-01 We propose a multivariate sparse group lasso variable selection and estimation method for data with high-dimensional predictors as well as high-dimensional response variables. The method is carried out through a penalized multivariate multiple linear regression model with an arbitrary group structure for the regression coefficient matrix. It suits many biology studies well in detecting associations between multiple traits and multiple predictors, with each trait and each predictor embedded in some biological functional groups such as genes, pathways or brain regions. The method is able to effectively remove unimportant groups as well as unimportant individual coefficients within important groups, particularly for large p small n problems, and is flexible in handling various complex group structures such as overlapping or nested or multilevel hierarchical structures. The method is evaluated through extensive simulations with comparisons to the conventional lasso and group lasso methods, and is applied to an eQTL association study. © 2015, The International Biometric Society. 7. Robust linear registration of CT images using random regression forests Science.gov (United States) Konukoglu, Ender; Criminisi, Antonio; Pathak, Sayan; Robertson, Duncan; White, Steve; Haynor, David; Siddiqui, Khan 2011-03-01 Global linear registration is a necessary first step for many different tasks in medical image analysis. Comparing longitudinal studies [1], cross-modality fusion [2], and many other applications depend heavily on the success of the automatic registration. The robustness and efficiency of this step are crucial as it affects all subsequent operations. Most common techniques cast the linear registration problem as the minimization of a global energy function based on the image intensities. Although these algorithms have proved useful, their robustness in fully automated scenarios is still an open question. In fact, the optimization step often gets caught in local minima yielding unsatisfactory results. Recent algorithms constrain the space of registration parameters by exploiting implicit or explicit organ segmentations, thus increasing robustness [4, 5]. In this work we propose a novel robust algorithm for automatic global linear image registration. Our method uses random regression forests to estimate posterior probability distributions for the locations of anatomical structures, represented as axis-aligned bounding boxes [6]. These posterior distributions are later integrated in a global linear registration algorithm. The biggest advantage of our algorithm is that it does not require pre-defined segmentations or regions. Yet it yields robust registration results. We compare the robustness of our algorithm with that of the state-of-the-art Elastix toolbox [7]. Validation is performed via 1464 pair-wise registrations in a database of very diverse 3D CT images. We show that our method decreases the "failure" rate of the global linear registration from 12.5% (Elastix) to only 1.9%. 8. Estimating Loess Plateau Average Annual Precipitation with Multiple Linear Regression Kriging and Geographically Weighted Regression Kriging Directory of Open Access Journals (Sweden) Qiutong Jin 2016-06-01 Full Text Available Estimating the spatial distribution of precipitation is an important and challenging task in hydrology, climatology, ecology, and environmental science.
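Regression Kriging of the kind entry 8 applies splits prediction into a regression trend on covariates plus spatial interpolation of the trend's residuals; a much-simplified sketch that substitutes inverse-distance weighting for a fitted variogram (synthetic stations and covariates, illustrative only):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 100
xy = rng.uniform(0, 100, size=(n, 2))              # station coordinates
elev = rng.uniform(200, 3000, n)                   # covariate: elevation
precip = 600 - 0.1 * elev + rng.normal(0, 20, n)   # synthetic annual precip

trend = LinearRegression().fit(elev[:, None], precip)
resid = precip - trend.predict(elev[:, None])

def predict(pt, pt_elev, power=2.0):
    """Trend from the covariate + IDW-interpolated residual at point pt."""
    d = np.linalg.norm(xy - pt, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return trend.predict([[pt_elev]])[0] + np.sum(w * resid) / np.sum(w)

print(predict(np.array([50.0, 50.0]), pt_elev=1500.0))
```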
In order to generate a highly accurate distribution map of average annual precipitation for the Loess Plateau in China, multiple linear regression Kriging (MLRK) and geographically weighted regression Kriging (GWRK) methods were employed using precipitation data from the period 1980–2010 from 435 meteorological stations. The predictors in regression Kriging were selected by stepwise regression analysis from many auxiliary environmental factors, such as elevation (DEM), normalized difference vegetation index (NDVI), solar radiation, slope, and aspect. All predictor distribution maps had a 500 m spatial resolution. Validation precipitation data from 130 hydrometeorological stations were used to assess the prediction accuracies of the MLRK and GWRK approaches. Results showed that both prediction maps with a 500 m spatial resolution interpolated by MLRK and GWRK had a high accuracy and captured detailed spatial distribution data; however, MLRK produced a lower prediction error and a higher variance explanation than GWRK, although the differences were small, in contrast to conclusions from similar studies. 9. Comparison of Linear and Non-linear Regression Analysis to Determine Pulmonary Pressure in Hyperthyroidism. Science.gov (United States) Scarneciu, Camelia C; Sangeorzan, Livia; Rus, Horatiu; Scarneciu, Vlad D; Varciu, Mihai S; Andreescu, Oana; Scarneciu, Ioan 2017-01-01 This study aimed at assessing the incidence of pulmonary hypertension (PH) in newly diagnosed hyperthyroid patients and at finding a simple model showing the complex functional relation between pulmonary hypertension in hyperthyroidism and the factors causing it. The 53 hyperthyroid patients (H-group) were evaluated mainly by using an echocardiographical method and compared with 35 euthyroid (E-group) and 25 healthy people (C-group). In order to identify the factors causing pulmonary hypertension, the statistical method of comparing the values of arithmetical means is used. The functional relation between the two random variables (PAPs and each of the factors determining it within our research study) can be expressed by a linear or a non-linear function. By applying the linear regression method described by a first-degree equation, the line of regression (linear model) has been determined; by applying the non-linear regression method described by a second-degree equation, a parabola-type curve of regression (non-linear or polynomial model) has been determined. We compared and validated these two models by calculating the coefficient of determination (criterion 1), comparing residuals (criterion 2), applying the AIC criterion (criterion 3), and using the F-test (criterion 4). From the H-group, 47% have pulmonary hypertension completely reversible once euthyroidism is achieved. The factors causing pulmonary hypertension were identified: previously known: level of free thyroxin, pulmonary vascular resistance, cardiac output; newly identified in this study: pretreatment period, age, systolic blood pressure. According to the four criteria and to the clinical judgment, we consider that the polynomial model (graphically, parabola-type) is better than the linear one. The better model showing the functional relation between the pulmonary hypertension in hyperthyroidism and the factors identified in this study is given by a polynomial equation of second degree. 10.
10. High-throughput quantitative biochemical characterization of algal biomass by NIR spectroscopy; multiple linear regression and multivariate linear regression analysis. Science.gov (United States) Laurens, L M L; Wolfrum, E J 2013-12-18 One of the challenges associated with microalgal biomass characterization and the comparison of microalgal strains and conversion processes is the rapid determination of the composition of algae. We have developed and applied a high-throughput screening technology based on near-infrared (NIR) spectroscopy for the rapid and accurate determination of algal biomass composition. We show that NIR spectroscopy can accurately predict the full composition using multivariate linear regression analysis of varying lipid, protein, and carbohydrate content of algal biomass samples from three strains. We also demonstrate a high quality of predictions of an independent validation set. A high-throughput 96-well configuration for spectroscopy gives equally good prediction relative to a ring-cup configuration, and thus, spectra can be obtained from as little as 10-20 mg of material. We found that lipids exhibit a dominant, distinct, and unique fingerprint in the NIR spectrum that allows for the use of single and multiple linear regression of respective wavelengths for the prediction of the biomass lipid content. This is not the case for carbohydrate and protein content, and thus, the use of multivariate statistical modeling approaches remains necessary. 11. Convergence diagnostics for Eigenvalue problems with linear regression model International Nuclear Information System (INIS) Shi, Bo; Petrovic, Bojan 2011-01-01 Although the Monte Carlo method has been extensively used for criticality/Eigenvalue problems, a reliable, robust, and efficient convergence diagnostics method is still desired. Most methods are based on integral parameters (multiplication factor, entropy) and either condense the local distribution information into a single value (e.g., entropy) or even disregard it. We propose to employ the detailed cycle-by-cycle local flux evolution obtained by using the mesh tally mechanism to assess the source and flux convergence. By applying a linear regression model to each individual mesh in a mesh tally for convergence diagnostics, a global convergence criterion can be obtained. We exemplify this method on two problems and obtain promising diagnostics results. (author) 12. Latent Variable Regression 4-Level Hierarchical Model Using Multisite Multiple-Cohorts Longitudinal Data. CRESST Report 801 Science.gov (United States) Choi, Kilchan 2011-01-01 This report explores a new latent variable regression 4-level hierarchical model for monitoring school performance over time using multisite multiple-cohorts longitudinal data. This kind of data set has a 4-level hierarchical structure: time-series observation nested within students who are nested within different cohorts of students. These… 13. Geographically weighted linear regression in a GIS environment [Regressão linear geograficamente ponderada em ambiente SIG] Directory of Open Access Journals (Sweden) Luís Eduardo Ximenes Carvalho 2009-10-01 Full Text Available This article addresses theoretical considerations and results from the implementation, in a GIS environment, of a confirmatory spatial statistics model, geographically weighted linear regression (RGP), not available in free software. The theoretical aspects of this local spatial regression model are discussed at length, given the scarce literature on the subject. The RGP model was implemented in the GISDK programming language of the TransCAD GIS-T, making comprehensive use of the data manipulation and georeferencing tools and the spatial analysis routines available on GIS platforms. The outcome is expected to be an important, if still partial, tool that will contribute to the understanding and refinement of the modelling of geographic phenomena so widely analyzed in transportation planning studies.
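Geographically weighted regression itself reduces to solving a weighted least squares problem at every target location, with weights that decay with distance. A compact Python sketch of that core, on synthetic coordinates with a spatially drifting slope; the bandwidth and kernel choices are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(2)
    coords = rng.uniform(0, 100, size=(300, 2))     # synthetic station locations
    x = rng.normal(size=300)
    beta = 1 + coords[:, 0] / 100                   # slope drifts from west to east
    y = beta * x + rng.normal(0, 0.3, size=300)

    def gwr_at(point, bandwidth=20.0):
        """Local weighted least squares at one location (Gaussian kernel)."""
        d = np.linalg.norm(coords - point, axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)
        X = np.column_stack([np.ones_like(x), x])
        WX = X * w[:, None]
        return np.linalg.solve(X.T @ WX, WX.T @ y)  # [local intercept, local slope]

    for px in (10, 50, 90):
        print(px, gwr_at(np.array([px, 50.0])).round(2))

The recovered local slopes increase from west to east, mirroring how GWR lets coefficients vary over space where a single global regression would average the pattern away.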
14. Modeling Pan Evaporation for Kuwait by Multiple Linear Regression Science.gov (United States) Almedeij, Jaber 2012-01-01 Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements of substantial continuity coverage, within a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. The multiple linear regression technique is used with a procedure of variable selection for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data by using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results that are in reasonable agreement with observed values. PMID:23226984 15. Enhancement of Visual Field Predictions with Pointwise Exponential Regression (PER) and Pointwise Linear Regression (PLR). Science.gov (United States) Morales, Esteban; de Leon, John Mark S; Abdollahi, Niloufar; Yu, Fei; Nouri-Mahdavi, Kouros; Caprioli, Joseph 2016-03-01 The study was conducted to evaluate threshold smoothing algorithms to enhance prediction of the rates of visual field (VF) worsening in glaucoma. We studied 798 patients with primary open-angle glaucoma and 6 or more years of follow-up who underwent 8 or more VF examinations. Thresholds at each VF location for the first 4 years or first half of the follow-up time (whichever was greater) were smoothed with clusters defined by the nearest neighbor (NN), Garway-Heath, Glaucoma Hemifield Test (GHT), and weighting by the correlation of rates at all other VF locations. Thresholds were regressed with a pointwise exponential regression (PER) model and a pointwise linear regression (PLR) model. Smaller root mean square error (RMSE) values of the differences between the observed and the predicted thresholds at the last two follow-ups indicated better model predictions. The mean (SD) follow-up times for the smoothing and prediction phases were 5.3 (1.5) and 10.5 (3.9) years. The mean RMSE values for the PER and PLR models were: unsmoothed data, 6.09 and 6.55; NN, 3.40 and 3.42; Garway-Heath, 3.47 and 3.48; GHT, 3.57 and 3.74; and correlation of rates, 3.59 and 3.64. Smoothed VF data predicted better than unsmoothed data. Nearest neighbor provided the best predictions; PER also predicted consistently more accurately than PLR. Smoothing algorithms should be used when forecasting VF results with PER or PLR. The application of smoothing algorithms on VF data can improve forecasting in VF points to assist in treatment decisions. 16.
Principal Covariates Clusterwise Regression (PCCR): Accounting for Multicollinearity and Population Heterogeneity in Hierarchically Organized Data. Science.gov (United States) Wilderjans, Tom Frans; Vande Gaer, Eva; Kiers, Henk A L; Van Mechelen, Iven; Ceulemans, Eva 2017-03-01 In the behavioral sciences, many research questions pertain to a regression problem in that one wants to predict a criterion on the basis of a number of predictors. Although in many cases, ordinary least squares regression will suffice, sometimes the prediction problem is more challenging, for three reasons: first, multiple highly collinear predictors can be available, making it difficult to grasp their mutual relations as well as their relations to the criterion. In that case, it may be very useful to reduce the predictors to a few summary variables, on which one regresses the criterion and which at the same time yields insight into the predictor structure. Second, the population under study may consist of a few unknown subgroups that are characterized by different regression models. Third, the obtained data are often hierarchically structured, with for instance, observations being nested into persons or participants within groups or countries. Although some methods have been developed that partially meet these challenges (i.e., principal covariates regression (PCovR), clusterwise regression (CR), and structural equation models), none of these methods adequately deals with all of them simultaneously. To fill this gap, we propose the principal covariates clusterwise regression (PCCR) method, which combines the key idea's behind PCovR (de Jong & Kiers in Chemom Intell Lab Syst 14(1-3):155-164, 1992) and CR (Späth in Computing 22(4):367-373, 1979). The PCCR method is validated by means of a simulation study and by applying it to cross-cultural data regarding satisfaction with life. 17. Characteristics and Properties of a Simple Linear Regression Model Directory of Open Access Journals (Sweden) Kowal Robert 2016-12-01 Full Text Available A simple linear regression model is one of the pillars of classic econometrics. Despite the passage of time, it continues to raise interest both from the theoretical side as well as from the application side. One of the many fundamental questions in the model concerns determining derivative characteristics and studying the properties existing in their scope, referring to the first of these aspects. The literature of the subject provides several classic solutions in that regard. In the paper, a completely new design is proposed, based on the direct application of variance and its properties, resulting from the non-correlation of certain estimators with the mean, within the scope of which some fundamental dependencies of the model characteristics are obtained in a much more compact manner. The apparatus allows for a simple and uniform demonstration of multiple dependencies and fundamental properties in the model, and it does it in an intuitive manner. The results were obtained in a classic, traditional area, where everything, as it might seem, has already been thoroughly studied and discovered. 18. Exhaustive Search for Sparse Variable Selection in Linear Regression Science.gov (United States) Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato 2018-04-01 We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search method (AES-K) for selecting variables in linear regression. 
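In its simplest form, the exhaustive step can be written directly: enumerate every K-sparse support, fit ordinary least squares on each, and keep the best. A small Python sketch (10 candidate predictors and K = 3, so 120 fits; the sizes and coefficients are invented for illustration):

    import itertools
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(80, 10))
    y = X[:, [1, 4, 7]] @ np.array([2.0, -1.5, 1.0]) + rng.normal(0, 0.5, 80)

    def rss(support):
        cols = list(support)
        beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        r = y - X[:, cols] @ beta
        return float(r @ r)

    K = 3
    # Exhaustive enumeration of all K-sparse supports, as in ES-K.
    best = min(itertools.combinations(range(10), K), key=rss)
    print("selected support:", best)

For realistic dimensions this enumeration explodes combinatorially, which is exactly what motivates the approximate AES-K variant described next.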
With these methods, K-sparse combinations of variables are tested exhaustively assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized as density of states. With this density of states, we can compare different methods for selecting sparse variables such as relaxation and sampling. For large problems where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables density of states to be effectively reconstructed by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found the difficulty to determine K from the data. Using virtual measurement and analysis, we argue that this is caused by data shortage. 19. Perfect observables for the hierarchical non-linear O(N)-invariant σ-model International Nuclear Information System (INIS) Wieczerkowski, C.; Xylander, Y. 1995-05-01 We compute moving eigenvalues and the eigenvectors of the linear renormalization group transformation for observables along the renormalized trajectory of the hierarchical non-linear O(N)-invariant σ-model by means of perturbation theory in the running coupling constant. Moving eigenvectors are defined as solutions to a Callan-Symanzik type equation. (orig.) 20. Weibull and lognormal Taguchi analysis using multiple linear regression International Nuclear Information System (INIS) Piña-Monarrez, Manuel R.; Ortiz-Yañez, Jesús F. 2015-01-01 The paper provides to reliability practitioners with a method (1) to estimate the robust Weibull family when the Taguchi method (TM) is applied, (2) to estimate the normal operational Weibull family in an accelerated life testing (ALT) analysis to give confidence to the extrapolation and (3) to perform the ANOVA analysis to both the robust and the normal operational Weibull family. On the other hand, because the Weibull distribution neither has the normal additive property nor has a direct relationship with the normal parameters (µ, σ), in this paper, the issues of estimating a Weibull family by using a design of experiment (DOE) are first addressed by using an L_9 (3"4) orthogonal array (OA) in both the TM and in the Weibull proportional hazard model approach (WPHM). Then, by using the Weibull/Gumbel and the lognormal/normal relationships and multiple linear regression, the direct relationships between the Weibull and the lifetime parameters are derived and used to formulate the proposed method. Moreover, since the derived direct relationships always hold, the method is generalized to the lognormal and ALT analysis. Finally, the method’s efficiency is shown through its application to the used OA and to a set of ALT data. - Highlights: • It gives the statistical relations and steps to use the Taguchi Method (TM) to analyze Weibull data. • It gives the steps to determine the unknown Weibull family to both the robust TM setting and the normal ALT level. • It gives a method to determine the expected lifetimes and to perform its ANOVA analysis in TM and ALT analysis. • It gives a method to give confidence to the extrapolation in an ALT analysis by using the Weibull family of the normal level. 1. EPMLR: sequence-based linear B-cell epitope prediction method using multiple linear regression. 
Science.gov (United States) Lian, Yao; Ge, Meng; Pan, Xian-Ming 2014-12-19 B-cell epitopes have been studied extensively due to their immunological applications, such as peptide-based vaccine development, antibody production, and disease diagnosis and therapy. Despite several decades of research, the accurate prediction of linear B-cell epitopes has remained a challenging task. In this work, based on the antigen's primary sequence information, a novel linear B-cell epitope prediction model was developed using multiple linear regression (MLR). A 10-fold cross-validation test on a large non-redundant dataset was performed to evaluate the performance of our model. To alleviate the problem caused by the noise of the negative dataset, 300 experiments utilizing 300 sub-datasets were performed. We achieved an overall sensitivity of 81.8%, precision of 64.1% and area under the receiver operating characteristic curve (AUC) of 0.728. We have presented a reliable method for the identification of linear B-cell epitopes using the antigen's primary sequence information. Moreover, a web server EPMLR has been developed for linear B-cell epitope prediction: http://www.bioinfo.tsinghua.edu.cn/epitope/EPMLR/ . 2. Implicit collinearity effect in linear regression: Application to basal ... African Journals Online (AJOL) Collinearity of predictor variables is a severe problem in least squares regression analysis. It contributes to the instability of regression coefficients and leads to poor prediction accuracy. Despite these problems, studies are conducted with a large number of observed and derived variables linked with a response ... 3. Least Squares Adjustment: Linear and Nonlinear Weighted Regression Analysis DEFF Research Database (Denmark) Nielsen, Allan Aasbjerg 2007-01-01 This note primarily describes the mathematics of least squares regression analysis as it is often used in geodesy including land surveying and satellite positioning applications. In these fields regression is often termed adjustment. The note also contains a couple of typical land surveying...... and satellite positioning application examples. In these application areas we are typically interested in the parameters in the model typically 2- or 3-D positions and not in predictive modelling which is often the main concern in other regression analysis applications. Adjustment is often used to obtain...... the clock error) and to obtain estimates of the uncertainty with which the position is determined. Regression analysis is used in many other fields of application both in the natural, the technical and the social sciences. Examples may be curve fitting, calibration, establishing relationships between... 4. Determination of a Differential Item Functioning Procedure Using the Hierarchical Generalized Linear Model Directory of Open Access Journals (Sweden) Tülin Acar 2012-01-01 Full Text Available The aim of this research is to compare the results of differential item functioning (DIF) detection with the hierarchical generalized linear model (HGLM) technique and the results of DIF detection with the logistic regression (LR) and item response theory–likelihood ratio (IRT-LR) techniques on the test items. For this reason, it is first determined whether the students encounter DIF with the HGLM, LR, and IRT-LR techniques according to socioeconomic status (SES) in the Turkish, Social Sciences, and Science subtest items of the Secondary School Institutions Examination. When inspecting the correlations among the techniques in terms of determining the items having DIF, it was discovered that there was a significant correlation between the results of the IRT-LR and LR techniques in all subtests; only in the Science subtest was the correlation between the HGLM and IRT-LR results significant. DIF analyses can be carried out on test items with other DIF detection techniques that were not within the scope of this research, and the results obtained by using the DIF techniques on different sample sizes can be compared.
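Of the techniques compared above, the logistic regression approach to DIF is the most compact to demonstrate: an item is flagged when group membership still predicts the item response after conditioning on ability. A hedged Python sketch with simulated responses, using statsmodels for the likelihood-ratio test; the effect sizes and sample size are invented for illustration:

    import numpy as np
    import statsmodels.api as sm
    from scipy import stats

    rng = np.random.default_rng(4)
    n = 1000
    ability = rng.normal(size=n)            # matching criterion, e.g. total score
    group = rng.integers(0, 2, n)           # e.g. low/high socioeconomic status
    # Simulate one item with uniform DIF: the focal group finds it harder.
    p = 1 / (1 + np.exp(-(1.2 * ability - 0.6 * group)))
    y = (rng.random(n) < p).astype(float)

    X0 = sm.add_constant(np.column_stack([ability]))           # no-DIF model
    X1 = sm.add_constant(np.column_stack([ability, group]))    # + group effect
    ll0 = sm.Logit(y, X0).fit(disp=0).llf
    ll1 = sm.Logit(y, X1).fit(disp=0).llf

    lr = 2 * (ll1 - ll0)                    # likelihood-ratio test for uniform DIF
    print("LR chi2 =", round(lr, 2), "p =", round(float(stats.chi2.sf(lr, df=1)), 4))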
5. Predicting Longitudinal Change in Language Production and Comprehension in Individuals with Down Syndrome: Hierarchical Linear Modeling. Science.gov (United States) Chapman, Robin S.; Hesketh, Linda J.; Kistler, Doris J. 2002-01-01 Longitudinal change in syntax comprehension and production skill, measured over six years, was modeled in 31 individuals (ages 5-20) with Down syndrome. The best fitting Hierarchical Linear Modeling model of comprehension uses age and visual and auditory short-term memory as predictors of initial status, and age for growth trajectory. (Contains… 6. Avoiding Boundary Estimates in Hierarchical Linear Models through Weakly Informative Priors Science.gov (United States) Chung, Yeojin; Rabe-Hesketh, Sophia; Gelman, Andrew; Dorie, Vincent; Liu, Jinchen 2012-01-01 Hierarchical or multilevel linear models are widely used for longitudinal or cross-sectional data on students nested in classes and schools, and are particularly important for estimating treatment effects in cluster-randomized trials, multi-site trials, and meta-analyses. The models can allow for variation in treatment effects, as well as… 7. Measuring Teacher Effectiveness through Hierarchical Linear Models: Exploring Predictors of Student Achievement and Truancy Science.gov (United States) Subedi, Bidya Raj; Reese, Nancy; Powell, Randy 2015-01-01 This study explored significant predictors of students' Grade Point Average (GPA) and truancy (days absent), and also determined teacher effectiveness based on the proportion of variance explained at the teacher-level model. We employed a two-level hierarchical linear model (HLM) with student and teacher data at level-1 and level-2 models, respectively.… 8. Using Hierarchical Linear Modelling to Examine Factors Predicting English Language Students' Reading Achievement Science.gov (United States) Fung, Karen; ElAtia, Samira 2015-01-01 Using Hierarchical Linear Modelling (HLM), this study aimed to identify factors such as ESL/ELL/EAL status that would predict students' reading performance in an English language arts exam taken across Canada. Using data from the 2007 administration of the Pan-Canadian Assessment Program (PCAP) along with the accompanying surveys for students and… 9. A Hierarchical Linear Model for Estimating Gender-Based Earnings Differentials. Science.gov (United States) Haberfield, Yitchak; Semyonov, Moshe; Addi, Audrey 1998-01-01 Estimates of gender earnings inequality in data from 116,431 Jewish workers were compared using a hierarchical linear model (HLM) and an ordinary least squares model. The HLM allows estimation of the extent to which earnings inequality depends on occupational characteristics. (SK) 10. Linear regression models for quantitative assessment of left ... African Journals Online (AJOL) Changes in left ventricular structures and function have been reported in cardiomyopathies. No prediction models have been established in this environment.
This study established regression models for prediction of left ventricular structures in normal subjects. A sample of normal subjects was drawn from a large urban ... 11. Linearity and Misspecification Tests for Vector Smooth Transition Regression Models DEFF Research Database (Denmark) Teräsvirta, Timo; Yang, Yukai The purpose of the paper is to derive Lagrange multiplier and Lagrange multiplier type specification and misspecification tests for vector smooth transition regression models. We report results from simulation studies in which the size and power properties of the proposed asymptotic tests in small... 12. Using multiple linear regression techniques to quantify carbon ... African Journals Online (AJOL) Fallow ecosystems provide a significant carbon stock that can be quantified for inclusion in the accounts of global carbon budgets. Process and statistical models of productivity, though useful, are often technically rigid as the conditions for their application are not easy to satisfy. Multiple regression techniques have been ... 13. Interpreting Multiple Linear Regression: A Guidebook of Variable Importance Science.gov (United States) Nathans, Laura L.; Oswald, Frederick L.; Nimon, Kim 2012-01-01 Multiple regression (MR) analyses are commonly employed in social science fields. It is also common for interpretation of results to typically reflect overreliance on beta weights, often resulting in very limited interpretations of variable importance. It appears that few researchers employ other methods to obtain a fuller understanding of what… 14. Testing for marginal linear effects in quantile regression KAUST Repository Wang, Huixia Judy 2017-10-23 The paper develops a new marginal testing procedure to detect significant predictors that are associated with the conditional quantiles of a scalar response. The idea is to fit the marginal quantile regression on each predictor one at a time, and then to base the test on the t-statistics that are associated with the most predictive predictors. A resampling method is devised to calibrate this test statistic, which has non-regular limiting behaviour due to the selection of the most predictive variables. Asymptotic validity of the procedure is established in a general quantile regression setting in which the marginal quantile regression models can be misspecified. Even though a fixed dimension is assumed to derive the asymptotic results, the test proposed is applicable and computationally feasible for large dimensional predictors. The method is more flexible than existing marginal screening test methods based on mean regression and has the added advantage of being robust against outliers in the response. The approach is illustrated by using an application to a human immunodeficiency virus drug resistance data set. 15. Testing for marginal linear effects in quantile regression KAUST Repository Wang, Huixia Judy; McKeague, Ian W.; Qian, Min 2017-01-01 The paper develops a new marginal testing procedure to detect significant predictors that are associated with the conditional quantiles of a scalar response. The idea is to fit the marginal quantile regression on each predictor one at a time, and then to base the test on the t-statistics that are associated with the most predictive predictors. A resampling method is devised to calibrate this test statistic, which has non-regular limiting behaviour due to the selection of the most predictive variables. 
Asymptotic validity of the procedure is established in a general quantile regression setting in which the marginal quantile regression models can be misspecified. Even though a fixed dimension is assumed to derive the asymptotic results, the test proposed is applicable and computationally feasible for large dimensional predictors. The method is more flexible than existing marginal screening test methods based on mean regression and has the added advantage of being robust against outliers in the response. The approach is illustrated by using an application to a human immunodeficiency virus drug resistance data set. 16. Generalised Partially Linear Regression with Misclassified Data and an Application to Labour Market Transitions DEFF Research Database (Denmark) Dlugosz, Stephan; Mammen, Enno; Wilke, Ralf We consider the semiparametric generalised linear regression model which has mainstream empirical models such as the (partially) linear mean regression, logistic and multinomial regression as special cases. As an extension to the related literature we allow a misclassified covariate to be interacted... 17. Comparison between Linear and Nonlinear Regression in a Laboratory Heat Transfer Experiment Science.gov (United States) Gonçalves, Carine Messias; Schwaab, Marcio; Pinto, José Carlos 2013-01-01 In order to interpret laboratory experimental data, undergraduate students are accustomed to performing linear regression through linearized versions of nonlinear models. However, the use of linearized models can lead to statistically biased parameter estimates. Even so, it is not an easy task to introduce nonlinear regression and show for the students… 18. Variable selection in multiple linear regression: The influence of ... African Journals Online (AJOL) provide an indication of whether the fit of the selected model improves or ... and calculate M(−i); quantify the influence of case i in terms of a function, f(•), of M and ..... [21] Venter JH & Snyman JLJ, 1997, Linear model selection based on risk ... 19. An introduction to using Bayesian linear regression with clinical data. Science.gov (United States) Baldwin, Scott A; Larson, Michael J 2017-11-01 Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.
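The core of Bayesian linear regression can be shown in a few lines when the noise variance is treated as known: the posterior over the coefficients is then available in closed form. A minimal numpy sketch (the ERN-anxiety setting is only mimicked with synthetic numbers; the prior and noise scales are assumptions for illustration):

    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.normal(size=40)                       # e.g. standardized trait anxiety
    y = -0.5 * x + rng.normal(0, 1.0, 40)         # e.g. ERN amplitude
    X = np.column_stack([np.ones_like(x), x])

    sigma2 = 1.0        # assumed known noise variance
    tau2 = 10.0         # vague Normal(0, tau2) prior on each coefficient
    # Conjugate posterior: Sigma_n = (X'X/sigma2 + I/tau2)^-1, mu_n = Sigma_n X'y/sigma2
    Sigma_n = np.linalg.inv(X.T @ X / sigma2 + np.eye(2) / tau2)
    mu_n = Sigma_n @ X.T @ y / sigma2

    sd = np.sqrt(np.diag(Sigma_n))
    for name, m, s in zip(["intercept", "slope"], mu_n, sd):
        print(f"{name}: posterior mean {m:.2f}, 95% credible interval "
              f"[{m - 1.96 * s:.2f}, {m + 1.96 * s:.2f}]")

In practice one would also place a prior on the noise variance and check the convergence of a sampler, as the article discusses; the closed-form case above is the smallest complete illustration.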
20. Electricity consumption forecasting in Italy using linear regression models Energy Technology Data Exchange (ETDEWEB) Bianco, Vincenzo; Manca, Oronzio; Nardini, Sergio [DIAM, Seconda Universita degli Studi di Napoli, Via Roma 29, 81031 Aversa (CE) (Italy) 2009-09-15 The influence of economic and demographic variables on the annual electricity consumption in Italy has been investigated with the intention to develop a long-term consumption forecasting model. The time period considered for the historical data is from 1970 to 2007. Different regression models were developed, using historical electricity consumption, gross domestic product (GDP), gross domestic product per capita (GDP per capita) and population. A first part of the paper considers the estimation of GDP, price and GDP per capita elasticities of domestic and non-domestic electricity consumption. The domestic and non-domestic short run price elasticities are found to be both approximately equal to -0.06, while long run elasticities are equal to -0.24 and -0.09, respectively. On the contrary, the elasticities of GDP and GDP per capita present higher values. In the second part of the paper, different regression models, based on co-integrated or stationary data, are presented. Different statistical tests are employed to check the validity of the proposed models. A comparison with national forecasts, based on complex econometric models, such as Markal-Time, was performed, showing that the developed regressions are congruent with the official projections, with deviations of ±1% for the best case and ±11% for the worst. These deviations are to be considered acceptable in relation to the time span taken into account. (author) 1. Electricity consumption forecasting in Italy using linear regression models International Nuclear Information System (INIS) Bianco, Vincenzo; Manca, Oronzio; Nardini, Sergio 2009-01-01 The influence of economic and demographic variables on the annual electricity consumption in Italy has been investigated with the intention to develop a long-term consumption forecasting model. The time period considered for the historical data is from 1970 to 2007. Different regression models were developed, using historical electricity consumption, gross domestic product (GDP), gross domestic product per capita (GDP per capita) and population. A first part of the paper considers the estimation of GDP, price and GDP per capita elasticities of domestic and non-domestic electricity consumption. The domestic and non-domestic short run price elasticities are found to be both approximately equal to -0.06, while long run elasticities are equal to -0.24 and -0.09, respectively. On the contrary, the elasticities of GDP and GDP per capita present higher values. In the second part of the paper, different regression models, based on co-integrated or stationary data, are presented. Different statistical tests are employed to check the validity of the proposed models. A comparison with national forecasts, based on complex econometric models, such as Markal-Time, was performed, showing that the developed regressions are congruent with the official projections, with deviations of ±1% for the best case and ±11% for the worst. These deviations are to be considered acceptable in relation to the time span taken into account. (author) 2.
Relative Importance for Linear Regression in R: The Package relaimpo OpenAIRE Groemping, Ulrike 2006-01-01 Relative importance is a topic that has seen a lot of interest in recent years, particularly in applied work. The R package relaimpo implements six different metrics for assessing relative importance of regressors in the linear model, two of which are recommended - averaging over orderings of regressors and a newly proposed metric (Feldman 2005) called pmvd. Apart from delivering the metrics themselves, relaimpo also provides (exploratory) bootstrap confidence intervals. This paper offers a b... 3. The microcomputer scientific software series 2: general linear model--regression. Science.gov (United States) Harold M. Rauscher 1983-01-01 The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a... 4. Least-Squares Linear Regression and Schrodinger's Cat: Perspectives on the Analysis of Regression Residuals. Science.gov (United States) Hecht, Jeffrey B. The analysis of regression residuals and detection of outliers are discussed, with emphasis on determining how deviant an individual data point must be to be considered an outlier and the impact that multiple suspected outlier data points have on the process of outlier determination and treatment. Only bivariate (one dependent and one independent)… 5. A generalized linear factor model approach to the hierarchical framework for responses and response times. Science.gov (United States) Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J 2015-05-01 We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model with only the mild restriction that there is no hierarchical model at the item side. This result is valuable as it enables all well-developed modelling tools and extensions that come with these methods. We show that the restriction we impose on the hierarchical model does not influence parameter recovery under realistic circumstances. In addition, we present two illustrative real data analyses to demonstrate the practical benefits of our approach. © 2014 The British Psychological Society. 6. Two biased estimation techniques in linear regression: Application to aircraft Science.gov (United States) 1988-01-01 Several ways for detection and assessment of collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit a damaging effect of collinearity are presented. These two techniques, the principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be a promising tool for collinearity evaluation. 
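That eigensystem view of collinearity is easy to reproduce: near-dependence among regressors shows up as a huge condition index, and a biased estimator stabilizes the coefficients. A short Python sketch; note that ridge regression is used here only as a generic stand-in for a biased estimator, whereas the paper itself used principal components regression and mixed estimation:

    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge

    rng = np.random.default_rng(6)
    # Two nearly collinear regressors, as can arise among flight-test signals.
    x1 = rng.normal(size=100)
    x2 = x1 + 0.01 * rng.normal(size=100)
    X = np.column_stack([x1, x2])
    y = x1 + x2 + 0.1 * rng.normal(size=100)

    # Eigensystem analysis: a large condition index signals damaging collinearity.
    s = np.linalg.svd(X / np.linalg.norm(X, axis=0), compute_uv=False)
    print("condition index:", round(float(s[0] / s[-1])))

    print("OLS coefficients:  ", LinearRegression().fit(X, y).coef_.round(2))
    print("ridge coefficients:", Ridge(alpha=1.0).fit(X, y).coef_.round(2))

The OLS coefficients tend to swing to large offsetting values while the ridge estimates stay near the stable sum, which is the instability that biased estimation is meant to tame.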
The biased estimators had far better accuracy than the results from the ordinary least squares technique. 7. Detection of epistatic effects with logic regression and a classical linear regression model. Science.gov (United States) Malina, Magdalena; Ickstadt, Katja; Schwender, Holger; Posch, Martin; Bogdan, Małgorzata 2014-02-01 To locate multiple interacting quantitative trait loci (QTL) influencing a trait of interest within experimental populations, methods such as Cockerham's model are usually applied. Within this framework, interactions are understood as the part of the joint effect of several genes which cannot be explained as the sum of their additive effects. However, if a change in the phenotype (such as disease) is caused by Boolean combinations of genotypes of several QTLs, this Cockerham's approach is often not capable of identifying them properly. To detect such interactions more efficiently, we propose a logic regression framework. Even though with the logic regression approach a larger number of models has to be considered (requiring more stringent multiple testing correction), the efficient representation of higher-order logic interactions in logic regression models leads to a significant increase of power to detect such interactions as compared to Cockerham's approach. The increase in power is demonstrated analytically for a simple two-way interaction model and illustrated in more complex settings with a simulation study and real data analysis. 8. A simplified procedure of linear regression in a preliminary analysis Directory of Open Access Journals (Sweden) Silvia Facchinetti 2013-05-01 Full Text Available The analysis of a large statistical data-set can be guided by the study of a particularly interesting variable Y – the regressand – and an explicative variable X, chosen among the remaining variables, conjointly observed. The study gives a simplified procedure to obtain the functional link of the variables y = y(x) by a partition of the data-set into m subsets, in which the observations are synthesized by location indices (mean or median) of X and Y. Polynomial models for y(x) of order r are considered to verify the characteristics of the given procedure, in particular we assume r = 1 and 2. The distributions of the parameter estimators are obtained by simulation, when the fitting is done for m = r + 1. Comparisons of the results, in terms of distribution and efficiency, are made with the results obtained by the ordinary least squares method. The study also gives some considerations on the consistency of the estimated parameters obtained by the given procedure. 9. Identifying predictors of physics item difficulty: A linear regression approach Science.gov (United States) Mesic, Vanes; Muratovic, Hasnija 2011-06-01 Large-scale assessments of student achievement in physics are often approached with an intention to discriminate students based on the attained level of their physics competencies. Therefore, for purposes of test design, it is important that items display an acceptable discriminatory behavior. To that end, it is recommended to avoid extraordinarily difficult and very easy items. Knowing the factors that influence physics item difficulty makes it possible to model the item difficulty even before the first pilot study is conducted. Thus, by identifying predictors of physics item difficulty, we can improve the test-design process.
Furthermore, we get additional qualitative feedback regarding the basic aspects of student cognitive achievement in physics that are directly responsible for the obtained, quantitative test results. In this study, we conducted a secondary analysis of data that came from two large-scale assessments of student physics achievement at the end of compulsory education in Bosnia and Herzegovina. Foremost, we explored the concept of “physics competence” and performed a content analysis of 123 physics items that were included within the above-mentioned assessments. Thereafter, an item database was created. Items were described by variables which reflect some basic cognitive aspects of physics competence. For each of the assessments, Rasch item difficulties were calculated in separate analyses. In order to make the item difficulties from different assessments comparable, a virtual test equating procedure had to be implemented. Finally, a regression model of physics item difficulty was created. It has been shown that 61.2% of item difficulty variance can be explained by factors which reflect the automaticity, complexity, and modality of the knowledge structure that is relevant for generating the most probable correct solution, as well as by the divergence of required thinking and interference effects between intuitive and formal physics knowledge 10. Identifying predictors of physics item difficulty: A linear regression approach Directory of Open Access Journals (Sweden) Hasnija Muratovic 2011-06-01 Full Text Available Large-scale assessments of student achievement in physics are often approached with an intention to discriminate students based on the attained level of their physics competencies. Therefore, for purposes of test design, it is important that items display an acceptable discriminatory behavior. To that end, it is recommended to avoid extraordinary difficult and very easy items. Knowing the factors that influence physics item difficulty makes it possible to model the item difficulty even before the first pilot study is conducted. Thus, by identifying predictors of physics item difficulty, we can improve the test-design process. Furthermore, we get additional qualitative feedback regarding the basic aspects of student cognitive achievement in physics that are directly responsible for the obtained, quantitative test results. In this study, we conducted a secondary analysis of data that came from two large-scale assessments of student physics achievement at the end of compulsory education in Bosnia and Herzegovina. Foremost, we explored the concept of “physics competence” and performed a content analysis of 123 physics items that were included within the above-mentioned assessments. Thereafter, an item database was created. Items were described by variables which reflect some basic cognitive aspects of physics competence. For each of the assessments, Rasch item difficulties were calculated in separate analyses. In order to make the item difficulties from different assessments comparable, a virtual test equating procedure had to be implemented. Finally, a regression model of physics item difficulty was created. It has been shown that 61.2% of item difficulty variance can be explained by factors which reflect the automaticity, complexity, and modality of the knowledge structure that is relevant for generating the most probable correct solution, as well as by the divergence of required thinking and interference effects between intuitive and formal 11. 
Multivariate Linear Regression and CART Regression Analysis of TBM Performance at Abu Hamour Phase-I Tunnel Science.gov (United States) Jakubowski, J.; Stypulkowski, J. B.; Bernardeau, F. G. 2017-12-01 The first phase of the Abu Hamour drainage and storm tunnel was completed in early 2017. The 9.5 km long, 3.7 m diameter tunnel was excavated with two Earth Pressure Balance (EPB) Tunnel Boring Machines from Herrenknecht. TBM operation processes were monitored and recorded by Data Acquisition and Evaluation System. The authors coupled collected TBM drive data with available information on rock mass properties, cleansed, completed with secondary variables and aggregated by weeks and shifts. Correlations and descriptive statistics charts were examined. Multivariate Linear Regression and CART regression tree models linking TBM penetration rate (PR), penetration per revolution (PPR) and field penetration index (FPI) with TBM operational and geotechnical characteristics were performed for the conditions of the weak/soft rock of Doha. Both regression methods are interpretable and the data were screened with different computational approaches allowing enriched insight. The primary goal of the analysis was to investigate empirical relations between multiple explanatory and responding variables, to search for best subsets of explanatory variables and to evaluate the strength of linear and non-linear relations. For each of the penetration indices, a predictive model coupling both regression methods was built and validated. The resultant models appeared to be stronger than constituent ones and indicated an opportunity for more accurate and robust TBM performance predictions. 12. An Analysis of Turkey's PISA 2015 Results Using Two-Level Hierarchical Linear Modelling Science.gov (United States) 2017-01-01 In the field of education, most of the data collected are multi-level structured. Cities, city based schools, school based classes and finally students in the classrooms constitute a hierarchical structure. Hierarchical linear models give more accurate results compared to standard models when the data set has a structure going far as individuals,… 13. Hierarchical cluster-based partial least squares regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models. Science.gov (United States) Tøndel, Kristin; Indahl, Ulf G; Gjuvsland, Arne B; Vik, Jon Olav; Hunter, Peter; Omholt, Stig W; Martens, Harald 2011-06-01 Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. 
We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. HC-PLSR is a promising approach for 14. Hierarchical Cluster-based Partial Least Squares Regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models Directory of Open Access Journals (Sweden) Omholt Stig W 2011-06-01 Full Text Available Abstract Background Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Results Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback 15. A distributed-memory hierarchical solver for general sparse linear systems Energy Technology Data Exchange (ETDEWEB) Chen, Chao [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering; Pouransari, Hadi [Stanford Univ., CA (United States).
Dept. of Mechanical Engineering; Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Boman, Erik G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Darve, Eric [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering and Dept. of Mechanical Engineering 2017-12-20 We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm. 16. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat. Science.gov (United States) Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne 2012-12-01 In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. 17. SOME STATISTICAL ISSUES RELATED TO MULTIPLE LINEAR REGRESSION MODELING OF BEACH BACTERIA CONCENTRATIONS Science.gov (United States) As a fast and effective technique, the multiple linear regression (MLR) method has been widely used in modeling and prediction of beach bacteria concentrations. Among previous works on this subject, however, several issues were insufficiently or inconsistently addressed. Those is... 18. Predicting Fuel Ignition Quality Using 1H NMR Spectroscopy and Multiple Linear Regression KAUST Repository Abdul Jameel, Abdul Gani; Naser, Nimal; Emwas, Abdul-Hamid M.; Dooley, Stephen; Sarathy, Mani 2016-01-01 An improved model for the prediction of ignition quality of hydrocarbon fuels has been developed using 1H nuclear magnetic resonance (NMR) spectroscopy and multiple linear regression (MLR) modeling. Cetane number (CN) and derived cetane number (DCN 19. 
How to deal with continuous and dichotomic outcomes in epidemiological research: linear and logistic regression analyses NARCIS (Netherlands) Tripepi, Giovanni; Jager, Kitty J.; Stel, Vianda S.; Dekker, Friedo W.; Zoccali, Carmine 2011-01-01 Because of some limitations of stratification methods, epidemiologists frequently use multiple linear and logistic regression analyses to address specific epidemiological questions. If the dependent variable is a continuous one (for example, systolic pressure and serum creatinine), the researcher 20. Analysis of γ spectra in airborne radioactivity measurements using multiple linear regressions International Nuclear Information System (INIS) Bao Min; Shi Quanlin; Zhang Jiamei 2004-01-01 This paper describes the calculation of the net peak counts of the nuclide 137Cs at 662 keV in γ spectra from airborne radioactivity measurements using multiple linear regressions. A mathematical model is established by analyzing every factor that contributes to the Cs peak counts in the spectra, and a multiple linear regression function is constructed. The calculation adopts stepwise regression, and the insignificant factors are eliminated by an F-test. The regression results and their uncertainty are calculated using least squares estimation, from which the Cs net peak counts and their uncertainty can be obtained. The analysis results for an experimental spectrum are displayed. The influence of energy shift and energy resolution on the result is discussed. In comparison with the stripping spectra method, the multiple linear regression method needs no stripping ratios, the result depends only on the counts in the Cs peak, and the calculated uncertainty is reduced. (authors) 1. Do clinical and translational science graduate students understand linear regression? Development and early validation of the REGRESS quiz. Science.gov (United States) Enders, Felicity 2013-12-01 Although regression is widely used for reading and publishing in the medical literature, no instruments were previously available to assess students' understanding. The goal of this study was to design and assess such an instrument for graduate students in Clinical and Translational Science and Public Health. A 27-item REsearch on Global Regression Expectations in StatisticS (REGRESS) quiz was developed through an iterative process. Consenting students taking a course on linear regression in a Clinical and Translational Science program completed the quiz pre- and postcourse. Student results were compared to practicing statisticians with a master's or doctoral degree in statistics or a closely related field. Fifty-two students responded precourse, 59 postcourse, and 22 practicing statisticians completed the quiz. The mean (SD) score was 9.3 (4.3) for students precourse and 19.0 (3.5) postcourse (P < 0.001). The REGRESS quiz was internally reliable (Cronbach's alpha 0.89). The initial validation is quite promising with statistically significant and meaningful differences across time and study populations. Further work is needed to validate the quiz across multiple institutions. © 2013 Wiley Periodicals, Inc. 2. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models Directory of Open Access Journals (Sweden) Drzewiecki Wojciech 2016-12-01 In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas both for accuracy of imperviousness coverage evaluation in individual points in time and accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques.
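Ensembling heterogeneous regressors, as in the study above, usually amounts to training several different model families and combining their predictions. A compact scikit-learn sketch of the idea, with synthetic data standing in for spectral bands and imperviousness fractions; the model roster is a small subset of the nine compared:

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.svm import SVR

    X, y = make_regression(n_samples=400, n_features=6, noise=10.0, random_state=7)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=7)

    models = {
        "random forest": RandomForestRegressor(n_estimators=100, random_state=7),
        "k-nn": KNeighborsRegressor(n_neighbors=7),
        "svr (rbf)": SVR(kernel="rbf", C=100.0),
    }
    preds = {}
    for name, model in models.items():
        preds[name] = model.fit(Xtr, ytr).predict(Xte)
        print(name, "RMSE:", round(float(np.sqrt(np.mean((preds[name] - yte) ** 2))), 1))

    # Heterogeneous ensemble: simple average of the individual predictions.
    ens = np.mean(list(preds.values()), axis=0)
    print("ensemble RMSE:", round(float(np.sqrt(np.mean((ens - yte) ** 2))), 1))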
3. A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield Science.gov (United States) Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan 2018-04-01 In this paper, we propose a hybrid model which is a combination of a multiple linear regression model and the fuzzy c-means method. This research involved a relationship between 20 variates of the topsoil that were analyzed prior to planting of paddy at standard fertilizer rates. Data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model and a combination of the multiple linear regression model and the fuzzy c-means method. Analyses of normality and multicollinearity indicate that the data are normally distributed, without multicollinearity among the independent variables. The fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the plain multiple linear regression model, with a lower mean square error.
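The hybrid just described can be sketched in a few dozen lines: cluster the observations with fuzzy c-means, then fit a membership-weighted least squares regression within each cluster. A hedged numpy implementation on synthetic two-regime data; the cluster count, fuzzifier m and the data-generating regimes are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(8)
    # Two latent regimes with different regression planes.
    X = rng.normal(size=(200, 2))
    regime = (X[:, 0] > 0).astype(int)
    y = np.where(regime == 0, 1 + 2 * X[:, 1], 5 - X[:, 1]) + rng.normal(0, 0.2, 200)

    def fuzzy_cmeans(Z, c=2, m=2.0, iters=100):
        """Plain fuzzy c-means; returns the n-by-c membership matrix."""
        U = rng.dirichlet(np.ones(c), size=len(Z))
        for _ in range(iters):
            W = U ** m
            centers = (W.T @ Z) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            U = d ** (-2 / (m - 1))
            U /= U.sum(axis=1, keepdims=True)
        return U

    U = fuzzy_cmeans(np.column_stack([X, y]))       # cluster in joint (X, y) space
    A = np.column_stack([np.ones(len(X)), X])
    for k in range(U.shape[1]):
        WA = A * U[:, k][:, None]
        beta = np.linalg.solve(A.T @ WA, WA.T @ y)  # membership-weighted least squares
        print(f"cluster {k}: coefficients {beta.round(2)}")

Each cluster recovers its own regression plane, which is what lets the hybrid beat a single global fit when the data mix regimes.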
5. Transmission of linear regression patterns between time series: from relationship in time series to complex networks. Science.gov (United States) Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui 2014-07-01 The linear regression parameters between two time series can differ under different lengths of the observation period. If we study the whole period through a sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We present a simple and efficient computational scheme: a linear regression pattern transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequencies of the transmissions. The major patterns, the distance, and the medium in the process of the transmission can be captured. The statistical results of weighted out-degree and betweenness centrality are mapped on timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.

6. The number of subjects per variable required in linear regression analyses NARCIS (Netherlands) P.C. Austin (Peter); E.W. Steyerberg (Ewout) 2015-01-01 Objectives: To determine the number of independent variables that can be included in a linear regression model. Study Design and Setting: We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression

7. Tightness of M-estimators for multiple linear regression in time series DEFF Research Database (Denmark) Johansen, Søren; Nielsen, Bent We show tightness of a general M-estimator for multiple linear regression in time series. The positive criterion function for the M-estimator is assumed lower semi-continuous and sufficiently large for large argument: particular cases are the Huber-skip and quantile regression. Tightness requires...

8. Piecewise linear regression techniques to analyze the timing of head coach dismissals in Dutch soccer clubs NARCIS (Netherlands) Schryver, T. de; Eisinga, R. 2010-01-01 The key question in research on dismissals of head coaches in sports clubs is not whether they should happen but when they will happen. This paper applies piecewise linear regression to advance our understanding of the timing of head coach dismissals. Essentially, the regression sacrifices degrees

9. Investigation of linear regression of EPR dosimetric signal of the man tooth enamel International Nuclear Information System (INIS) Pivovarov, S.P.; Rukhin, A.B.; Zhakparov, R.K.; Vasilevskaya, L.A. 2001-01-01 The experimental dose responses of the EPR radiation signal in human tooth enamel samples from three donors of different ages, up to doses of 1350 Gy, are examined. Linear regression is applicable to all of them. Most of the considerable errors leading to apparent non-linearity are eliminated. (author)
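As an illustrative aside to record 5 above (the pattern-transmission algorithm): the sketch below gives one hedged reading of the idea, with an assumed window size and assumed pattern rules that use only the slope sign and a significance flag as the pattern definition.

```python
# Minimal sketch of a linear regression pattern transmission network:
# classify each sliding window's regression into a discrete pattern (node)
# and count transitions between adjacent windows as weighted directed edges.
import numpy as np
from scipy.stats import linregress
from collections import Counter

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=500))
y = 0.5 * x + np.cumsum(rng.normal(size=500))

window = 30                                  # assumed window length
patterns = []
for start in range(len(x) - window):
    res = linregress(x[start:start + window], y[start:start + window])
    sign = "pos" if res.slope >= 0 else "neg"
    sig = "sig" if res.pvalue < 0.05 else "ns"
    patterns.append(f"{sign}/{sig}")         # node label, e.g. "pos/sig"

# Edge weights are the frequencies of transitions between adjacent patterns.
edges = Counter(zip(patterns[:-1], patterns[1:]))
for (src, dst), weight in sorted(edges.items()):
    print(f"{src} -> {dst}: {weight}")
```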
10. Genomic prediction based on data from three layer lines using non-linear regression models NARCIS (Netherlands) Huang, H.; Windig, J.J.; Vereijken, A.; Calus, M.P.L. 2014-01-01 Background - Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. Methods - In an attempt to alleviate

11. Multiple linear regression and regression with time series error models in forecasting PM10 concentrations in Peninsular Malaysia. Science.gov (United States) Ng, Kar Yong; Awang, Norhashidah 2018-01-06 Frequent haze occurrences in Malaysia have made the management of PM10 (particulate matter with aerodynamic diameter less than 10 μm) pollution a critical task. This requires knowledge of the factors associated with PM10 variation and good forecasts of PM10 concentrations. Hence, this paper demonstrates the prediction of 1-day-ahead daily average PM10 concentrations based on predictor variables including meteorological parameters and gaseous pollutants. Three different models were built: a multiple linear regression (MLR) model with lagged predictor variables (MLR1), an MLR model with lagged predictor variables and PM10 concentrations (MLR2), and a regression with time series error (RTSE) model. The findings revealed that humidity, temperature, wind speed, wind direction, carbon monoxide and ozone were the main factors explaining the PM10 variation in Peninsular Malaysia. Comparison among the three models showed that the MLR2 model was on a par with the RTSE model in terms of forecasting accuracy, while the MLR1 model performed worst.

12. Modeling Fire Occurrence at the City Scale: A Comparison between Geographically Weighted Regression and Global Linear Regression. Science.gov (United States) Song, Chao; Kwan, Mei-Po; Zhu, Jiping 2017-04-08 An increasing number of fires are occurring with the rapid development of cities, resulting in increased risk for human beings and the environment. This study compares geographically weighted regression-based models, including geographically weighted regression (GWR) and geographically and temporally weighted regression (GTWR), which integrates spatial and temporal effects, with global linear regression models (LM) for modeling fire risk at the city scale. The results show that road density and the spatial distribution of enterprises have the strongest influences on fire risk, which implies that we should focus on areas where roads and enterprises are densely clustered. In addition, locations with a large number of enterprises have fewer fire ignition records, probably because of strict management and prevention measures. A changing number of significant variables across space indicates that heterogeneity mainly exists in the northern and eastern rural and suburban areas of Hefei city, where human-related facilities or road construction are only clustered in the city sub-centers. GTWR can capture small changes in the spatiotemporal heterogeneity of the variables while GWR and LM cannot. An approach that integrates space and time enables us to better understand the dynamic changes in fire risk. Thus governments can use the results to manage fire safety at the city scale.
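As an illustrative aside to the record above: geographically weighted regression can be sketched in a few lines of numpy as locally weighted least squares with a spatial kernel. The bandwidth, kernel, and data below are assumptions for the demonstration, not the study's configuration.

```python
# Minimal GWR sketch: one weighted least-squares fit per target location,
# with weights decaying in distance (Gaussian kernel). Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 300
coords = rng.uniform(0, 10, size=(n, 2))             # point locations
road_density = rng.uniform(0, 1, n)
beta = 1.0 + 0.3 * coords[:, 0]                      # effect varies eastward
fires = beta * road_density + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), road_density])
bandwidth = 2.0                                      # assumed kernel bandwidth

def gwr_coefs(i):
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-(d / bandwidth) ** 2)                # Gaussian spatial weights
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ fires)

west = int(np.argmin(coords[:, 0]))
east = int(np.argmax(coords[:, 0]))
print("local road-density slope, west:", round(gwr_coefs(west)[1], 2))
print("local road-density slope, east:", round(gwr_coefs(east)[1], 2))
```

A global linear regression would return a single compromise slope here, which is exactly the spatial heterogeneity the record's GWR and GTWR models are meant to expose.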
13. Principal Covariates Clusterwise Regression (PCCR): Accounting for multicollinearity and population heterogeneity in hierarchically organized data. NARCIS (Netherlands) Wilderjans, Tom F.; Van de Gaer, E.; Kiers, H.A.L.; Van Mechelen, Iven; Ceulemans, Eva In the behavioral sciences, many research questions pertain to a regression problem in that one wants to predict a criterion on the basis of a number of predictors. Although in many cases ordinary least squares regression will suffice, sometimes the prediction problem is more challenging, for three

14. OPLS statistical model versus linear regression to assess sonographic predictors of stroke prognosis. Science.gov (United States) 2012-01-01 The objective of the present study was to assess the comparable applicability of the orthogonal projections to latent structures (OPLS) statistical model vs traditional linear regression in order to investigate the role of transcranial Doppler (TCD) sonography in predicting ischemic stroke prognosis. The study was conducted on 116 ischemic stroke patients admitted to a specialty neurology ward. The Unified Neurological Stroke Scale was used once for clinical evaluation in the first week of admission and again six months later. All data were primarily analyzed using simple linear regression and later considered for multivariate analysis using PLS/OPLS models through the SIMCA P+12 statistical software package. The linear regression analysis results used for the identification of TCD predictors of stroke prognosis were confirmed through the OPLS modeling technique. Moreover, in comparison to linear regression, the OPLS model appeared to have higher sensitivity in detecting the predictors of ischemic stroke prognosis and detected several more predictors. Applying the OPLS model made it possible to use both single TCD measures/indicators and arbitrarily dichotomized measures of TCD single vessel involvement as well as the overall TCD result. In conclusion, the authors recommend PLS/OPLS methods as complementary rather than alternative to the available classical regression models such as linear regression.

15. Diagnostics for generalized linear hierarchical models in network meta-analysis. Science.gov (United States) Zhao, Hong; Hodges, James S; Carlin, Bradley P 2017-09-01 Network meta-analysis (NMA) combines direct and indirect evidence comparing more than 2 treatments. Inconsistency arises when these 2 information sources differ. Previous work focuses on inconsistency detection, but little has been done on how to proceed after identifying inconsistency. The key issue is whether inconsistency changes an NMA's substantive conclusions. In this paper, we examine such discrepancies from a diagnostic point of view. Our methods seek to detect influential and outlying observations in NMA at a trial-by-arm level. These observations may have a large effect on the parameter estimates in NMA, or they may deviate markedly from other observations. We develop formal diagnostics for a Bayesian hierarchical model to check the effect of deleting any observation. Diagnostics are specified for generalized linear hierarchical NMA models and investigated for both published and simulated datasets. Results from our example dataset using either contrast- or arm-based models and from the simulated datasets indicate that the sources of inconsistency in NMA tend not to be influential, though results from the example dataset suggest that they are likely to be outliers. This mimics a familiar result from linear model theory, in which outliers with low leverage are not influential. Future extensions include incorporating baseline covariates and individual-level patient data. Copyright © 2017 John Wiley & Sons, Ltd.
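As a hedged aside to the record above: in the simpler ordinary-regression setting, the same observation-deletion idea is available off the shelf. The sketch below uses frequentist case-deletion diagnostics (Cook's distance, studentized residuals) as an analogue of the record's Bayesian deletion diagnostics; the data are synthetic.

```python
# Minimal sketch of case-deletion diagnostics for a linear regression.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence

rng = np.random.default_rng(7)
x = rng.normal(size=50)
y = 2.0 + 1.5 * x + rng.normal(scale=0.5, size=50)
y[0] += 6.0                                   # plant one outlying observation

fit = sm.OLS(y, sm.add_constant(x)).fit()
infl = OLSInfluence(fit)

cooks_d = infl.cooks_distance[0]              # per-case influence measure
print("most influential case:", int(np.argmax(cooks_d)))
print("studentized residual of case 0:",
      round(infl.resid_studentized_external[0], 2))
```

An outlier with low leverage, as in the record's conclusion, shows up with a large studentized residual but only a modest Cook's distance.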
16. Fuzzy Linear Regression for the Time Series Data which is Fuzzified with SMRGT Method Directory of Open Access Journals (Sweden) Seçil YALAZ 2016-10-01 Full Text Available Our work on regression and classification offers a new contribution to the analysis of time series, which are used in many areas. When the methods used to correct autocorrelation in time series regression fail to converge, the analysis either fails or forces a change in the model's degree, which may not be desirable in every situation. In our study, recommended for such situations, time series data were fuzzified using the simple membership function and fuzzy rule generation technique (SMRGT), and a predictive equation was created by applying the fuzzy least squares regression (FLSR) method, a simple linear regression method, to these data. Although SMRGT succeeds in determining flow discharge in open channels and can be used confidently for flow discharge modeling in open canals, as well as in pipe flow with some modifications, it was not known whether this technique succeeds in fuzzy linear regression modeling. Therefore, to address the lack of such a model, a new hybrid model is described within this study. In conclusion, to demonstrate the method's efficiency, classical linear regression for time series data and linear regression for fuzzy time series data were applied to two different data sets, and the performances of the two approaches were compared using different measures.

17. An improved multiple linear regression and data analysis computer program package Science.gov (United States) Sidik, S. M. 1972-01-01 NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.
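As an illustrative aside to the record above: the kind of complete regression analysis NEWRAP performed (coefficients, t-statistics and probability levels, an analysis-of-variance table, residuals for plotting) can be reproduced today in a few lines. Variable names and data below are illustrative only.

```python
# Minimal sketch of a "complete" multiple linear regression analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
df = pd.DataFrame({"x1": rng.normal(size=80), "x2": rng.normal(size=80)})
df["y"] = 1.0 + 2.0 * df["x1"] - 0.5 * df["x2"] + rng.normal(size=80)

fit = smf.ols("y ~ x1 + x2", data=df).fit()
print(fit.summary())      # coefficients, t-statistics, p-values, R-squared
print(anova_lm(fit))      # analysis-of-variance table
residuals = fit.resid     # ready for residual-versus-variable cross plots
```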
18. A Monte Carlo simulation study comparing linear regression, beta regression, variable-dispersion beta regression and fractional logit regression at recovering average difference measures in a two sample design. Science.gov (United States) Meaney, Christopher; Moineddin, Rahim 2014-01-24 In biomedical research, response variables are often encountered which have bounded support on the open unit interval--(0,1). Traditionally, researchers have attempted to estimate covariate effects on these types of response data using linear regression. Alternative modelling strategies may include: beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model to those of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models. In the Monte Carlo experiment we assume a simple two sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided. If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25), linear regression has superior type-1 error rates compared to the other models. Small-sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar results were observed if the

19. A Logistic Regression Model with a Hierarchical Random Error Term for Analyzing the Utilization of Public Transport Directory of Open Access Journals (Sweden) Chong Wei 2015-01-01 Full Text Available Logistic regression models have been widely used in previous studies to analyze public transport utilization. These studies have shown travel time to be an indispensable variable for such analysis and usually consider it to be a deterministic variable. This formulation does not allow us to capture travelers' perception error regarding travel time, and recent studies have indicated that this error can have a significant effect on modal choice behavior. In this study, we propose a logistic regression model with a hierarchical random error term. The proposed model adds a new random error term for the travel time variable. This term structure enables us to investigate travelers' perception error regarding travel time from a given choice behavior dataset. We also propose an extended model that allows constraining the sign of this error in the model. We develop two Gibbs samplers to estimate the basic hierarchical model and the extended model. The performance of the proposed models is examined using a well-known dataset.

20. Investigating the effects of climate variations on bacillary dysentery incidence in northeast China using ridge regression and hierarchical cluster analysis Directory of Open Access Journals (Sweden) Guo Junqiao 2008-09-01 Full Text Available Abstract Background The effects of climate variations on bacillary dysentery incidence have gained increasing attention in recent years. However, the multi-collinearity among meteorological factors affects the accuracy of their correlation with bacillary dysentery incidence. Methods As a remedy, a modified method combining ridge regression and hierarchical cluster analysis was proposed for investigating the effects of climate variations on bacillary dysentery incidence in northeast China. Results All weather indicators (temperatures, precipitation, evaporation and relative humidity) showed a positive correlation with the monthly incidence of bacillary dysentery, while air pressure had a negative correlation with the incidence.
Ridge regression and hierarchical cluster analysis showed that during 1987–1996, relative humidity, temperatures and air pressure affected the transmission of bacillary dysentery. During this period, all meteorological factors were divided into three categories: relative humidity and precipitation belonged to one class, temperature indexes and evaporation belonged to another class, and air pressure formed the third class. Conclusion Meteorological factors have affected the transmission of bacillary dysentery in northeast China. Bacillary dysentery prevention and control would benefit from giving more consideration to local climate variations.

1. Use of multiple linear regression and logistic regression models to investigate changes in birthweight for term singleton infants in Scotland. Science.gov (United States) Bonellie, Sandra R 2012-10-01 To illustrate the use of regression and logistic regression models to investigate changes over time in the size of babies, particularly in relation to social deprivation, age of the mother and smoking. Mean birthweight has been found to be increasing in many countries in recent years, but there is still a group of babies who are born with low birthweights. Population-based retrospective cohort study. Multiple linear regression and logistic regression models are used to analyse data on term 'singleton births' from Scottish hospitals between 1994-2003. Mothers who smoke are shown to give birth to lighter babies on average, a difference of approximately 0.57 standard deviations lower (95% confidence interval 0.55-0.58) when adjusted for sex and parity. These mothers are also more likely to have babies that are low birthweight (odds ratio 3.46, 95% confidence interval 3.30-3.63) compared with non-smokers. Low birthweight is 30% more likely where the mother lives in the most deprived areas compared with the least deprived (odds ratio 1.30, 95% confidence interval 1.21-1.40). Smoking during pregnancy is shown to have a detrimental effect on the size of infants at birth. This effect explains some, though not all, of the observed socioeconomic differences in birthweight. It also explains much of the observed birthweight differences by the age of the mother. Identifying mothers at greater risk of having a low birthweight baby has important implications for the care and advice this group receives. © 2012 Blackwell Publishing Ltd.

2. Treating experimental data of inverse kinetic method by unitary linear regression analysis International Nuclear Information System (INIS) Zhao Yusen; Chen Xiaoliang 2009-01-01 The theory of treating experimental data from the inverse kinetic method by unitary linear regression analysis is described. Not only the reactivity, but also the effective neutron source intensity can be calculated by this method. Computer code was compiled based on the inverse kinetic method and unitary linear regression analysis. The data of the zero power facility BFS-1 in Russia were processed and the results were compared. The results show that the reactivity and the effective neutron source intensity can be obtained correctly by treating experimental data from the inverse kinetic method using unitary linear regression analysis, and the precision of reactivity measurement is improved. The central element efficiency can be calculated by using the reactivity. The result also shows that the effect on reactivity measurement caused by an external neutron source should be considered when the reactor power is low and the intensity of the external neutron source is strong. (authors)

3.
A primer for biomedical scientists on how to execute model II linear regression analysis. Science.gov (United States) Ludbrook, John 2012-04-01 1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd. 4. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines. Science.gov (United States) Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William 2016-01-01 Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. 
Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 when using linear mixed-effect models with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercept and the slope, and residual autocorrelation was well modeled with a first-order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19

5. The Relationship between Economic Growth and Money Laundering – a Linear Regression Model Directory of Open Access Journals (Sweden) Daniel Rece 2009-09-01 Full Text Available This study provides an overview of the relationship between economic growth and money laundering modeled by a least squares function. The report statistically analyzes data collected from the USA, Russia, Romania and eleven other European countries, rendering a linear regression model. The study illustrates that 23.7% of the total variance in the regressand (level of money laundering) is "explained" by the linear regression model. In our opinion, this model will provide critical auxiliary judgment and decision support for anti-money laundering service systems.

6. Regressão múltipla stepwise e hierárquica em Psicologia Organizacional: aplicações, problemas e soluções Stepwise and hierarchical multiple regression in organizational psychology: Applications, problems and solutions Directory of Open Access Journals (Sweden) 2002-01-01

7. LIMO EEG: a toolbox for hierarchical LInear MOdeling of ElectroEncephaloGraphic data. Science.gov (United States) Pernet, Cyril R; Chauveau, Nicolas; Gaspar, Carl; Rousselet, Guillaume A 2011-01-01 Magnetic- and electric-evoked brain responses have traditionally been analyzed by comparing the peaks or mean amplitudes of signals from selected channels and averaged across trials. More recently, tools have been developed to investigate single trial response variability (e.g., EEGLAB) and to test differences between averaged evoked responses over the entire scalp and time dimensions (e.g., SPM, Fieldtrip). LIMO EEG is a Matlab toolbox (EEGLAB compatible) to analyse evoked responses over all space and time dimensions, while accounting for single trial variability using a simple hierarchical linear modelling of the data. In addition, LIMO EEG provides robust parametric tests, therefore providing a new and complementary tool in the analysis of neural evoked responses.

8. Using hierarchical linear growth models to evaluate protective mechanisms that mediate science achievement Science.gov (United States) von Secker, Clare Elaine The study of students at risk is a major topic of science education policy and discussion. Much research has focused on describing conditions and problems associated with the statistical risk of low science achievement among individuals who are members of groups characterized by problems such as poverty and social disadvantage. But outcomes attributed to these factors do not explain the nature and extent of mechanisms that account for differences in performance among individuals at risk.
There is ample theoretical and empirical evidence that demographic differences should be conceptualized as social contexts, or collections of variables, that alter the psychological significance and social demands of life events, and affect subsequent relationships between risk and resilience. The hierarchical linear growth models used in this dissertation provide greater specification of the role of social context and the protective effects of attitude, expectations, parenting practices, peer influences, and learning opportunities on science achievement. While the individual influences of these protective factors on science achievement were small, their cumulative effect was substantial. Meta-analysis conducted on the effects associated with psychological and environmental processes that mediate risk mechanisms in sixteen social contexts revealed twenty-two significant differences between groups of students. Positive attitudes, high expectations, and more intense science course-taking had positive effects on achievement of all students, although these factors were not equally protective in all social contexts. In general, effects associated with authoritative parenting and peer influences were negative, regardless of social context. An evaluation comparing the performance and stability of hierarchical linear growth models with traditional repeated measures models is included as well. 9. The number of subjects per variable required in linear regression analyses. Science.gov (United States) Austin, Peter C; Steyerberg, Ewout W 2015-06-01 10. Using the classical linear regression model in analysis of the dependences of conveyor belt life Directory of Open Access Journals (Sweden) Miriam Andrejiová 2013-12-01 Full Text Available The paper deals with the classical linear regression model of the dependence of conveyor belt life on some selected parameters: thickness of paint layer, width and length of the belt, conveyor speed and quantity of transported material. The first part of the article is about regression model design, point and interval estimation of parameters, verification of statistical significance of the model, and about the parameters of the proposed regression model. The second part of the article deals with identification of influential and extreme values that can have an impact on estimation of regression model parameters. The third part focuses on assumptions of the classical regression model, i.e. on verification of independence assumptions, normality and homoscedasticity of residuals. 11. Regression of non-linear coupling of noise in LIGO detectors Science.gov (United States) Da Silva Costa, C. F.; Billman, C.; Effler, A.; Klimenko, S.; Cheng, H.-P. 2018-03-01 In 2015, after their upgrade, the advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors started acquiring data. The effort to improve their sensitivity has never stopped since then. The goal to achieve design sensitivity is challenging. Environmental and instrumental noise couple to the detector output with different, linear and non-linear, coupling mechanisms. The noise regression method we use is based on the Wiener–Kolmogorov filter, which uses witness channels to make noise predictions. We present here how this method helped to determine complex non-linear noise couplings in the output mode cleaner and in the mirror suspension system of the LIGO detector. 12. Predicting multi-level drug response with gene expression profile in multiple myeloma using hierarchical ordinal regression. 
Science.gov (United States) Zhang, Xinyan; Li, Bingzong; Han, Huiying; Song, Sha; Xu, Hongxia; Hong, Yating; Yi, Nengjun; Zhuang, Wenzhuo 2018-05-10 Multiple myeloma (MM), like other cancers, is caused by the accumulation of genetic abnormalities. Heterogeneity exists in the patients' response to treatments, for example, bortezomib. This urges efforts to identify biomarkers from numerous molecular features and build predictive models for identifying patients that can benefit from a certain treatment scheme. However, previous studies treated the multi-level ordinal drug response as a binary response, where only responsive and non-responsive groups are considered. It is desirable to directly analyze the multi-level drug response rather than combining the response into two groups. In this study, we present a novel method to identify significantly associated biomarkers and then develop an ordinal genomic classifier using the hierarchical ordinal logistic model. The proposed hierarchical ordinal logistic model employs a heavy-tailed Cauchy prior on the coefficients and is fitted by an efficient quasi-Newton algorithm. We apply our hierarchical ordinal regression approach to analyze two publicly available datasets for MM with five-level drug response and numerous gene expression measures. Our results show that our method is able to identify genes associated with the multi-level drug response and to generate powerful predictive models for predicting the multi-level response. The proposed method allows us to jointly fit numerous correlated predictors and thus build efficient models for predicting the multi-level drug response. The predictive model for the multi-level drug response can be more informative than the previous approaches. Thus, the proposed approach provides a powerful tool for predicting multi-level drug response and has an important impact on cancer studies.

13. A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants Science.gov (United States) Cooper, Paul D. 2010-01-01 A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…

14. Analysis of interactive fixed effects dynamic linear panel regression with measurement error OpenAIRE Nayoung Lee; Hyungsik Roger Moon; Martin Weidner 2011-01-01 This paper studies a simple dynamic panel linear regression model with interactive fixed effects in which the variable of interest is measured with error. To estimate the dynamic coefficient, we consider the least-squares minimum distance (LS-MD) estimation method.

15. An Introduction to Graphical and Mathematical Methods for Detecting Heteroscedasticity in Linear Regression. Science.gov (United States) Thompson, Russel L. Homoscedasticity is an important assumption of linear regression. This paper explains what it is and why it is important to the researcher. Graphical and mathematical methods for testing the homoscedasticity assumption are demonstrated. Sources of heteroscedasticity and types of heteroscedasticity are discussed, and methods for correction are…
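As an illustrative aside to the record above: a graphical residual check is usually paired with a formal test. The sketch below uses the Breusch-Pagan test, one common mathematical method for detecting heteroscedasticity (the record does not specify which tests it demonstrates); the data are synthetic, with error spread growing in x so the test should reject.

```python
# Minimal sketch: detecting heteroscedasticity in a linear regression.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(11)
x = rng.uniform(1, 10, 200)
y = 3.0 + 2.0 * x + rng.normal(scale=0.5 * x)   # error variance grows with x

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()

lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, X)
print(f"Breusch-Pagan p-value: {lm_pvalue:.4f}")  # small => heteroscedastic
# Graphical method: plot fit.resid against fit.fittedvalues and look for a fan.
```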
16. INTRODUCTION TO A COMBINED MULTIPLE LINEAR REGRESSION AND ARMA MODELING APPROACH FOR BEACH BACTERIA PREDICTION Science.gov (United States) Due to the complexity of the processes contributing to beach bacteria concentrations, many researchers rely on statistical modeling, among which multiple linear regression (MLR) modeling is most widely used. Despite its ease of use and interpretation, there may be time dependence...

17. Application of range-test in multiple linear regression analysis in ... African Journals Online (AJOL) Application of the range-test in multiple linear regression analysis in the presence of outliers is studied in this paper. First, the plot of the explanatory variables (i.e. Administration, Social/Commercial, Economic services and Transfer) against the dependent variable (i.e. GDP) was done to identify the statistical trend over the years.

18. [Prediction model of health workforce and beds in county hospitals of Hunan by multiple linear regression]. Science.gov (United States) Ling, Ru; Liu, Jiawang 2011-12-01 To construct a prediction model for health workforce and hospital beds in county hospitals of Hunan by multiple linear regression. We surveyed 16 counties in Hunan with stratified random sampling according to uniform questionnaires, and multiple linear regression analysis with 20 quotas selected by literature review was done. Independent variables in the multiple linear regression model on medical personnel in county hospitals included the counties' urban residents' income, crude death rate, medical beds, business occupancy, professional equipment value, the number of devices valued above 10 000 yuan, fixed assets, long-term debt, medical income, medical expenses, outpatient and emergency visits, hospital visits, actual available bed days, and utilization rate of hospital beds. Independent variables in the multiple linear regression model on county hospital beds included the population aged 65 and above in the counties, disposable income of urban residents, medical personnel of medical institutions in the county area, business occupancy, the total value of professional equipment, fixed assets, long-term debt, medical income, medical expenses, outpatient and emergency visits, hospital visits, actual available bed days, utilization rate of hospital beds, and length of hospitalization. The prediction model shows good explanatory power and fit, and may be used for short- and mid-term forecasting.

19. Calculation of U, Ra, Th and K contents in uranium ore by multiple linear regression method International Nuclear Information System (INIS) Lin Chao; Chen Yingqiang; Zhang Qingwen; Tan Fuwen; Peng Guanghui 1991-01-01 A multiple linear regression method was used to compute γ spectra of uranium ore samples and to calculate the contents of U, Ra, Th, and K. In comparison with the inverse matrix method, its advantage is that no standard samples of pure U, Ra, Th and K are needed for obtaining response coefficients

20. Comparing Regression Coefficients between Nested Linear Models for Clustered Data with Generalized Estimating Equations Science.gov (United States) Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer 2013-01-01 Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…
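As a hedged aside to the record above: the sketch below sets up that comparison concretely with GEE under an exchangeable working correlation, fitting a reduced and a full nested model on synthetic clustered data and inspecting how a shared coefficient moves when a correlated covariate is added.

```python
# Minimal sketch: nested linear models for clustered data via GEE.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_clusters, per = 40, 10
cluster = np.repeat(np.arange(n_clusters), per)
u = np.repeat(rng.normal(size=n_clusters), per)      # shared cluster effect
x1 = rng.normal(size=n_clusters * per)
x2 = 0.5 * x1 + rng.normal(size=n_clusters * per)    # correlated covariate
y = 1.0 + 0.8 * x1 + 0.6 * x2 + u + rng.normal(size=n_clusters * per)
df = pd.DataFrame({"y": y, "x1": x1, "x2": x2, "cluster": cluster})

reduced = smf.gee("y ~ x1", "cluster", df,
                  cov_struct=sm.cov_struct.Exchangeable()).fit()
full = smf.gee("y ~ x1 + x2", "cluster", df,
               cov_struct=sm.cov_struct.Exchangeable()).fit()
print("x1 coefficient, reduced model:", round(reduced.params["x1"], 3))
print("x1 coefficient, full model   :", round(full.params["x1"], 3))
```

Whether such a change is statistically significant is exactly the question the record's method addresses; the naive comparison above ignores the dependence between the two estimates.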
1. Bayesian linear regression: different conjugate models and their (in)sensitivity to prior-data conflict NARCIS (Netherlands) Walter, G.M.; Augustin, Th.; Kneib, Thomas; Tutz, Gerhard 2010-01-01 The paper is concerned with Bayesian analysis under prior-data conflict, i.e. the situation when observed data are rather unexpected under the prior (and the sample size is not large enough to eliminate the influence of the prior). Two approaches for Bayesian linear regression modeling based on

2. A unified framework for testing in the linear regression model under unknown order of fractional integration DEFF Research Database (Denmark) Christensen, Bent Jesper; Kruse, Robinson; Sibbertsen, Philipp We consider hypothesis testing in a general linear time series regression framework when the possibly fractional order of integration of the error term is unknown. We show that the approach suggested by Vogelsang (1998a) for the case of integer integration does not apply to the case of fractional...

3. Alpins and Thibos vectorial astigmatism analyses: proposal of a linear regression model between methods Directory of Open Access Journals (Sweden) Giuliano de Oliveira Freitas 2013-10-01 Full Text Available PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly allocated to two phacoemulsification groups: one assigned to receive an AcrySof® Toric intraocular lens (IOL) in both eyes and another assigned to have an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between the Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected and percentage of astigmatism reduction at the intended axis were assessed (see the sketch after the next record). RESULTS: A significant negative correlation between the ratio of post- and preoperative Thibos APV (APVratio) and the Alpins percentage of success (%Success) was found (Spearman's ρ = -0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) × 100. CONCLUSION: The linear regression we found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.

4. Power properties of invariant tests for spatial autocorrelation in linear regression NARCIS (Netherlands) Martellosio, F. 2006-01-01 Many popular tests for residual spatial autocorrelation in the context of the linear regression model belong to the class of invariant tests. This paper derives a number of exact properties of the power function of such tests. In particular, we extend the work of Krämer (2005, Journal of Statistical
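As the forward reference in record 3 above indicated, here is an illustrative sketch of the reported Alpins-Thibos relation. The data are simulated, not clinical; by construction they satisfy %Success = (-APVratio + 1.00) × 100 exactly, so the Spearman correlation comes out as -1 rather than the study's -0.93.

```python
# Minimal sketch: linear relation between the Thibos APV ratio and the
# Alpins percentage of success, on simulated astigmatism data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(9)
apv_pre = rng.uniform(0.75, 2.50, 62)            # preoperative APV (diopters)
apv_post = apv_pre * rng.uniform(0.0, 0.6, 62)   # postoperative residual APV

apv_ratio = apv_post / apv_pre
success = (1.0 - apv_ratio) * 100.0              # %Success definition

rho, _ = spearmanr(apv_ratio, success)
slope, intercept = np.polyfit(apv_ratio, success, 1)
print(f"Spearman rho: {rho:.2f}")
print(f"fit: %Success = {slope:.0f} * APVratio + {intercept:.0f}")
```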
5. Estimate the contribution of incubation parameters influence egg hatchability using multiple linear regression analysis. Science.gov (United States) Khalil, Mohamed H; Shebl, Mostafa K; Kosba, Mohamed A; El-Sabrout, Karim; Zaki, Nesma 2016-08-01 This research was conducted to determine the parameters most affecting the hatchability of indigenous and improved local chickens' eggs. Five parameters were studied (fertility, early and late embryonic mortalities, shape index, egg weight, and egg weight loss) on four strains, namely Fayoumi, Alexandria, Matrouh, and Montazah. Multiple linear regression was performed on the studied parameters to determine which had the greatest influence on hatchability. The results showed significant differences in commercial and scientific hatchability among strains. The Alexandria strain had the highest significant commercial hatchability (80.70%). Regarding the studied strains, highly significant differences in hatching chick weight among strains were observed. Using multiple linear regression analysis, fertility made the greatest percent contribution (71.31%) to hatchability, and the lowest percent contributions were made by shape index and egg weight loss. A prediction of hatchability using multiple regression analysis could be a good tool to improve the hatchability percentage in chickens.

6. truncSP: An R Package for Estimation of Semi-Parametric Truncated Linear Regression Models Directory of Open Access Journals (Sweden) Maria Karlsson 2014-05-01 Full Text Available Problems with truncated data occur in many areas, complicating estimation and inference. Regarding linear regression models, the ordinary least squares estimator is inconsistent and biased for these types of data and is therefore unsuitable for use. Alternative estimators, designed for the estimation of truncated regression models, have been developed. This paper presents the R package truncSP. The package contains functions for the estimation of semi-parametric truncated linear regression models using three different estimators: the symmetrically trimmed least squares, quadratic mode, and left truncated estimators, all of which have been shown to have good asymptotic and finite sample properties. The package also provides functions for the analysis of the estimated models. Data from the environmental sciences are used to illustrate the functions in the package.

7. Genomic prediction based on data from three layer lines using non-linear regression models. Science.gov (United States) Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L 2014-11-06 Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy.
When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional 8. Single image super-resolution using locally adaptive multiple linear regression. Science.gov (United States) Yu, Soohwan; Kang, Wonseok; Ko, Seungyong; Paik, Joonki 2015-12-01 This paper presents a regularized superresolution (SR) reconstruction method using locally adaptive multiple linear regression to overcome the limitation of spatial resolution of digital images. In order to make the SR problem better-posed, the proposed method incorporates the locally adaptive multiple linear regression into the regularization process as a local prior. The local regularization prior assumes that the target high-resolution (HR) pixel is generated by a linear combination of similar pixels in differently scaled patches and optimum weight parameters. In addition, we adapt a modified version of the nonlocal means filter as a smoothness prior to utilize the patch redundancy. Experimental results show that the proposed algorithm better restores HR images than existing state-of-the-art methods in the sense of the most objective measures in the literature. 9. Predicting recovery of cognitive function soon after stroke: differential modeling of logarithmic and linear regression. Science.gov (United States) Suzuki, Makoto; Sugimura, Yuko; Yamada, Sumio; Omori, Yoshitsugu; Miyamoto, Masaaki; Yamamoto, Jun-ichi 2013-01-01 Cognitive disorders in the acute stage of stroke are common and are important independent predictors of adverse outcome in the long term. Despite the impact of cognitive disorders on both patients and their families, it is still difficult to predict the extent or duration of cognitive impairments. The objective of the present study was, therefore, to provide data on predicting the recovery of cognitive function soon after stroke by differential modeling with logarithmic and linear regression. This study included two rounds of data collection comprising 57 stroke patients enrolled in the first round for the purpose of identifying the time course of cognitive recovery in the early-phase group data, and 43 stroke patients in the second round for the purpose of ensuring that the correlation of the early-phase group data applied to the prediction of each individual's degree of cognitive recovery. In the first round, Mini-Mental State Examination (MMSE) scores were assessed 3 times during hospitalization, and the scores were regressed on the logarithm and linear of time. In the second round, calculations of MMSE scores were made for the first two scoring times after admission to tailor the structures of logarithmic and linear regression formulae to fit an individual's degree of functional recovery. The time course of early-phase recovery for cognitive functions resembled both logarithmic and linear functions. 
However, MMSE scores sampled at two baseline points under logarithmic regression modeling could predict cognitive recovery more accurately than could linear regression modeling (logarithmic modeling, R² = 0.676). Logarithmic modeling based on MMSE scores could accurately predict the recovery of cognitive function soon after the occurrence of stroke. This logarithmic modeling with mathematical procedures is simple enough to be adopted in daily clinical practice.

10. Linear regression metamodeling as a tool to summarize and present simulation model results. Science.gov (United States) Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M 2013-10-01 Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase the transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.

11. A Cross-Domain Collaborative Filtering Algorithm Based on Feature Construction and Locally Weighted Linear Regression. Science.gov (United States) Yu, Xu; Lin, Jun-Yu; Jiang, Feng; Du, Jun-Wei; Han, Ji-Zhong 2018-01-01 Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot evaluate effectively the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be converted as the weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains together and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid underfitting or overfitting problem occurring in parametric regression methods.
We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring the useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods. 12. A Cross-Domain Collaborative Filtering Algorithm Based on Feature Construction and Locally Weighted Linear Regression Directory of Open Access Journals (Sweden) Xu Yu 2018-01-01 Full Text Available Cross-domain collaborative filtering (CDCF solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot evaluate effectively the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR. We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be converted as the weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains together and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid underfitting or overfitting problem occurring in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring the useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods. 13. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models Science.gov (United States) Drzewiecki, Wojciech 2016-12-01 In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas both for accuracy of imperviousness coverage evaluation in individual points in time and accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results proved that in case of sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, based on obtained results Cubist algorithm may be advised for Landsat based mapping of imperviousness for single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal. It gave lower accuracies for individual assessments, but better prediction of change due to more correlated errors of individual predictions. 
Heterogeneous model ensembles performed at least as well as the best individual models for assessments at individual time points. In the case of imperviousness change assessment, the ensembles always outperformed single-model approaches. This means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.

14. Analysis of dental caries using generalized linear and count regression models Directory of Open Access Journals (Sweden) Javali M. Phil 2013-11-01 Full Text Available Generalized linear models (GLM) are a generalization of linear regression models that allow fitting regression models to response data, in all the sciences and especially the medical and dental sciences, that follow a general exponential family. They are a flexible and widely used class of models that can accommodate various response variables. Count data are frequently characterized by overdispersion and excess zeros. Zero-inflated count models provide a parsimonious yet powerful way to model this type of situation. Such models assume that the data are a mixture of two separate data generation processes: one generates only zeros, and the other is either a Poisson or a negative binomial data-generating process. Zero-inflated count regression models such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) regression models have been used to handle dental caries count data with many zeros. We present an evaluation framework for the suitability of applying the GLM, Poisson, NB, ZIP and ZINB models to a dental caries data set where the count data may exhibit evidence of many zeros and over-dispersion. Estimation of the model parameters using the method of maximum likelihood is provided. Based on the Vuong test statistic and the goodness-of-fit measure for the dental caries data, the NB and ZINB regression models perform better than the other count regression models.

15. A Hierarchical Bayesian Setting for an Inverse Problem in Linear Parabolic PDEs with Noisy Boundary Conditions KAUST Repository Ruggeri, Fabrizio 2016-05-12 In this work we develop a Bayesian setting to infer unknown parameters in initial-boundary value problems related to linear parabolic partial differential equations. We realistically assume that the boundary data are noisy, for a given prescribed initial condition. We show how to derive the joint likelihood function for the forward problem, given some measurements of the solution field subject to Gaussian noise. Given Gaussian priors for the time-dependent Dirichlet boundary values, we analytically marginalize the joint likelihood using the linearity of the equation. Our hierarchical Bayesian approach is fully implemented in an example that involves the heat equation. In this example, the thermal diffusivity is the unknown parameter. We assume that the thermal diffusivity parameter can be modeled a priori through a lognormal random variable or by means of a space-dependent stationary lognormal random field. Synthetic data are used to test the inference. We exploit the behavior of the non-normalized log posterior distribution of the thermal diffusivity. Then, we use the Laplace method to obtain an approximated Gaussian posterior and therefore avoid costly Markov Chain Monte Carlo computations. Expected information gains and predictive posterior densities for observable quantities are numerically estimated using Laplace approximation for different experimental setups.
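As a hedged aside to the dental caries record above: the model comparison it describes can be sketched with standard libraries. The code below fits Poisson, negative binomial, and zero-inflated Poisson models to a synthetic zero-heavy count outcome and compares AIC; the data-generating choices are assumptions for the demonstration, and the record's own comparison also used the Vuong test, which is not computed here.

```python
# Minimal sketch: count regression models for zero-inflated data.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(13)
n = 500
x = rng.normal(size=n)
X = sm.add_constant(x)
counts = rng.poisson(np.exp(0.5 + 0.4 * x))
counts[rng.uniform(size=n) < 0.3] = 0            # inject excess zeros

poisson = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
negbin = sm.GLM(counts, X, family=sm.families.NegativeBinomial()).fit()
zip_fit = ZeroInflatedPoisson(counts, X).fit(disp=0)

print("AIC  Poisson:", round(poisson.aic, 1))
print("AIC  NegBin :", round(negbin.aic, 1))
print("AIC  ZIP    :", round(zip_fit.aic, 1))    # usually lowest here
```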
Evaluation of linear regression techniques for atmospheric applications: the importance of appropriate weighting Directory of Open Access Journals (Sweden) C. Wu 2018-03-01 Full Text Available Linear regression techniques are widely used in atmospheric science, but they are often improperly applied due to lack of consideration or inappropriate handling of measurement uncertainty. In this work, numerical experiments are performed to evaluate the performance of five linear regression techniques, significantly extending previous works by Chu and Saylor. The five techniques are ordinary least squares (OLS), Deming regression (DR), orthogonal distance regression (ODR), weighted ODR (WODR), and York regression (YR). We first introduce a new data generation scheme that employs the Mersenne twister (MT) pseudorandom number generator. The numerical simulations are also improved by (a) refining the parameterization of nonlinear measurement uncertainties, (b) inclusion of a linear measurement uncertainty, and (c) inclusion of WODR for comparison. Results show that DR, WODR and YR produce an accurate slope, but the intercept by WODR and YR is overestimated and the degree of bias is more pronounced with a low R2 XY dataset. The importance of a proper weighting parameter λ in DR is investigated by sensitivity tests, and it is found that an improper λ in DR can lead to a bias in both the slope and intercept estimation. Because the λ calculation depends on the actual form of the measurement error, it is essential to determine the exact form of measurement error in the XY data during the measurement stage. If the a priori error in one of the variables is unknown, or the measurement error described cannot be trusted, DR, WODR and YR can provide the least biases in slope and intercept among all tested regression techniques. For these reasons, DR, WODR and YR are recommended for atmospheric studies when both X and Y data have measurement errors. An Igor Pro-based program (Scatter Plot) was developed to facilitate the implementation of error-in-variables regressions. 17. Evaluation of linear regression techniques for atmospheric applications: the importance of appropriate weighting Science.gov (United States) Wu, Cheng; Zhen Yu, Jian 2018-03-01 Linear regression techniques are widely used in atmospheric science, but they are often improperly applied due to lack of consideration or inappropriate handling of measurement uncertainty. In this work, numerical experiments are performed to evaluate the performance of five linear regression techniques, significantly extending previous works by Chu and Saylor. The five techniques are ordinary least squares (OLS), Deming regression (DR), orthogonal distance regression (ODR), weighted ODR (WODR), and York regression (YR). We first introduce a new data generation scheme that employs the Mersenne twister (MT) pseudorandom number generator. The numerical simulations are also improved by (a) refining the parameterization of nonlinear measurement uncertainties, (b) inclusion of a linear measurement uncertainty, and (c) inclusion of WODR for comparison. Results show that DR, WODR and YR produce an accurate slope, but the intercept by WODR and YR is overestimated and the degree of bias is more pronounced with a low R2 XY dataset. The importance of a proper weighting parameter λ in DR is investigated by sensitivity tests, and it is found that an improper λ in DR can lead to a bias in both the slope and intercept estimation.
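As a minimal illustration of the Deming regression discussed in these two records, here is a numpy sketch of the standard closed-form estimator with error-variance ratio λ; the data are synthetic, and this is not the authors' Igor Pro program.

import numpy as np

def deming(x, y, lam=1.0):
    # lam = var(errors in y) / var(errors in x); lam = 1 gives orthogonal regression
    mx, my = x.mean(), y.mean()
    sxx = np.sum((x - mx) ** 2)
    syy = np.sum((y - my) ** 2)
    sxy = np.sum((x - mx) * (y - my))
    slope = (syy - lam * sxx
             + np.sqrt((syy - lam * sxx) ** 2 + 4.0 * lam * sxy ** 2)) / (2.0 * sxy)
    intercept = my - slope * mx
    return slope, intercept

rng = np.random.default_rng(0)
x_true = rng.uniform(0, 10, 200)
x = x_true + rng.normal(0, 0.5, 200)               # measurement error in X
y = 2.0 * x_true + 1.0 + rng.normal(0, 0.5, 200)   # and in Y
print(deming(x, y, lam=1.0))  # close to (2.0, 1.0); OLS would flatten the slope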
Because the λ calculation depends on the actual form of the measurement error, it is essential to determine the exact form of measurement error in the XY data during the measurement stage. If a priori error in one of the variables is unknown, or the measurement error described cannot be trusted, DR, WODR and YR can provide the least biases in slope and intercept among all tested regression techniques. For these reasons, DR, WODR and YR are recommended for atmospheric studies when both X and Y data have measurement errors. An Igor Pro-based program (Scatter Plot) was developed to facilitate the implementation of error-in-variables regressions. 18. Linear Multivariable Regression Models for Prediction of Eddy Dissipation Rate from Available Meteorological Data Science.gov (United States) MCKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R. 2005-01-01 Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables; 30-minute forward averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield best performance and avoid model discontinuity over day/night data boundaries. 19. A method for fitting regression splines with varying polynomial order in the linear mixed model. Science.gov (United States) Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W 2006-02-15 The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles. 20. Linear regression analysis: part 14 of a series on evaluation of scientific publications. 
Science.gov (United States) Schneider, Astrid; Hommel, Gerhard; Blettner, Maria 2010-11-01 Regression analysis is an important statistical method for the analysis of medical data. It enables the identification and characterization of relationships among multiple factors. It also enables the identification of prognostically relevant risk factors and the calculation of risk scores for individual prognostication. This article is based on selected textbooks of statistics, a selective review of the literature, and our own experience. After a brief introduction of the uni- and multivariable regression models, illustrative examples are given to explain what the important considerations are before a regression analysis is performed, and how the results should be interpreted. The reader should then be able to judge whether the method has been used correctly and interpret the results appropriately. The performance and interpretation of linear regression analysis are subject to a variety of pitfalls, which are discussed here in detail. The reader is made aware of common errors of interpretation through practical examples. Both the opportunities for applying linear regression analysis and its limitations are presented. 1. Exploring the Effects of Congruence and Holland's Personality Codes on Job Satisfaction: An Application of Hierarchical Linear Modeling Techniques Science.gov (United States) Ishitani, Terry T. 2010-01-01 This study applied hierarchical linear modeling to investigate the effect of congruence on intrinsic and extrinsic aspects of job satisfaction. Particular focus was given to differences in job satisfaction by gender and by Holland's first-letter codes. The study sample included nationally represented 1462 female and 1280 male college graduates who… 2. Linear Regression on Sparse Features for Single-Channel Speech Separation DEFF Research Database (Denmark) Schmidt, Mikkel N.; Olsson, Rasmus Kongsgaard 2007-01-01 In this work we address the problem of separating multiple speakers from a single microphone recording. We formulate a linear regression model for estimating each speaker based on features derived from the mixture. The employed feature representation is a sparse, non-negative encoding of the speech mixture in terms of pre-learned speaker-dependent dictionaries. Previous work has shown that this feature representation by itself provides some degree of separation. We show that the performance is significantly improved when regression analysis is performed on the sparse, non-negative features, both... 3. Linear regression based on Minimum Covariance Determinant (MCD) and TELBS methods on the productivity of phytoplankton Science.gov (United States) Gusriani, N.; Firdaniza 2018-03-01 The existence of outliers in multiple linear regression analysis causes the Gaussian assumption to be unfulfilled. If the least squares method is nonetheless applied to such data, it will produce a model that cannot represent most of the data. Hence, a regression method that is robust against outliers is needed. This paper compares the Minimum Covariance Determinant (MCD) method and the TELBS method on secondary data on the productivity of phytoplankton, which contains outliers. Based on the robust coefficient of determination, the MCD method produces a better model than the TELBS method. 4. Motivation, Classroom Environment, and Learning in Introductory Geology: A Hierarchical Linear Model Science.gov (United States) Gilbert, L. A.; Hilpert, J. C.; Van Der Hoeven Kraft, K.; Budd, D.; Jones, M.
H.; Matheney, R.; Mcconnell, D. A.; Perkins, D.; Stempien, J. A.; Wirth, K. R. 2013-12-01 Prior research has indicated that highly motivated students perform better and that learning increases in innovative, reformed classrooms, but untangling the student effects from the instructor effects is essential to understanding how to best support student learning. Using a hierarchical linear model, we examine these effects separately and jointly. We use data from nearly 2,000 undergraduate students surveyed by the NSF-funded GARNET (Geoscience Affective Research NETwork) project in 65 different introductory geology classes at research universities, public masters-granting universities, liberal arts colleges and community colleges across the US. Student level effects were measured as increases in expectancy and self-regulation using the Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich et al., 1991). Instructor level effects were measured using the Reformed Teaching Observation Protocol, (RTOP; Sawada et al., 2000), with higher RTOP scores indicating a more reformed, student-centered classroom environment. Learning was measured by learning gains on a Geology Concept Inventory (GCI; Libarkin and Anderson, 2005) and normalized final course grade. The hierarchical linear model yielded significant results at several levels. At the student level, increases in expectancy and self-regulation are significantly and positively related to higher grades regardless of instructor; the higher the increase, the higher the grade. At the instructor level, RTOP scores are positively related to normalized average GCI learning gains. The higher the RTOP score, the higher the average class GCI learning gains. Across both levels, average class GCI learning gains are significantly and positively related to student grades; the higher the GCI learning gain, the higher the grade. Further, the RTOP scores are significantly and negatively related to the relationship between expectancy and course grade. The lower the RTOP score, the higher the correlation between change in 5. Prediction of Mind-Wandering with Electroencephalogram and Non-linear Regression Modeling. Science.gov (United States) Kawashima, Issaku; Kumano, Hiroaki 2017-01-01 Mind-wandering (MW), task-unrelated thought, has been examined by researchers in an increasing number of articles using models to predict whether subjects are in MW, using numerous physiological variables. However, these models are not applicable in general situations. Moreover, they output only binary classification. The current study suggests that the combination of electroencephalogram (EEG) variables and non-linear regression modeling can be a good indicator of MW intensity. We recorded EEGs of 50 subjects during the performance of a Sustained Attention to Response Task, including a thought sampling probe that inquired the focus of attention. We calculated the power and coherence value and prepared 35 patterns of variable combinations and applied Support Vector machine Regression (SVR) to them. Finally, we chose four SVR models: two of them non-linear models and the others linear models; two of the four models are composed of a limited number of electrodes to satisfy model usefulness. Examination using the held-out data indicated that all models had robust predictive precision and provided significantly better estimations than a linear regression model using single electrode EEG variables. 
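As a minimal illustration of the linear versus non-linear support vector regression comparison this record describes, here is a sketch assuming scikit-learn; the features and targets are synthetic stand-ins for the EEG variables.

import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
# Target depends non-linearly on the inputs, plus noise
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 300)

for kernel in ("linear", "rbf"):
    model = make_pipeline(StandardScaler(), SVR(kernel=kernel, C=1.0))
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(kernel, round(score, 3))  # the RBF kernel should fit the non-linear target better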
Furthermore, in the limited-electrode condition, the non-linear SVR model showed significantly better precision than the linear SVR model. The method proposed in this study helps investigations into MW in various little-examined situations. Further, by measuring MW with a high temporal resolution EEG, unclear aspects of MW, such as time series variation, are expected to be revealed. Furthermore, our suggestion that a few electrodes can also predict MW contributes to the development of neuro-feedback studies. 6. Prediction of Mind-Wandering with Electroencephalogram and Non-linear Regression Modeling Directory of Open Access Journals (Sweden) Issaku Kawashima 2017-07-01 Full Text Available Mind-wandering (MW), task-unrelated thought, has been examined by researchers in an increasing number of articles using models to predict whether subjects are in MW, using numerous physiological variables. However, these models are not applicable in general situations. Moreover, they output only binary classification. The current study suggests that the combination of electroencephalogram (EEG) variables and non-linear regression modeling can be a good indicator of MW intensity. We recorded EEGs of 50 subjects during the performance of a Sustained Attention to Response Task, including a thought sampling probe that inquired about the focus of attention. We calculated the power and coherence values and prepared 35 patterns of variable combinations and applied Support Vector machine Regression (SVR) to them. Finally, we chose four SVR models: two of them non-linear models and the others linear models; two of the four models are composed of a limited number of electrodes to satisfy model usefulness. Examination using the held-out data indicated that all models had robust predictive precision and provided significantly better estimations than a linear regression model using single electrode EEG variables. Furthermore, in the limited-electrode condition, the non-linear SVR model showed significantly better precision than the linear SVR model. The method proposed in this study helps investigations into MW in various little-examined situations. Further, by measuring MW with a high temporal resolution EEG, unclear aspects of MW, such as time series variation, are expected to be revealed. Furthermore, our suggestion that a few electrodes can also predict MW contributes to the development of neuro-feedback studies. 7. Error analysis of dimensionless scaling experiments with multiple points using linear regression International Nuclear Information System (INIS) Guercan, Oe.D.; Vermare, L.; Hennequin, P.; Bourdelle, C. 2010-01-01 A general method of error estimation in the case of multiple point dimensionless scaling experiments, using linear regression and standard error propagation, is proposed. The method reduces to the previous result of Cordey (2009 Nucl. Fusion 49 052001) in the case of a two-point scan. On the other hand, if the points follow a linear trend, it explains how the estimated error decreases as more points are added to the scan. Based on the analytical expression that is derived, it is argued that for a low number of points, adding points to the ends of the scanned range, rather than the middle, results in a smaller error estimate. (letter) 8.
Dynamic Optimization for IPS2 Resource Allocation Based on Improved Fuzzy Multiple Linear Regression Directory of Open Access Journals (Sweden) Maokuan Zheng 2017-01-01 Full Text Available The study mainly focuses on resource allocation optimization for industrial product-service systems (IPS2). The development of IPS2 leads to a sustainable economy by introducing cooperative mechanisms apart from commodity transactions. The randomness and fluctuation of service requests from customers lead to volatility in the IPS2 resource utilization ratio. Three basic rules for resource allocation optimization are put forward to improve system operation efficiency and cut unnecessary costs. An approach based on fuzzy multiple linear regression (FMLR) is developed, which integrates the strength and concision of multiple linear regression in data fitting and factor analysis with the merit of fuzzy theory in dealing with uncertain or vague problems, and which helps reduce those costs caused by unnecessary resource transfer. An iteration mechanism is introduced in the FMLR algorithm to improve forecasting accuracy. A case study of human resource allocation optimization in the construction machinery industry is implemented to test and verify the proposed model. 9. COLOR IMAGE RETRIEVAL BASED ON FEATURE FUSION THROUGH MULTIPLE LINEAR REGRESSION ANALYSIS Directory of Open Access Journals (Sweden) K. Seetharaman 2015-08-01 Full Text Available This paper proposes a novel technique based on feature fusion using multiple linear regression analysis, and the least-squares estimation method is employed to estimate the parameters. The given input query image is segmented into various regions according to the structure of the image. The color and texture features are extracted from each region of the query image, and the features are fused together using the multiple linear regression model. The estimated parameters of the model, which is built from the features, form a vector called a feature vector. The Canberra distance measure is adopted to compare the feature vectors of the query and target images. The F-measure is applied to evaluate the performance of the proposed technique. The obtained results show that the proposed technique is comparable to the other existing techniques. 10. BFLCRM: A BAYESIAN FUNCTIONAL LINEAR COX REGRESSION MODEL FOR PREDICTING TIME TO CONVERSION TO ALZHEIMER'S DISEASE. Science.gov (United States) Lee, Eunjee; Zhu, Hongtu; Kong, Dehan; Wang, Yalin; Giovanello, Kelly Sullivan; Ibrahim, Joseph G 2015-12-01 The aim of this paper is to develop a Bayesian functional linear Cox regression model (BFLCRM) with both functional and scalar covariates. This new development is motivated by establishing the likelihood of conversion to Alzheimer's disease (AD) in 346 patients with mild cognitive impairment (MCI) enrolled in the Alzheimer's Disease Neuroimaging Initiative 1 (ADNI-1) and the early markers of conversion. These 346 MCI patients were followed over 48 months, with 161 MCI participants progressing to AD at 48 months. The functional linear Cox regression model was used to establish that functional covariates including hippocampus surface morphology and scalar covariates including brain MRI volumes, cognitive performance (ADAS-Cog), and APOE status can accurately predict time to onset of AD. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. A simulation study is performed to evaluate the finite sample performance of BFLCRM. 11.
Inverse estimation of multiple muscle activations based on linear logistic regression. Science.gov (United States) Sekiya, Masashi; Tsuji, Toshiaki 2017-07-01 This study deals with a technology to estimate the muscle activity from the movement data using a statistical model. A linear regression (LR) model and artificial neural networks (ANN) have been known as statistical models for such use. Although ANN has a high estimation capability, it is often in the clinical application that the lack of data amount leads to performance deterioration. On the other hand, the LR model has a limitation in generalization performance. We therefore propose a muscle activity estimation method to improve the generalization performance through the use of linear logistic regression model. The proposed method was compared with the LR model and ANN in the verification experiment with 7 participants. As a result, the proposed method showed better generalization performance than the conventional methods in various tasks. 12. User's Guide to the Weighted-Multiple-Linear Regression Program (WREG version 1.0) Science.gov (United States) Eng, Ken; Chen, Yin-Yu; Kiang, Julie.E. 2009-01-01 Streamflow is not measured at every location in a stream network. Yet hydrologists, State and local agencies, and the general public still seek to know streamflow characteristics, such as mean annual flow or flood flows with different exceedance probabilities, at ungaged basins. The goals of this guide are to introduce and familiarize the user with the weighted multiple-linear regression (WREG) program, and to also provide the theoretical background for program features. The program is intended to be used to develop a regional estimation equation for streamflow characteristics that can be applied at an ungaged basin, or to improve the corresponding estimate at continuous-record streamflow gages with short records. The regional estimation equation results from a multiple-linear regression that relates the observable basin characteristics, such as drainage area, to streamflow characteristics. 13. Alzheimer's Disease Detection by Pseudo Zernike Moment and Linear Regression Classification. Science.gov (United States) Wang, Shui-Hua; Du, Sidan; Zhang, Yin; Phillips, Preetha; Wu, Le-Nan; Chen, Xian-Qing; Zhang, Yu-Dong 2017-01-01 This study presents an improved method based on "Gorji et al. Neuroscience. 2015" by introducing a relatively new classifier-linear regression classification. Our method selects one axial slice from 3D brain image, and employed pseudo Zernike moment with maximum order of 15 to extract 256 features from each image. Finally, linear regression classification was harnessed as the classifier. The proposed approach obtains an accuracy of 97.51%, a sensitivity of 96.71%, and a specificity of 97.73%. Our method performs better than Gorji's approach and five other state-of-the-art approaches. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org. 14. On the Relationship Between Confidence Sets and Exchangeable Weights in Multiple Linear Regression. Science.gov (United States) Pek, Jolynn; Chalmers, R Philip; Monette, Georges 2016-01-01 When statistical models are employed to provide a parsimonious description of empirical relationships, the extent to which strong conclusions can be drawn rests on quantifying the uncertainty in parameter estimates. 
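The entry above notes that its Online Appendix provides R code; as a language-neutral illustration of Wald-type confidence intervals for regression weights, here is a minimal Python sketch assuming statsmodels, with synthetic data standing in for the empirical examples.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = X @ np.array([0.5, -0.3, 0.0]) + rng.normal(0, 1.0, 200)

# Ordinary least squares with an intercept term
fit = sm.OLS(y, sm.add_constant(X)).fit()
print(fit.params)                 # point estimates of the regression weights
print(fit.conf_int(alpha=0.05))   # 95% Wald-type confidence intervals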
In multiple linear regression (MLR), regression weights carry two kinds of uncertainty represented by confidence sets (CSs) and exchangeable weights (EWs). Confidence sets quantify uncertainty in estimation whereas the set of EWs quantifies uncertainty in the substantive interpretation of regression weights. As CSs and EWs share certain commonalities, we clarify the relationship between these two kinds of uncertainty about regression weights. We introduce a general framework describing how CSs and the set of EWs for regression weights are estimated from the likelihood-based and Wald-type approach, and establish the analytical relationship between CSs and sets of EWs. With empirical examples on posttraumatic growth of caregivers (Cadell et al., 2014; Schneider, Steele, Cadell & Hemsworth, 2011) and on graduate grade point average (Kuncel, Hezlett & Ones, 2001), we illustrate the usefulness of CSs and EWs for drawing strong scientific conclusions. We discuss the importance of considering both CSs and EWs as part of the scientific process, and provide an Online Appendix with R code for estimating Wald-type CSs and EWs for k regression weights. 15. MULTIPLE LINEAR REGRESSION ANALYSIS FOR PREDICTION OF BOILER LOSSES AND BOILER EFFICIENCY OpenAIRE Chayalakshmi C.L 2018-01-01 Calculation of boiler efficiency is essential if its parameters need to be controlled for either maintaining or enhancing its efficiency. But determination of boiler efficiency using the conventional method is time consuming and very expensive. Hence, it is not recommended to find boiler efficiency frequently. The work presented in this paper deals with establishing the statistical mo... 16. A Simple Linear Regression Method for Quantitative Trait Loci Linkage Analysis With Censored Observations OpenAIRE Anderson, Carl A.; McRae, Allan F.; Visscher, Peter M. 2006-01-01 Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using... 17. The detection of influential subsets in linear regression using an influence matrix OpenAIRE Peña, Daniel; Yohai, Víctor J. 1991 This paper presents a new method to identify influential subsets in linear regression problems. The procedure uses the eigenstructure of an influence matrix which is defined as the matrix of uncentered covariance of the effect on the whole data set of deleting each observation, normalized to include the univariate Cook's statistics in the diagonal. It is shown that points in an influential subset will appear with large weight in at least one of the eigenvectors linked to the largest eigenvalue... 18. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES Directory of Open Access Journals (Sweden) Constantin ANGHELACHE 2011-10-01 Full Text Available The article presents the fundamental aspects of linear regression, as a toolbox which can be used in macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, homoscedasticity and heteroskedasticity.
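As a concrete illustration of the estimation and diagnostic steps just listed, here is a minimal sketch of a simple linear regression with a heteroskedasticity check, assuming statsmodels; the series are synthetic stand-ins for macroeconomic data.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(3)
x = rng.uniform(1, 10, 150)
y = 3.0 + 0.8 * x + rng.normal(0, 0.2 * x)  # error variance grows with x

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, X)
print(fit.params)    # estimated intercept and slope
print(lm_pvalue)     # a small p-value flags heteroskedasticity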
The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and the possible interpretations that can be drawn at this level. 19. The regression-calibration method for fitting generalized linear models with additive measurement error OpenAIRE James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll 2003-01-01 This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s... 20. Teacher characteristics and student performance: An analysis using hierarchical linear modelling Directory of Open Access Journals (Sweden) Paula Armstrong 2015-12-01 Full Text Available This research makes use of hierarchical linear modelling to investigate which teacher characteristics are significantly associated with student performance. Using data from the SACMEQ III study of 2007, an interesting and potentially important finding is that younger teachers are better able to improve the mean mathematics performance of their students. Furthermore, younger teachers themselves perform better on subject tests than do their older counterparts. Identical models are run for sub-Saharan countries bordering South Africa, as well as for Kenya, and the strong relationship between teacher age and student performance is not observed. Similarly, the model is run for South Africa using data from SACMEQ II (conducted in 2002) and the relationship between teacher age and student performance is also not observed. It must be noted that South African teachers were not tested in SACMEQ II, so it was not possible to observe differences in subject knowledge amongst teachers in different cohorts and it was not possible to control for teachers' level of subject knowledge when observing the relationship between teacher age and student performance. Changes in teacher education in the late 1990s and early 2000s may explain the differences in the performance of younger teachers relative to their older counterparts observed in the later dataset. 1. Factors influencing the occupational injuries of physical therapists in Taiwan: A hierarchical linear model approach. Science.gov (United States) Tao, Yu-Hui; Wu, Yu-Lung; Huang, Wan-Yun 2017-01-01 The evidence literature suggests that physical therapy practitioners are subjected to a high probability of acquiring work-related injuries, but only a few studies have specifically investigated Taiwanese physical therapy practitioners. This study was conducted to determine the relationships among individual and group hospital-level factors that contribute to the medical expenses for the occupational injuries of physical therapy practitioners in Taiwan. Physical therapy practitioners in Taiwan with occupational injuries were selected from the 2013 National Health Insurance Research Databases (NHIRD). The age, gender, job title, hospital attributes, and outpatient data of physical therapy practitioners who sustained an occupational injury in 2013 were obtained with SAS 9.3. SPSS 20.0 and HLM 7.01 were used to conduct descriptive and hierarchical linear model analyses, respectively.
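Several of the surrounding records rely on two-level hierarchical linear models; as a minimal sketch of such a model with a random group intercept, here is an example assuming statsmodels and pandas, with an invented grouping structure.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_groups, n_per = 30, 20
group = np.repeat(np.arange(n_groups), n_per)
group_effect = rng.normal(0, 1.0, n_groups)[group]   # level-2 (group) variation
x = rng.normal(size=n_groups * n_per)                # level-1 predictor
y = 1.0 + 0.6 * x + group_effect + rng.normal(0, 1.0, n_groups * n_per)

df = pd.DataFrame({"y": y, "x": x, "group": group})
# Random-intercept mixed model: y ~ x with a random effect per group
fit = smf.mixedlm("y ~ x", df, groups=df["group"]).fit()
print(fit.summary())  # fixed effect of x plus the estimated group-level variance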
The job title of physical therapy practitioners at the individual level and the hospital type at the group level exert positive effects on per person medical expenses. Hospital hierarchy moderates the individual-level relationships of age and job title with the per person medical expenses. Considering that age, job title, and hospital hierarchy affect medical expenses for the occupational injuries of physical therapy practitioners, we suggest strengthening related safety education and training and elevating the self-awareness of the risk of occupational injuries of physical therapy practitioners to reduce and prevent the occurrence of such injuries. 2. Assessing exposure to violence using multiple informants: application of hierarchical linear model. Science.gov (United States) Kuo, M; Mohler, B; Raudenbush, S L; Earls, F J 2000-11-01 The present study assesses the effects of demographic risk factors on children's exposure to violence (ETV) and how these effects vary by informants. Data on exposure to violence of 9-, 12-, and 15-year-olds were collected from both child participants (N = 1880) and parents (N = 1776), as part of the assessment of the Project on Human Development in Chicago Neighborhoods (PHDCN). A two-level hierarchical linear model (HLM) with multivariate outcomes was employed to analyze information obtained from these two different groups of informants. The findings indicate that parents generally report less ETV than do their children and that associations of age, gender, and parent education with ETV are stronger in the self-reports than in the parent reports. The findings support a multivariate approach when information obtained from different sources is being integrated. The application of HLM allows an assessment of interactions between risk factors and informants and uses all available data, including data from one informant when data from the other informant is missing. 3. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression. Science.gov (United States) Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo 2015-08-01 Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm. 4. Privacy-Preserving Distributed Linear Regression on High-Dimensional Data Directory of Open Access Journals (Sweden) 2017-10-01 Full Text Available We propose privacy-preserving protocols for computing linear regression models, in the setting where the training dataset is vertically distributed among several parties. 
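Returning to the sparse-regression comparison two entries above: here is a minimal scikit-learn sketch contrasting an l1-penalised linear model with orthogonal matching pursuit on a sparse recovery problem. These are standard analogous tools, not the brief's exact l1-norm SVR and sparse coding algorithms, and the data are synthetic.

import numpy as np
from sklearn.linear_model import Lasso, OrthogonalMatchingPursuit

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 50))
true_w = np.zeros(50)
true_w[[3, 17, 42]] = [1.5, -2.0, 0.8]   # only three informative features
y = X @ true_w + rng.normal(0, 0.1, 200)

lasso = Lasso(alpha=0.05).fit(X, y)
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3).fit(X, y)
print(np.flatnonzero(lasso.coef_))  # indices of the features the l1 fit selects
print(np.flatnonzero(omp.coef_))    # OMP should recover the same sparse support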
Our main contribution is a hybrid multi-party computation protocol that combines Yao's garbled circuits with tailored protocols for computing inner products. Like many machine learning tasks, building a linear regression model involves solving a system of linear equations. We conduct a comprehensive evaluation and comparison of different techniques for securely performing this task, including a new Conjugate Gradient Descent (CGD) algorithm. This algorithm is suitable for secure computation because it uses an efficient fixed-point representation of real numbers while maintaining accuracy and convergence rates comparable to what can be obtained with a classical solution using floating point numbers. Our technique improves on Nikolaenko et al.'s method for privacy-preserving ridge regression (S&P 2013), and can be used as a building block in other analyses. We implement a complete system and demonstrate that our approach is highly scalable, solving data analysis problems with one million records and one hundred features in less than one hour of total running time. 5. LINEAR REGRESSION MODEL ESTIMATION FOR RIGHT CENSORED DATA Directory of Open Access Journals (Sweden) Ersin Yılmaz 2016-05-01 Full Text Available In this study, we first define right-censored data. Briefly, right-censored data are observations whose values are censored above an exact threshold, which may be related to the limits of a measuring device. A linear regression model is then estimated for a response variable obtained from right-censored observations. Because of the censoring, Kaplan-Meier weights are used in the estimation of the model; with these weights the regression estimator is consistent and unbiased. A semiparametric regression method is also available for censored data and likewise gives useful results. This study may be particularly useful for health research, since censored data arise frequently in medical applications. 6. [Multiple linear regression analysis of X-ray measurement and WOMAC scores of knee osteoarthritis]. Science.gov (United States) Ma, Yu-Feng; Wang, Qing-Fu; Chen, Zhao-Jun; Du, Chun-Lin; Li, Jun-Hai; Huang, Hu; Shi, Zong-Ting; Yin, Yue-Shan; Zhang, Lei; A-Di, Li-Jiang; Dong, Shi-Yu; Wu, Ji 2012-05-01 To perform multiple linear regression analysis of X-ray measurements and WOMAC scores of knee osteoarthritis, and to analyze their relationship with clinical and biomechanical concepts. From March 2011 to July 2011, 140 patients (250 knees) were reviewed, including 132 left knees and 118 right knees; the patients ranged in age from 40 to 71 years, with an average of 54.68 years. The MB-RULER measurement software was applied to measure the femoral angle, tibial angle, femorotibial angle, and joint gap angle from antero-posterior and lateral X-ray views. The WOMAC scores were also collected. Multiple regression equations were then applied for the linear regression analysis of the correlation between the X-ray measurements and WOMAC scores. There was statistical significance in the regression equation of antero-posterior X-ray values and WOMAC scores (P<0.05), while the regression equation of lateral X-ray values and WOMAC scores was not statistically significant (P>0.05). 1) X-ray measurement of the knee joint can reflect the WOMAC scores to a certain extent. 2) It is necessary to measure the X-ray mechanical axis of the knee, which is important for diagnosis and treatment of osteoarthritis.
3) The correlation between the tibial angle and joint gap angle on antero-posterior X-rays and the WOMAC scores is significant, and can be used to assess the functional recovery of patients before and after treatment. 7. Using the fuzzy linear regression method to benchmark the energy efficiency of commercial buildings International Nuclear Information System (INIS) Chung, William 2012-01-01 Highlights: ► Fuzzy linear regression method is used for developing benchmarking systems. ► The systems can be used to benchmark energy efficiency of commercial buildings. ► The resulting benchmarking model can be used by public users. ► The resulting benchmarking model can capture the fuzzy nature of input–output data. -- Abstract: Benchmarking systems built from a sample of reference buildings need to be developed to conduct benchmarking processes for the energy efficiency of commercial buildings. However, not all benchmarking systems can be adopted by public users (i.e., other non-reference building owners) because of the different methods used in developing such systems. An approach for benchmarking the energy efficiency of commercial buildings using statistical regression analysis to normalize other factors, such as management performance, was developed in a previous work. However, the field data given by experts can be regarded as a distribution of possibility. Thus, the previous work may not be adequate to handle such fuzzy input-output data. Consequently, a number of fuzzy structures cannot be fully captured by statistical regression analysis. This paper proposes the use of fuzzy linear regression analysis to develop a benchmarking process, the resulting model of which can be used by public users. An illustrative example is given as well. 8. Evaluation of accuracy of linear regression models in predicting urban stormwater discharge characteristics. Science.gov (United States) 2014-06-01 9. Hippocampal atrophy and developmental regression as first sign of linear scleroderma "en coup de sabre". Science.gov (United States) Verhelst, Helene E; Beele, Hilde; Joos, Rik; Vanneuville, Benedicte; Van Coster, Rudy N 2008-11-01 An 8-year-old girl with linear scleroderma "en coup de sabre" is reported who, at preschool age, presented with intractable simple partial seizures more than 1 year before the skin lesions were first noticed. MRI revealed hippocampal atrophy, contralateral to the seizures and ipsilateral to the skin lesions. In the following months, a mental and motor regression was noticed. Cerebral CT scan showed multiple foci of calcifications in the affected hemisphere. In previously reported patients the skin lesions preceded the neurological signs. To the best of our knowledge, hippocampal atrophy has not previously been reported as the presenting sign of linear scleroderma. Linear scleroderma should be included in the differential diagnosis in patients with unilateral hippocampal atrophy even when the typical skin lesions are not present. 10. Computer software for linear and nonlinear regression in organic NMR; Programa de computador para regressao linear e nao linear em R.M.N. organica Energy Technology Data Exchange (ETDEWEB) Canto, Eduardo Leite do; Rittner, Roberto [Universidade Estadual de Campinas, SP (Brazil). Inst. de Quimica] 1992-12-31 Calculations involving two-variable linear regressions require specific procedures generally not familiar to chemists.
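This abstract continues below; as a minimal illustration of the two-variable linear regression it describes, here is a numpy sketch regressing a chemical shift on two substituent constants. All numbers are invented for illustration.

import numpy as np

# Invented dual-substituent data: shift = a*sigma_T + b*sigma_R + c + noise
sigma_T = np.array([0.00, 0.10, 0.23, 0.36, 0.45, 0.54])
sigma_R = np.array([0.00, -0.05, -0.15, 0.10, 0.20, 0.15])
shift = (1.2 * sigma_T + 0.6 * sigma_R + 7.10
         + np.random.default_rng(0).normal(0, 0.01, 6))

# Least-squares fit of the two-predictor linear model with an intercept
A = np.column_stack([sigma_T, sigma_R, np.ones_like(sigma_T)])
coef, residuals, rank, sv = np.linalg.lstsq(A, shift, rcond=None)
print(coef)  # recovered a, b, c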
To meet the need for fast and efficient handling of NMR data, a self-explanatory and PC-portable software package has been developed, which allows the user to produce and use diskette-recorded tables containing chemical shifts or any other substituent physical-chemical measurements and constants (σT, σ°R, Es, ...). 9 refs., 1 fig. 11. Significance tests to determine the direction of effects in linear regression models. Science.gov (United States) Wiedermann, Wolfgang; Hagmann, Michael; von Eye, Alexander 2015-02-01 Previous studies have discussed asymmetric interpretations of the Pearson correlation coefficient and have shown that higher moments can be used to decide on the direction of dependence in the bivariate linear regression setting. The current study extends this approach by illustrating that the third moment of regression residuals may also be used to derive conclusions concerning the direction of effects. Assuming non-normally distributed variables, it is shown that the distribution of residuals of the correctly specified regression model (e.g., Y is regressed on X) is more symmetric than the distribution of residuals of the competing model (i.e., X is regressed on Y). Based on this result, four one-sample tests are discussed which can be used to decide which variable is more likely to be the response and which one is more likely to be the explanatory variable. A fifth significance test is proposed based on the differences of skewness estimates, which leads to a more direct test of a hypothesis that is compatible with direction of dependence. A Monte Carlo simulation study was performed to examine the behaviour of the procedures under various degrees of association, sample sizes, and distributional properties of the underlying population. An empirical example is given which illustrates the application of the tests in practice. © 2014 The British Psychological Society. 12. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling. Science.gov (United States) Edelman, Eric R; van Kuijk, Sander M J; Hamaekers, Ankie E W; de Korte, Marcel J M; van Merode, Godefridus G; Buhre, Wolfgang F F A 2017-01-01 For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 to 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT).
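The abstract continues below; as a minimal illustration of the contrast it draws, here is a sketch comparing the fixed-ratio rule (TPT = 1.33 x eSCT) with a linear regression that also uses categorical case information, assuming scikit-learn and pandas. The records are fabricated stand-ins, not the Dutch benchmarking data.

import numpy as np
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(6)
n = 1000
df = pd.DataFrame({
    "eSCT": rng.uniform(30, 240, n),                    # minutes
    "asa": rng.integers(1, 4, n).astype(str),
    "anesthesia": rng.choice(["general", "regional"], n),
})
# Synthetic "true" TPT: SCT plus an ACT that varies with ASA class and technique
act = 20 + 5 * df["asa"].astype(int) + np.where(df["anesthesia"] == "general", 15, 5)
tpt = df["eSCT"] + act + rng.normal(0, 10, n)

fixed_ratio_pred = 1.33 * df["eSCT"]
model = make_pipeline(
    make_column_transformer((OneHotEncoder(), ["asa", "anesthesia"]),
                            remainder="passthrough"),
    LinearRegression(),
)
model.fit(df, tpt)
print(mean_absolute_error(tpt, fixed_ratio_pred))    # fixed-ratio error
print(mean_absolute_error(tpt, model.predict(df)))   # regression error (smaller here)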
We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity-related benefits. 13. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling Directory of Open Access Journals (Sweden) Eric R. Edelman 2017-06-01 Full Text Available For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 to 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity-related benefits. 14. Bivariate least squares linear regression: Towards a unified analytic formalism. I. Functional models Science.gov (United States) Caimmi, R. 2011-08-01 Concerning bivariate least squares linear regression, the classical approach pursued for functional models in earlier attempts (York, 1966, 1969) is reviewed using a new formalism in terms of deviation (matrix) traces which, for unweighted data, reduce to usual quantities leaving aside an unessential (but dimensional) multiplicative factor.
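Among the functional-model subcases this entry goes on to discuss is reduced major-axis (geometric-mean) regression; as a minimal plain-numpy sketch of that estimator, with illustrative data:

import numpy as np

def rma(x, y):
    # Reduced major-axis slope: sign of the correlation times the ratio of SDs
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

rng = np.random.default_rng(7)
x = rng.normal(5, 2, 100)
y = 1.2 * x + 0.3 + rng.normal(0, 0.5, 100)
print(rma(x, y))  # slope close to 1.2 when scatter in y is modest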
Within the framework of classical error models, the dependent variable relates to the independent variable according to the usual additive model. The classes of linear models considered are regression lines in the general case of correlated errors in X and in Y for weighted data, and in the opposite limiting situations of (i) uncorrelated errors in X and in Y, and (ii) completely correlated errors in X and in Y. The special case of (C) generalized orthogonal regression is considered in detail together with well known subcases, namely: (Y) errors in X negligible (ideally null) with respect to errors in Y; (X) errors in Y negligible (ideally null) with respect to errors in X; (O) genuine orthogonal regression; (R) reduced major-axis regression. In the limit of unweighted data, the results determined for functional models are compared with their counterparts related to extreme structural models, i.e. where the instrumental scatter is negligible (ideally null) with respect to the intrinsic scatter (Isobe et al., 1990; Feigelson and Babu, 1992). While regression line slope and intercept estimators for functional and structural models necessarily coincide, the contrary holds for related variance estimators even if the residuals obey a Gaussian distribution, with the exception of Y models. An example of astronomical application is considered, concerning the [O/H]-[Fe/H] empirical relations deduced from five samples related to different stars and/or different methods of oxygen abundance determination. For selected samples and assigned methods, different regression models yield consistent results within the errors (∓σ) for both 15. Improvement of Storm Forecasts Using Gridded Bayesian Linear Regression for Northeast United States Science.gov (United States) Yang, J.; Astitha, M.; Schwartz, C. S. 2017-12-01 Bayesian linear regression (BLR) is a post-processing technique in which regression coefficients are derived and used to correct raw forecasts based on pairs of observation-model values. This study presents the development and application of a gridded Bayesian linear regression (GBLR) as a new post-processing technique to improve numerical weather prediction (NWP) of rain and wind storm forecasts over the northeast United States. Ten controlled variables produced from ten ensemble members of the National Center for Atmospheric Research (NCAR) real-time prediction system are used for a GBLR model. In the GBLR framework, leave-one-storm-out cross-validation is utilized to study the performances of the post-processing technique in a database composed of 92 storms. To estimate the regression coefficients of the GBLR, optimization procedures that minimize the systematic and random error of predicted atmospheric variables (wind speed, precipitation, etc.) are implemented for the modeled-observed pairs of training storms. The regression coefficients calculated for meteorological stations of the National Weather Service are interpolated back to the model domain. An analysis of forecast improvements based on error reductions during the storms will demonstrate the value of the GBLR approach. This presentation will also illustrate how the variances are optimized for the training partition in GBLR and discuss the verification strategy for grid points where no observations are available. The new post-processing technique is successful in improving wind speed and precipitation storm forecasts using past event-based data and has the potential to be implemented in real-time. 16.
Building a new predictor for multiple linear regression technique-based corrective maintenance turnaround time. Science.gov (United States) Cruz, Antonio M; Barr, Cameron; Puñales-Pozo, Elsa 2008-01-01 This research's main goals were to build a predictor for a turnaround time (TAT) indicator for estimating its values, and to use a numerical clustering technique for finding possible causes of undesirable TAT values. The following stages were used: domain understanding, data characterisation and sample reduction, and insight characterisation. Multiple linear regression and clustering techniques were used for building the TAT indicator predictor and for improving corrective maintenance task efficiency in a clinical engineering department (CED). The indicator being studied was turnaround time (TAT). Multiple linear regression was used for building a predictive TAT value model. The variables contributing to such a model were clinical engineering department response time (CE(rt), 0.415 positive coefficient), stock service response time (Stock(rt), 0.734 positive coefficient), priority level (0.21 positive coefficient) and service time (0.06 positive coefficient). The regression process showed heavy reliance on Stock(rt), CE(rt) and priority, in that order. Clustering techniques revealed the main causes of high TAT values. This examination has provided a means for analysing current technical service quality and effectiveness. In doing so, it has demonstrated a process for identifying areas and methods of improvement and a model against which to analyse these methods' effectiveness. 17. Linear and evolutionary polynomial regression models to forecast coastal dynamics: Comparison and reliability assessment Science.gov (United States) Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe 2018-01-01 In this paper, the Evolutionary Polynomial Regression data modelling strategy has been applied to study small scale, short-term coastal morphodynamics, given its capability to treat a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied, to achieve a balance between the computational load and the reliability of estimations across the three models. In fact, even though it is easy to imagine that the more complex the model, the more the prediction improves, sometimes a "slight" worsening of estimations can be accepted in exchange for the time saved in data organization and computational load. The models' outcomes were validated through a detailed statistical error analysis, which revealed a slightly better estimation by the polynomial model with respect to the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and a faster run time. Finally, the most reliable evolutionary polynomial regression model was used in order to make some conjectures about the increase in uncertainty with the extension of the extrapolation time of the estimation. The overlapping rate between the confidence band of the mean of the known coast position and the prediction band of the estimated position can be a good index of the weakness in producing reliable estimations when the extrapolation time increases too much. The proposed models and tests have been applied to a coastal sector located near Torre Colimena in the Apulia region, south Italy. 18.
A Linear Regression Model for Global Solar Radiation on Horizontal Surfaces at Warri, Nigeria Directory of Open Access Journals (Sweden) Michael S. Okundamiya 2013-10-01 Full Text Available The growing anxiety over the negative effects of fossil fuels on the environment and the global emission reduction targets call for a more extensive use of renewable energy alternatives. Efficient solar energy utilization is an essential solution to the high atmospheric pollution caused by fossil fuel combustion. Global solar radiation (GSR) data, which are useful for the design and evaluation of solar energy conversion systems, are not measured at the forty-five meteorological stations in Nigeria. The dearth of measured solar radiation data calls for accurate estimation. This study proposes a temperature-based linear regression for predicting the monthly average daily GSR on horizontal surfaces at Warri (latitude 5.02°N and longitude 7.88°E), an oil city located in the south-south geopolitical zone of Nigeria. The proposed model is analyzed based on five statistical indicators (coefficient of correlation, coefficient of determination, mean bias error, root mean square error, and t-statistic), and compared with the existing sunshine-based model for the same study. The results indicate that the proposed temperature-based linear regression model could replace the existing sunshine-based model for generating global solar radiation data. Keywords: air temperature; empirical model; global solar radiation; regression analysis; renewable energy; Warri 20. Research on the multiple linear regression in non-invasive blood glucose measurement. Science.gov (United States) Zhu, Jianming; Chen, Zhencheng 2015-01-01 A non-invasive blood glucose measurement sensor and the data processing algorithm based on the metabolic energy conservation (MEC) method are presented in this paper. The physiological parameters of the human fingertip can be measured by various sensing modalities, and the blood glucose value can be evaluated from the physiological parameters by multiple linear regression analysis.
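The record continues below with a comparison of enter, remove, forward, backward and stepwise selection in multiple linear regression; as a minimal illustration of the backward elimination strategy it reports as best, here is a sketch assuming statsmodels and pandas, on synthetic data with an illustrative 0.05 threshold.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 120
df = pd.DataFrame(rng.normal(size=(n, 5)), columns=[f"x{i}" for i in range(5)])
y = 2.0 + 1.5 * df["x0"] - 0.8 * df["x2"] + rng.normal(0, 1.0, n)

cols = list(df.columns)
while True:
    fit = sm.OLS(y, sm.add_constant(df[cols])).fit()
    pvals = fit.pvalues.drop("const")
    worst = pvals.idxmax()
    if pvals[worst] < 0.05:   # stop once every remaining predictor is significant
        break
    cols.remove(worst)        # drop the least significant predictor and refit
print(cols)                   # should retain x0 and x2
print(fit.params)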
1. Generating linear regression model to predict motor functions by use of laser range finder during TUG.
Science.gov (United States)
Adachi, Daiki; Nishiguchi, Shu; Fukutani, Naoto; Hotta, Takayuki; Tashiro, Yuto; Morino, Saori; Shirooka, Hidehiko; Nozaki, Yuma; Hirata, Hinako; Yamaguchi, Moe; Yorozu, Ayanori; Takahashi, Masaki; Aoyama, Tomoki
2017-05-01
The purpose of this study was to investigate which spatial and temporal parameters of the Timed Up and Go (TUG) test are associated with motor function in elderly individuals. This study included 99 community-dwelling women aged 72.9 ± 6.3 years. Step length, step width, single support time, variability of the aforementioned parameters, gait velocity, cadence, reaction time from starting signal to first step, and minimum distance between the foot and a marker placed 3 m in front of the chair were measured using our analysis system. The 10-m walk test, five times sit-to-stand (FTSTS) test, and one-leg standing (OLS) test were used to assess motor function. Stepwise multivariate linear regression analysis was used to determine which TUG test parameters were associated with each motor function test. Finally, we calculated a predictive model for each motor function test using each regression coefficient. In stepwise linear regression analysis, step length and cadence were significantly associated with the 10-m walk test, FTSTS and OLS test. Reaction time was associated with the FTSTS test, and step width was associated with the OLS test. Each predictive model showed a strong correlation with the 10-m walk test and OLS test (P … for each motor function test). Moreover, the TUG test time, regarded as reflecting lower extremity function and mobility, had strong predictive ability in each motor function test. Copyright © 2017 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.

2. Partitioning of late gestation energy expenditure in ewes using indirect calorimetry and a linear regression approach
DEFF Research Database (Denmark)
Kiani, Alishir; Chwalibog, André; Nielsen, Mette O
2007-01-01
Late gestation energy expenditure (EE(gest)) originates from energy expenditure (EE) of development of conceptus (EE(conceptus)) and EE of homeorhetic adaptation of metabolism (EE(homeorhetic)). Even though EE(gest) is relatively easy to quantify, its partitioning is problematic. In the present study, metabolizable energy (ME) intake ranges for twin-bearing ewes were 220-440, 350-700 and 350-900 kJ per metabolic body weight (W^0.75) at weeks seven, five and two pre-partum, respectively. Indirect calorimetry and a linear regression approach were used to quantify EE(gest) and then partition it into EE(conceptus) and EE(homeorhetic). Energy expenditure of basal metabolism of the non-gravid tissues (EE(bmng)), derived from the intercept of the linear regression equation of retained energy [kJ/W^0.75] on ME intake [kJ/W^0.75], was 298 [kJ/W^0.75]. Values of the intercepts of the regression equations at week seven…
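A sketch of the intercept-based partitioning used in the ewe study above. The data are synthetic, generated to roughly reproduce the reported intercept of 298 kJ/W^0.75; the actual trial design is far richer than this.

```python
import numpy as np

# Invented ewe data: ME intake and retained energy, both in kJ/W^0.75.
rng = np.random.default_rng(1)
me_intake = np.array([220, 280, 350, 420, 500, 580, 660, 740], dtype=float)
retained = 0.42 * me_intake - 298 + rng.normal(0, 12, me_intake.size)

# np.polyfit returns (slope, intercept) for a degree-1 fit.
slope, intercept = np.polyfit(me_intake, retained, deg=1)

# At zero ME intake, retained energy equals minus the energy expended,
# so the negative intercept estimates basal expenditure (~298 kJ/W^0.75).
print("estimated EE(bmng):", -intercept)
```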
3. Robust best linear estimation for regression analysis using surrogate and instrumental variables.
Science.gov (United States)
Wang, C Y
2012-04-01
We investigate methods for regression analysis when covariates are measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies the classical measurement error model, but it may not have repeated measurements. In addition to the surrogate variables that are available among the subjects in the calibration sample, we assume that there is an instrumental variable (IV) that is available for all study subjects. An IV is correlated with the unobserved true exposure variable and hence can be useful in the estimation of the regression coefficients. We propose a robust best linear estimator that uses all the available data, which is the most efficient among a class of consistent estimators. The proposed estimator is shown to be consistent and asymptotically normal under very weak distributional assumptions. For Poisson or linear regression, the proposed estimator is consistent even if the measurement error from the surrogate or IV is heteroscedastic. Finite-sample performance of the proposed estimator is examined and compared with other estimators via intensive simulation studies. The proposed method and other methods are applied to a bladder cancer case-control study.

4. Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.
Science.gov (United States)
Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko
2016-03-01
In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique. Copyright © 2015 Elsevier Ltd. All rights reserved.

5. Distributed Monitoring of the R² Statistic for Linear Regression
Science.gov (United States)
Bhaduri, Kanishka; Das, Kamalika; Giannella, Chris R.
2011-01-01
The problem of monitoring a multivariate linear regression model is relevant in studying the evolving relationship between a set of input variables (features) and one or more dependent target variables. This problem becomes challenging for large scale data in a distributed computing environment when only a subset of instances is available at individual nodes and the local data changes frequently. Data centralization and periodic model recomputation can add high overhead to tasks like anomaly detection in such dynamic settings. Therefore, the goal is to develop techniques for monitoring and updating the model over the union of all nodes' data in a communication-efficient fashion. Correctness guarantees on such techniques are also often highly desirable, especially in safety-critical application scenarios. In this paper we develop DReMo, a distributed algorithm with very low resource overhead, for monitoring the quality of a regression model in terms of its coefficient of determination (R² statistic). When the nodes collectively determine that R² has dropped below a fixed threshold, the linear regression model is recomputed via a network-wide convergecast and the updated model is broadcast back to all nodes. We show empirically, using both synthetic and real data, that our proposed method is highly communication-efficient and scalable, and also provide theoretical guarantees on correctness.
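The monitoring rule at the heart of DReMo can be caricatured centrally in a few lines; the actual algorithm is distributed and communication-efficient, which this sketch makes no attempt to reproduce, and the 0.8 threshold is arbitrary.

```python
import numpy as np

def r_squared(y, yhat):
    """Coefficient of determination of predictions yhat against targets y."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def needs_recompute(X_batch, y_batch, coef, intercept, threshold=0.8):
    """Flag a model rebuild when fit quality drops below the threshold."""
    yhat = X_batch @ coef + intercept
    return r_squared(y_batch, yhat) < threshold

# Invented usage: a drifting data stream eventually triggers the flag.
rng = np.random.default_rng(0)
coef, intercept = np.array([2.0, -1.0]), 0.5
for drift in (0.0, 0.5, 2.0):
    X = rng.normal(size=(200, 2))
    y = X @ (coef + drift) + intercept + rng.normal(0, 0.3, 200)
    print(f"drift={drift}: recompute?", needs_recompute(X, y, coef, intercept))
```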
6. Single Image Super-Resolution Using Global Regression Based on Multiple Local Linear Mappings.
Science.gov (United States)
Choi, Jae-Seok; Kim, Munchurl
2017-03-01
Super-resolution (SR) has become more vital because of its capability to generate high-quality ultra-high definition (UHD) high-resolution (HR) images from low-resolution (LR) input images. Conventional SR methods entail high computational complexity, which makes them difficult to implement for up-scaling of full-high-definition input images into UHD-resolution images. Nevertheless, our previous super-interpolation (SI) method showed a good compromise between Peak-Signal-to-Noise Ratio (PSNR) performance and computational complexity. However, since SI only utilizes simple linear mappings, it may fail to precisely reconstruct HR patches with complex texture. In this paper, we present a novel SR method, which inherits the large-to-small patch conversion scheme from SI but uses global regression based on local linear mappings (GLM). Thus, our new SR method is called GLM-SI. In GLM-SI, each LR input patch is divided into 25 overlapped subpatches. Next, based on the local properties of these subpatches, 25 different local linear mappings are applied to the current LR input patch to generate 25 HR patch candidates, which are then regressed into one final HR patch using a global regressor. The local linear mappings are learned cluster-wise in our off-line training phase. The main contribution of this paper is as follows: previously, linear-mapping-based conventional SR methods, including SI, used only one simple yet coarse linear mapping per patch to reconstruct its HR version. On the contrary, for each LR input patch, our GLM-SI is the first to apply a combination of multiple local linear mappings, where each local linear mapping is found according to local properties of the current LR patch. Therefore, it can better approximate nonlinear LR-to-HR mappings for HR patches with complex texture. Experiment results show that the proposed GLM-SI method outperforms most of the state-of-the-art methods, and shows comparable PSNR performance with much lower…

7. An evaluation of bias in propensity score-adjusted non-linear regression models.
Science.gov (United States)
Wan, Fei; Mitra, Nandita
2018-03-01
Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies.
One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not of the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.

8. A note on the use of multiple linear regression in molecular ecology.
Science.gov (United States)
Frasier, Timothy R
2016-03-01
Multiple linear regression analyses (also often referred to as generalized linear models--GLMs, or generalized linear mixed models--GLMMs) are widely used in the analysis of data in molecular ecology, often to assess the relative effects of genetic characteristics on individual fitness or traits, or how environmental characteristics influence patterns of genetic differentiation. However, the coefficients resulting from multiple regression analyses are sometimes misinterpreted, which can lead to incorrect interpretations and conclusions within individual studies, and can propagate to more widespread errors in the general understanding of a topic. The primary issue revolves around the interpretation of coefficients for independent variables when interaction terms are also included in the analyses. In this scenario, the coefficients associated with each independent variable are often interpreted as the independent effect of each predictor variable on the predicted variable. However, this interpretation is incorrect. The correct interpretation is that these coefficients represent the effect of each predictor variable on the predicted variable when all other predictor variables are zero. This difference may sound subtle, but the ramifications cannot be overstated. Here, my goals are to raise awareness of this issue, to demonstrate and emphasize the problems that can result, and to provide alternative approaches for obtaining the desired information. © 2015 John Wiley & Sons Ltd.

9. Weighted functional linear regression models for gene-based association analysis.
Science.gov (United States)
Belonogova, Nadezhda M; Svishcheva, Gulnara R; Wilson, James F; Campbell, Harry; Axenovich, Tatiana I
2018-01-01
Functional linear regression models are effectively used in gene-based association analysis of complex traits. These models combine information about individual genetic variants, taking into account their positions and reducing the influence of noise and/or observation errors. To increase the power of methods where several differently informative components are combined, weights are introduced to give the advantage to more informative components. Allele-specific weights have been introduced to collapsing and kernel-based approaches to gene-based association analysis. Here we have for the first time introduced weights to functional linear regression models adapted for both independent and family samples. Using data simulated on the basis of GAW17 genotypes and weights defined by allele frequencies via the beta distribution, we demonstrated that type I errors correspond to declared values and that increasing the weights of causal variants allows the power of functional linear models to be increased. We applied the new method to real data on blood pressure from the ORCADES sample. Five of the six known genes with P … were detected by the models. Moreover, we found an association between diastolic blood pressure and the VMP1 gene (P = 8.18×10⁻⁶) when we used a weighted functional model. For this gene, the unweighted functional and weighted kernel-based models had P = 0.004 and 0.006, respectively. The new method has been implemented in the program package FREGAT, which is freely available at https://cran.r-project.org/web/packages/FREGAT/index.html.
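Entry 9's allele-frequency weights can be sketched as follows. The Beta(1, 25) shape parameters are a common convention borrowed from kernel-based association tests and are an assumption here; FREGAT's defaults may differ.

```python
import numpy as np
from scipy.stats import beta

# Minor allele frequencies for variants in one gene (invented values).
maf = np.array([0.002, 0.01, 0.05, 0.20, 0.35])

# Beta(1, 25) density up-weights rare variants relative to common ones.
weights = beta.pdf(maf, 1, 25)

print("raw weights       :", np.round(weights, 3))
print("normalised weights:", np.round(weights / weights.sum(), 3))
```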
10. Time-Frequency Analysis of Non-Stationary Biological Signals with Sparse Linear Regression Based Fourier Linear Combiner
Directory of Open Access Journals (Sweden)
Yubo Wang
2017-06-01
Full Text Available
It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates the usage of time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyze the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to the earlier designs, we first identified the sparsity imposed on the signal model in order to reformulate the model into a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance, and the results show that the proposed Sparse-BMFLC method has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), continuous wavelet transform (CWT) and BMFLC Kalman smoother. Furthermore, the proposed method provides an overall improvement of 6.22% in reconstruction error.

11. Time-Frequency Analysis of Non-Stationary Biological Signals with Sparse Linear Regression Based Fourier Linear Combiner.
Science.gov (United States)
Wang, Yubo; Veluvolu, Kalyana C
2017-06-14
It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates the usage of time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyze the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to the earlier designs, we first identified the sparsity imposed on the signal model in order to reformulate the model into a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance, and the results show that the proposed Sparse-BMFLC method has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), continuous wavelet transform (CWT) and BMFLC Kalman smoother. Furthermore, the proposed method provides an overall improvement of 6.22% in reconstruction error.
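A rough stand-in for the sparse BMFLC formulation in entries 10-11: build a band-limited sine/cosine dictionary on a fixed frequency grid and fit it with a sparse linear regression. The Lasso is used here as a generic convex solver; the paper's own convex program, the grid spacing and the penalty level are all assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Two-tone test signal sampled at 200 Hz for 2 s.
t = np.arange(0, 2, 1 / 200.0)
signal = np.sin(2 * np.pi * 7 * t) + 0.5 * np.sin(2 * np.pi * 11 * t)

# Band-limited dictionary: sine and cosine columns for each grid frequency.
freqs = np.arange(4.0, 16.5, 0.5)
X = np.hstack([np.sin(2 * np.pi * freqs * t[:, None]),
               np.cos(2 * np.pi * freqs * t[:, None])])

# Sparse linear regression keeps only the few active frequency components.
fit = Lasso(alpha=0.01, max_iter=50_000).fit(X, signal)
active = freqs[np.abs(fit.coef_[:len(freqs)]) > 1e-3]
print("active sine components near (Hz):", active)
```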
12. Predictive Ability of Pender's Health Promotion Model for Physical Activity and Exercise in People with Spinal Cord Injuries: A Hierarchical Regression Analysis
Science.gov (United States)
Keegan, John P.; Chan, Fong; Ditchman, Nicole; Chiu, Chung-Yi
2012-01-01
The main objective of this study was to validate Pender's Health Promotion Model (HPM) as a motivational model for exercise/physical activity self-management for people with spinal cord injuries (SCIs). A quantitative descriptive research design using hierarchical regression analysis (HRA) was used. A total of 126 individuals with SCI were recruited…

13. Adaptive Linear and Normalized Combination of Radial Basis Function Networks for Function Approximation and Regression
Directory of Open Access Journals (Sweden)
Yunfeng Wu
2014-01-01
Full Text Available
This paper presents a novel adaptive linear and normalized combination (ALNC) method that can be used to combine the component radial basis function networks (RBFNs) to implement better function approximation and regression tasks. The optimization of the fusion weights is obtained by solving a constrained quadratic programming problem. According to the instantaneous errors generated by the component RBFNs, the ALNC is able to perform the selective ensemble of multiple learners by adaptively adjusting the fusion weights from one instance to another. The results of the experiments on eight synthetic function approximation and six benchmark regression data sets show that the ALNC method can effectively help the ensemble system achieve a higher accuracy (measured in terms of mean-squared error) and better fidelity (characterized by the normalized correlation coefficient) of approximation, in relation to the popular simple average, weighted average, and Bagging methods.

14. Linear and support vector regressions based on geometrical correlation of data
Directory of Open Access Journals (Sweden)
Kaijun Wang
2007-10-01
Full Text Available
Linear regression (LR) and support vector regression (SVR) are widely used in data analysis. Geometrical correlation learning (GcLearn) was proposed recently to improve the predictive ability of LR and SVR through mining and using correlations between data of a variable (inner correlation). This paper theoretically analyzes the prediction performance of the GcLearn method and proves that GcLearn LR and SVR will have better prediction performance than traditional LR and SVR for prediction tasks when good inner correlations are obtained and predictions by traditional LR and SVR are far away from their neighbor training data under inner correlation. This gives the applicable condition of the GcLearn method.

15. Radioligand assays - methods and applications. IV. Uniform regression of hyperbolic and linear radioimmunoassay calibration curves
Energy Technology Data Exchange (ETDEWEB)
Keilacker, H; Becker, G; Ziegler, M; Gottschling, H D [Zentralinstitut fuer Diabetes, Karlsburg (German Democratic Republic)]
1980-10-01
In order to handle all types of radioimmunoassay (RIA) calibration curves obtained in the authors' laboratory in the same way, they tried to find a non-linear expression for their regression which allows calibration curves with different degrees of curvature to be fitted. Considering the two boundary cases of the incubation protocol they derived a hyperbolic inverse regression function: x = a₁y + a₀ + a₋₁y⁻¹, where x is the total concentration of antigen, the aᵢ are constants, and y is the specifically bound radioactivity. An RIA evaluation procedure based on this function is described, providing a fitted inverse RIA calibration curve and some statistical quality parameters. The latter are of an order which is normal for RIA systems. There is an excellent agreement between fitted and experimentally obtained calibration curves having a different degree of curvature.
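Because the hyperbolic inverse calibration function in entry 15 is linear in (y, 1, 1/y), its three constants can be recovered with ordinary least squares. The calibration points below are invented.

```python
import numpy as np

# Invented calibration points: bound radioactivity y versus total antigen x.
y = np.array([5200.0, 3900.0, 2900.0, 2100.0, 1500.0, 1050.0])
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])

# x = a1*y + a0 + a(-1)/y is linear in the regressors (y, 1, 1/y).
A = np.column_stack([y, np.ones_like(y), 1.0 / y])
(a1, a0, am1), *_ = np.linalg.lstsq(A, x, rcond=None)

def estimate_x(y_new):
    """Inverse calibration: antigen concentration from bound activity."""
    return a1 * y_new + a0 + am1 / y_new

print(estimate_x(2500.0))
```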
16. A computer tool for a minimax criterion in binary response and heteroscedastic simple linear regression models.
Science.gov (United States)
Casero-Alonso, V; López-Fidalgo, J; Torsney, B
2017-01-01
Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

17. Healthcare Expenditures Associated with Depression Among Individuals with Osteoarthritis: Post-Regression Linear Decomposition Approach.
Science.gov (United States)
Agarwal, Parul; Sambamoorthi, Usha
2015-12-01
Depression is common among individuals with osteoarthritis and leads to increased healthcare burden. The objective of this study was to examine excess total healthcare expenditures associated with depression among individuals with osteoarthritis in the US. Adults with self-reported osteoarthritis (n = 1881) were identified using data from the 2010 Medical Expenditure Panel Survey (MEPS). Among those with osteoarthritis, chi-square tests and ordinary least squares (OLS) regressions were used to examine differences in healthcare expenditures between those with and without depression. A post-regression linear decomposition technique was used to estimate the relative contribution of different constructs of Anderson's behavioral model, i.e., predisposing, enabling, need, personal healthcare practices, and external environment factors, to the excess expenditures associated with depression among individuals with osteoarthritis. All analyses accounted for the complex survey design of MEPS. Depression coexisted among 20.6% of adults with osteoarthritis. The average total healthcare expenditures were $13,684 among adults with depression compared to $9284 among those without depression. Multivariable OLS regression revealed that adults with depression had 38.8% higher healthcare expenditures (p …). Post-regression linear decomposition analysis indicated that 50% of the difference in expenditures between adults with and without depression can be explained by differences in need factors. Among individuals with coexisting osteoarthritis and depression, excess healthcare expenditures associated with depression were mainly due to comorbid anxiety, chronic conditions and poor health status. These expenditures may potentially be reduced by providing timely intervention for need factors or by providing care under a collaborative care model.
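The flavour of a post-regression linear decomposition can be conveyed with an Oaxaca-style sketch: the share of the expenditure gap explained by "need factors" is the pooled coefficients times the group difference in covariate means. All data and variable names below are invented, and the published MEPS analysis is considerably more elaborate.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "depressed": rng.integers(0, 2, n),
    "anxiety": rng.integers(0, 2, n),       # invented need factors
    "n_chronic": rng.poisson(2, n),
})
df["expend"] = (6000 + 2500 * df["depressed"] + 1800 * df["anxiety"]
                + 900 * df["n_chronic"] + rng.normal(0, 800, n))

need = ["anxiety", "n_chronic"]
beta = sm.OLS(df["expend"], sm.add_constant(df[need])).fit().params

# Gap in mean expenditures, and the part explained by need factors.
gap = (df.loc[df.depressed == 1, "expend"].mean()
       - df.loc[df.depressed == 0, "expend"].mean())
xdiff = (df.loc[df.depressed == 1, need].mean()
         - df.loc[df.depressed == 0, need].mean())
explained = float((beta[need] * xdiff).sum())

print(f"share of the gap explained by need factors: {explained / gap:.2f}")
```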
18. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.
Science.gov (United States)
Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-04-01
To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32 D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28 D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
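A minimal statsmodels analogue of the mixed-effects approach recommended in the tutorial (which uses SAS): a random intercept per subject absorbs the inter-eye correlation. The data are simulated with an invented 0.15 D effect.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 100
subj = np.repeat(np.arange(n), 2)          # two eyes per person
cnv = np.tile([1, 0], n)                   # affected eye vs fellow eye
u = rng.normal(0, 0.8, n)[subj]            # shared person-level effect
refraction = 0.15 * cnv + u + rng.normal(0, 0.5, 2 * n)

df = pd.DataFrame({"subject": subj, "cnv": cnv, "refraction": refraction})

# Random intercept per subject accounts for inter-eye correlation.
fit = smf.mixedlm("refraction ~ cnv", df, groups=df["subject"]).fit()
print(fit.params["cnv"], fit.bse["cnv"])
```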
19. Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method
Science.gov (United States)
2018-04-01
In this paper, we propose a new face hallucination technique: face image reconstruction in HSV color space with a semi-orthogonal multilinear principal component analysis method. This novel hallucination technique can perform directly from tensors via tensor-to-vector projection by imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test our hallucination approach, which is demonstrated by extensive experiments with high-quality hallucinated color faces. The experimental results clearly demonstrate that we can generate photorealistic color face images by using the SO-MPCA subspace with a linear regression model.

20. Estimating integrated variance in the presence of microstructure noise using linear regression
Science.gov (United States)
2017-07-01
Using financial high-frequency data for estimation of integrated variance of asset prices is beneficial, but with an increasing number of observations so-called microstructure noise occurs. This noise can significantly bias the realized variance estimator. We propose a method for estimation of the integrated variance robust to microstructure noise, as well as for testing the presence of the noise. Our method utilizes linear regression in which realized variances estimated from different data subsamples act as the dependent variable, while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.

1. Application of genetic algorithm - multiple linear regressions to predict the activity of RSK inhibitors
Directory of Open Access Journals (Sweden)
Avval Zhila Mohajeri
2015-01-01
Full Text Available
This paper deals with developing a linear quantitative structure-activity relationship (QSAR) model for predicting the RSK inhibition activity of some new compounds. A dataset consisting of 62 pyrazino [1,2-α] indole, diazepino [1,2-α] indole, and imidazole derivatives with known inhibitory activities was used. The multiple linear regressions (MLR) technique combined with the stepwise (SW) and the genetic algorithm (GA) methods as variable selection tools was employed. To further check the stability, robustness and predictability of the proposed models, internal and external validation techniques were used. Comparison of the results obtained indicates that the GA-MLR model is superior to the SW-MLR model and that it is applicable for designing novel RSK inhibitors.

2. Estimating traffic volume on Wyoming low volume roads using linear and logistic regression methods
Directory of Open Access Journals (Sweden)
Dick Apronti
2016-12-01

3. Introduction to statistical modelling 2: categorical variables and interactions in linear regression.
Science.gov (United States)
Lunt, Mark
2015-07-01

4. Method validation using weighted linear regression models for quantification of UV filters in water samples.
Science.gov (United States)
da Silva, Claudia Pereira; Emídio, Elissandro Soares; de Marchi, Mary Rosa Rodrigues
2015-01-01
This paper describes the validation of a method consisting of solid-phase extraction followed by gas chromatography-tandem mass spectrometry for the analysis of the ultraviolet (UV) filters benzophenone-3, ethylhexyl salicylate, ethylhexyl methoxycinnamate and octocrylene. The method validation criteria included evaluation of selectivity, analytical curve, trueness, precision, limits of detection and limits of quantification. The non-weighted linear regression model has traditionally been used for calibration, but it is not necessarily the optimal model in all cases. Because the assumption of homoscedasticity was not met for the analytical data in this work, a weighted least squares linear regression was used for the calibration method. The evaluated analytical parameters were satisfactory for the analytes and showed recoveries at four fortification levels between 62% and 107%, with relative standard deviations less than 14%. The detection limits ranged from 7.6 to 24.1 ng L⁻¹. The proposed method was used to determine the amount of UV filters in water samples from water treatment plants in Araraquara and Jau in São Paulo, Brazil. Copyright © 2014 Elsevier B.V. All rights reserved.
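A small sketch of the weighted-calibration idea in entry 4: when the response variance grows with concentration, weight each calibration point accordingly. The 1/x² weights are a common empirical choice and an assumption here, not necessarily what the authors used.

```python
import numpy as np
import statsmodels.api as sm

# Invented calibration data with concentration-proportional noise (ng/L).
rng = np.random.default_rng(4)
conc = np.array([10, 25, 50, 100, 250, 500, 1000], dtype=float)
resp = 0.04 * conc * (1 + rng.normal(0, 0.05, conc.size))

# Weighted least squares: weights are inverse variances; 1/x^2 is a
# common empirical proxy when the relative error is roughly constant.
w = 1.0 / conc**2
fit = sm.WLS(resp, sm.add_constant(conc), weights=w).fit()
print(fit.params)   # intercept and slope of the calibration line
```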
5. Reduction of interferences in graphite furnace atomic absorption spectrometry by multiple linear regression modelling
Science.gov (United States)
Grotti, Marco; Abelmoschi, Maria Luisa; Soggia, Francesco; Tiberiade, Christian; Frache, Roberto
2000-12-01
The multivariate effects of Na, K, Mg and Ca as nitrates on the electrothermal atomisation of manganese, cadmium and iron were studied by multiple linear regression modelling. Since the models proved to efficiently predict the effects of the considered matrix elements in a wide range of concentrations, they were applied to correct the interferences occurring in the determination of trace elements in seawater after pre-concentration of the analytes. In order to obtain a statistically significant number of samples, a large volume of the certified seawater reference materials CASS-3 and NASS-3 was treated with Chelex-100 resin; then, the chelating resin was separated from the solution and divided into several sub-samples, each of which was eluted with nitric acid and analysed by electrothermal atomic absorption spectrometry (for trace element determinations) and inductively coupled plasma optical emission spectrometry (for matrix element determinations). To minimise any other systematic error besides that due to matrix effects, accuracy of the pre-concentration step and contamination levels of the procedure were checked by inductively coupled plasma mass spectrometric measurements. Analytical results obtained by applying the multiple linear regression models were compared with those obtained with other calibration methods, such as external calibration using acid-based standards, external calibration using matrix-matched standards and the analyte addition technique. The empirical models proved to efficiently reduce interferences occurring in the analysis of real samples, allowing an improvement of accuracy better than for other calibration methods.

6. A Feature-Free 30-Disease Pathological Brain Detection System by Linear Regression Classifier.
Science.gov (United States)
Chen, Yi; Shao, Ying; Yan, Jie; Yuan, Ti-Fei; Qu, Yanwen; Lee, Elizabeth; Wang, Shuihua
2017-01-01
The number of Alzheimer's disease patients is increasing rapidly every year, and scholars often use computer vision methods to develop automatic diagnosis systems. In 2015, Gorji et al. proposed a novel method using pseudo Zernike moments. They tested four classifiers: a learning vector quantization neural network and pattern recognition neural networks trained by Levenberg-Marquardt, by resilient backpropagation, and by scaled conjugate gradient. This study presents an improved method by introducing a relatively new classifier: linear regression classification. Our method selects one axial slice from the 3D brain image and employs pseudo Zernike moments with maximum order of 15 to extract 256 features from each image. Finally, linear regression classification is harnessed as the classifier. The proposed approach obtains an accuracy of 97.51%, a sensitivity of 96.71%, and a specificity of 97.73%. Our method performs better than Gorji's approach and five other state-of-the-art approaches. Therefore, it can be used to detect Alzheimer's disease. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
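Linear regression classification, as referenced in entry 6, represents a probe as a least-squares combination of each class's training samples and assigns the class with the smallest reconstruction residual. The sketch below omits the pseudo-Zernike feature extraction and uses random toy data in its place.

```python
import numpy as np

def lrc_predict(sample, class_blocks):
    """Linear regression classification: pick the class whose training
    samples reconstruct the probe with the smallest residual."""
    best_label, best_err = None, np.inf
    for label, Xc in class_blocks.items():      # Xc: (n_features, n_train_c)
        coef, *_ = np.linalg.lstsq(Xc, sample, rcond=None)
        err = np.linalg.norm(sample - Xc @ coef)
        if err < best_err:
            best_label, best_err = label, err
    return best_label

# Invented toy data: two classes, 256-dimensional feature vectors.
rng = np.random.default_rng(1)
blocks = {c: rng.normal(loc=c, size=(256, 10)) for c in (0, 1)}
probe = blocks[1][:, :5].mean(axis=1) + rng.normal(0, 0.1, 256)
print(lrc_predict(probe, blocks))   # expected: 1
```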
7. Linear Regression between CIE-Lab Color Parameters and Organic Matter in Soils of Tea Plantations
Science.gov (United States)
Chen, Yonggen; Zhang, Min; Fan, Dongmei; Fan, Kai; Wang, Xiaochang
2018-02-01
To quantify the relationship between soil organic matter and color parameters using the CIE-Lab system, 62 soil samples (0-10 cm, Ferralic Acrisols) from tea plantations were collected from southern China. After air-drying and sieving, numerical color information and reflectance spectra of the soil samples were measured under laboratory conditions using an UltraScan VIS (HunterLab) spectrophotometer equipped with CIE-Lab color models. We found that soil total organic carbon (TOC) and nitrogen (TN) contents were negatively correlated with the L* value (lightness) (r = -0.84 and -0.80, respectively), the a* value (r = -0.51 and -0.46, respectively) and the b* value (r = -0.76 and -0.70, respectively). There were also linear regressions between TOC and TN contents and the L* and b* values. The results showed that color parameters from a spectrophotometer equipped with CIE-Lab color models can predict TOC contents well for soils in tea plantations. The linear regression model between color values and soil organic carbon contents shows that it can be used as a rapid, cost-effective method to evaluate the content of soil organic matter in Chinese tea plantations.

8. Multivariate linear regression of high-dimensional fMRI data with multiple target variables.
Science.gov (United States)
Valente, Giancarlo; Castellanos, Agustin Lage; Vanacore, Gianluca; Formisano, Elia
2014-05-01
Multivariate regression is increasingly used to study the relation between fMRI spatial activation patterns and experimental stimuli or behavioral ratings. With linear models, informative brain locations are identified by mapping the model coefficients. This is a central aspect in neuroimaging, as it provides the sought-after link between the activity of neuronal populations and subjects' perception, cognition or behavior. Here, we show that mapping of informative brain locations using multivariate linear regression (MLR) may lead to incorrect conclusions and interpretations. MLR algorithms for high dimensional data are designed to deal with targets (stimuli or behavioral ratings, in fMRI) separately, and the predictive map of a model integrates information deriving from both neural activity patterns and experimental design. Not accounting explicitly for the presence of other targets whose associated activity spatially overlaps with the one of interest may lead to predictive maps of troublesome interpretation. We propose a new model that can correctly identify the spatial patterns associated with a target while achieving good generalization.
For each target, the training is based on an augmented dataset, which includes all remaining targets. The estimation on such datasets produces both maps and interaction coefficients, which are then used to generalize. The proposed formulation is independent of the regression algorithm employed. We validate this model on simulated fMRI data and on a publicly available dataset. Results indicate that our method achieves high spatial sensitivity and good generalization and that it helps disentangle specific neural effects from interaction with predictive maps associated with other targets. Copyright © 2013 Wiley Periodicals, Inc.

9. Two-Sample Tests for High-Dimensional Linear Regression with an Application to Detecting Interactions.
Science.gov (United States)
Xia, Yin; Cai, Tianxi; Cai, T Tony
2018-01-01
Motivated by applications in genomics, we consider in this paper global and multiple testing for the comparisons of two high-dimensional linear regression models. A procedure for testing the equality of the two regression vectors globally is proposed and shown to be particularly powerful against sparse alternatives. We then introduce a multiple testing procedure for identifying unequal coordinates while controlling the false discovery rate and false discovery proportion. Theoretical justifications are provided to guarantee the validity of the proposed tests, and optimality results are established under sparsity assumptions on the regression coefficients. The proposed testing procedures are easy to implement. Numerical properties of the procedures are investigated through simulation and data analysis. The results show that the proposed tests maintain the desired error rates under the null and have good power under the alternative at moderate sample sizes. The procedures are applied to the Framingham Offspring study to investigate the interactions between smoking and cardiovascular-related genetic mutations important for an inflammation marker.

10. Synthesis of linear regression coefficients by recovering the within-study covariance matrix from summary statistics.
Science.gov (United States)
Yoneoka, Daisuke; Henmi, Masayuki
2017-06-01
Recently, the number of regression models has dramatically increased in several academic fields. However, within the context of meta-analysis, synthesis methods for such models have not been developed in a commensurate trend. One of the difficulties hindering the development is the disparity in sets of covariates among literature models. If the sets of covariates differ across models, interpretation of coefficients will differ, thereby making it difficult to synthesize them. Moreover, previous synthesis methods for regression models, such as multivariate meta-analysis, often have problems because the covariance matrix of coefficients (i.e. within-study correlations) or individual patient data are not necessarily available. This study, therefore, proposes a brief explanation regarding a method to synthesize linear regression models under different covariate sets by using a generalized least squares method involving bias correction terms. Especially, we also propose an approach to recover (at most) three correlations of covariates, which are required for the calculation of the bias term without individual patient data. Copyright © 2016 John Wiley & Sons, Ltd.

11. Soil moisture estimation using multi linear regression with terraSAR-X data
Directory of Open Access Journals (Sweden)
G. García
2016-06-01
Full Text Available
The first five centimeters of soil form an interface where the main heat flux exchanges between the land surface and the atmosphere occur. Besides ground measurements, remote sensing has proven to be an excellent tool for the monitoring of spatially and temporally distributed data of the most relevant Earth surface parameters, including soil parameters. Indeed, active microwave sensors (Synthetic Aperture Radar, SAR) offer the opportunity to monitor soil moisture (HS) at global, regional and local scales by monitoring the involved processes. Several inversion algorithms that derive geophysical information such as HS from SAR data have been developed. Many of them use electromagnetic models for simulating the backscattering coefficient and are based on statistical techniques, such as neural networks, inversion methods and regression models. Recent studies have shown that simple multiple regression techniques yield satisfactory results. The geophysical variables involved in these methodologies describe the soil structure, microwave characteristics and land use. Therefore, in this paper we aim at developing a multiple linear regression model to estimate HS on flat agricultural regions using TerraSAR-X satellite data and data from a ground weather station. The results show that the backscatter, the precipitation and the relative humidity are the explanatory variables of HS. The results obtained presented an RMSE of 5.4 and an R² of about 0.6…
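Entry 11's regression can be sketched directly: soil moisture as a linear function of backscatter, precipitation and relative humidity. All numbers below are invented placeholders, not TerraSAR-X or weather-station values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented samples: backscatter (dB), rainfall (mm), relative humidity (%).
X = np.array([[-12.1,  0.0, 55.0],
              [-10.4,  6.0, 71.0],
              [ -9.0, 14.0, 83.0],
              [-11.2,  2.0, 60.0],
              [ -8.1, 20.0, 90.0]])
hs = np.array([11.0, 19.0, 27.0, 14.0, 33.0])   # volumetric soil moisture (%)

model = LinearRegression().fit(X, hs)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("prediction  :", model.predict([[-9.5, 10.0, 78.0]]))
```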
12. Uncertainty of pesticide residue concentration determined from ordinary and weighted linear regression curve.
Science.gov (United States)
Yolci Omeroglu, Perihan; Ambrus, Árpad; Boyacioglu, Dilek
2018-03-28
Determination of pesticide residues is based on calibration curves constructed for each batch of analysis. Calibration standard solutions are prepared from a known amount of reference material at different concentration levels covering the concentration range of the analyte in the analysed samples. In the scope of this study, the applicability of both ordinary linear and weighted linear regression (OLR and WLR) for pesticide residue analysis was investigated. We used 782 multipoint calibration curves obtained for 72 different analytical batches with high-pressure liquid chromatography equipped with an ultraviolet detector, and gas chromatography with electron capture, nitrogen phosphorus or mass spectrometric detectors. Quality criteria of the linear curves, including the regression coefficient, standard deviation of relative residuals and deviation of back-calculated concentrations, were calculated both for WLR and OLR methods. Moreover, the relative uncertainty of the predicted analyte concentration was estimated for both methods. It was concluded that the calibration curve based on WLR complies with all the quality criteria set by international guidelines compared to those calculated with OLR. It means that all the data fit well with WLR for pesticide residue analysis. It was estimated that, regardless of the actual concentration range of the calibration, relative uncertainty at the lowest calibrated level ranged between 0.3% and 113.7% for OLR and between 0.2% and 22.1% for WLR. At or above 1/3 of the calibrated range, the uncertainty of the calibration curve ranged between 0.1% and 16.3% for OLR and 0% and 12.2% for WLR, and therefore the two methods gave comparable results.

13. Heteroscedasticity as a Basis of Direction Dependence in Reversible Linear Regression Models.
Science.gov (United States)
Wiedermann, Wolfgang; Artner, Richard; von Eye, Alexander
2017-01-01
Heteroscedasticity is a well-known issue in linear regression modeling. When heteroscedasticity is observed, researchers are advised to remedy possible model misspecification of the explanatory part of the model (e.g., considering alternative functional forms and/or omitted variables). The present contribution discusses another source of heteroscedasticity in observational data: directional model misspecification in the case of nonnormal variables. Directional misspecification refers to situations where alternative models are equally likely to explain the data-generating process (e.g., x → y versus y → x). It is shown that the homoscedasticity assumption is likely to be violated in models that erroneously treat true nonnormal predictors as response variables. Recently, Direction Dependence Analysis (DDA) has been proposed as a framework to empirically evaluate the direction of effects in linear models. The present study links the phenomenon of heteroscedasticity with DDA and describes visual diagnostics and nine homoscedasticity tests that can be used to make decisions concerning the direction of effects in linear models. Results of a Monte Carlo simulation that demonstrate the adequacy of the approach are presented. An empirical example is provided, and applicability of the methodology in cases of violated assumptions is discussed.

14. A land use regression model for ambient ultrafine particles in Montreal, Canada: A comparison of linear regression and a machine learning approach.
Science.gov (United States)
Weichenthal, Scott; Ryswyk, Keith Van; Goldstein, Alon; Bagg, Scott; Shekkarizfard, Maryam; Hatzopoulou, Marianne
2016-04-01
Existing evidence suggests that ambient ultrafine particles (UFPs) … We developed a land use regression model for UFPs in Montreal, Canada using mobile monitoring data collected from 414 road segments during the summer and winter months between 2011 and 2012. Two different approaches were examined for model development, including standard multivariable linear regression and a machine learning approach (kernel-based regularized least squares, KRLS) that learns the functional form of covariate impacts on ambient UFP concentrations from the data. The final models included parameters for population density, ambient temperature and wind speed, land use parameters (park space and open space), length of local roads and rail, and estimated annual average NOx emissions from traffic. The final multivariable linear regression model explained 62% of the spatial variation in ambient UFP concentrations, whereas the KRLS model explained 79% of the variance. The KRLS model performed slightly better than the linear regression model when evaluated using an external dataset (R² = 0.58 vs. 0.55) or a cross-validation procedure (R² = 0.67 vs. 0.60). In general, our findings suggest that the KRLS approach may offer modest improvements in predictive performance compared to standard multivariable linear regression models used to estimate spatial variations in ambient UFPs. However, differences in predictive performance were not statistically significant when evaluated using the cross-validation procedure. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
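sklearn's KernelRidge is a convenient, if inexact, stand-in for the kernel-based regularized least squares (KRLS) approach compared against linear regression in entry 14; the data, kernel and hyperparameters below are arbitrary.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Invented nonlinear surface to mimic spatial UFP variation.
rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] ** 2 + rng.normal(0, 0.2, 300)

lin = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
krr = cross_val_score(KernelRidge(kernel="rbf", alpha=1.0, gamma=0.3),
                      X, y, cv=5, scoring="r2")
print("linear CV R^2:", lin.mean(), " kernel CV R^2:", krr.mean())
```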
15. Estimating leaf photosynthetic pigments information by stepwise multiple linear regression analysis and a leaf optical model
Science.gov (United States)
Liu, Pudong; Shi, Runhe; Wang, Hong; Bai, Kaixu; Gao, Wei
2014-10-01
Leaf pigments are key elements for plant photosynthesis and growth. Traditional manual sampling of these pigments is labor-intensive and costly, and it also has difficulty in capturing their temporal and spatial characteristics. The aim of this work is to estimate photosynthetic pigments at large scale by remote sensing. For this purpose, inverse models were proposed with the aid of stepwise multiple linear regression (SMLR) analysis. Furthermore, a leaf radiative transfer model (the PROSPECT model) was employed to simulate leaf reflectance with wavelength varying from 400 to 780 nm at 1 nm intervals, and these values were then treated as data from remote sensing observations. Meanwhile, simulated chlorophyll concentration (Cab), carotenoid concentration (Car) and their ratio (Cab/Car) were taken as targets to build the respective regression models. In this study, a total of 4000 samples were simulated via PROSPECT with different Cab, Car and leaf mesophyll structures, with 70% of these samples applied for training and the remaining 30% for model validation. Reflectance (r) and its mathematical transformations (1/r and log(1/r)) were each employed to build regression models. Results showed fair agreement between pigments and simulated reflectance, with all adjusted coefficients of determination (R²) larger than 0.8 when six wavebands were selected for the SMLR model. The largest values of R² for Cab, Car and Cab/Car were 0.8845, 0.876 and 0.8765, respectively. Meanwhile, mathematical transformations of reflectance showed little influence on regression accuracy. We concluded that it is feasible to estimate chlorophyll, carotenoids and their ratio with a statistical model based on leaf reflectance data.

16. Causal correlation of foliar biochemical concentrations with AVIRIS spectra using forced entry linear regression
Science.gov (United States)
Dawson, Terence P.; Curran, Paul J.; Kupiec, John A.
1995-01-01
… link between wavelengths chosen by stepwise regression and the biochemical of interest, and this in turn has cast doubts on the use of imaging spectrometry for the estimation of foliar biochemical concentrations at sites distant from the training sites. To investigate this problem, an analysis was conducted on the variation in canopy biochemical concentrations and reflectance spectra using forced entry linear regression.

17. Boosted regression trees, multivariate adaptive regression splines and their two-step combinations with multiple linear regression or partial least squares to predict blood-brain barrier passage: a case study.
Science.gov (United States)
Deconinck, E; Zhang, M H; Petitet, F; Dubus, E; Ijjaali, I; Coomans, D; Vander Heyden, Y
2008-02-18
The use of some unconventional non-linear modeling techniques, i.e. classification and regression trees and multivariate adaptive regression splines-based methods, was explored to model the blood-brain barrier (BBB) passage of drugs and drug-like molecules. The data set contains BBB passage values for 299 structurally and pharmacologically diverse drugs, originating from a structured knowledge-based database.
Models were built using boosted regression trees (BRT) and multivariate adaptive regression splines (MARS), as well as their respective combinations with stepwise multiple linear regression (MLR) and partial least squares (PLS) regression in two-step approaches. The best models were obtained using combinations of MARS with either stepwise MLR or PLS. It could be concluded that the use of combinations of a linear with a non-linear modeling technique results in some improved properties compared to the individual linear and non-linear models and that, when the use of such a combination is appropriate, combinations using MARS as the non-linear technique should be preferred over those with BRT, due to some serious drawbacks of the BRT approaches.

18. Daily Suspended Sediment Discharge Prediction Using Multiple Linear Regression and Artificial Neural Network
Science.gov (United States)
Uca; Toriman, Ekhwan; Jaafar, Othman; Maru, Rosmini; Arfan, Amal; Saleh Ahmar, Ansari
2018-01-01
Prediction of suspended sediment discharge in a catchment area is very important because it can be used to evaluate the erosion hazard, manage water resources and water quality, manage hydrology projects (dams, reservoirs, and irrigation) and determine the extent of the damage that has occurred in the catchment. Multiple linear regression analysis and artificial neural networks can be used to predict the amount of daily suspended sediment discharge. Regression analysis used the least squares method, whereas the artificial neural networks used radial basis function (RBF) and feedforward multilayer perceptrons with three learning algorithms, namely Levenberg-Marquardt (LM), scaled conjugate gradient (SCG) and Broyden-Fletcher-Goldfarb-Shanno quasi-Newton (BFGS). The number of neurons in the hidden layer was three to sixteen, while the output layer had only one neuron because there was only one output target. The mean absolute error (MAE), root mean square error (RMSE), coefficient of determination (R²) and coefficient of efficiency (CE) were evaluated; for the multiple linear regression (MLRg), Model 2 (six independent input variables) had the lowest MAE and RMSE (0.0000002 and 13.6039) and the highest R² and CE (0.9971 and 0.9971). When LM, SCG and RBF were compared, the BFGS model with structure 3-7-1 was the more accurate for predicting suspended sediment discharge in the Jenderam catchment. Its performance in the testing process gave the smallest MAE and RMSE (13.5769 and 17.9011) and the highest R² and CE (0.9999 and 0.9998) compared with the other BFGS quasi-Newton structures (6-3-1, 9-10-1 and 12-12-1). Based on the performance statistics, MLRg, LM, SCG, BFGS and RBF are suitable and accurate for prediction, modeling the non-linear complex behavior of suspended sediment responses to rainfall, water depth and discharge. In the comparison between the artificial neural networks (ANN) and MLRg, the MLRg Model 2 was accurate for predicting suspended sediment discharge (kg …

19. Carbon 13 nuclear magnetic resonance chemical shifts empiric calculations of polymers by multi linear regression and molecular modeling
International Nuclear Information System (INIS)
Da Silva Pinto, P.S.; Eustache, R.P.; Audenaert, M.; Bernassau, J.M.
1996-01-01
This work deals with carbon 13 nuclear magnetic resonance chemical shifts empiric calculations by multi linear regression and molecular modeling.
The multi linear regression is indeed one way to obtain an equation able to describe the behaviour of the chemical shift for some molecules which are in the data base (rigid molecules with carbons). The methodology consists of defining structure-descriptor parameters that can be related to the known carbon-13 chemical shifts of these molecules. Linear regression is then used to determine the significant parameters of the equation, which can be extrapolated to molecules that present some resemblance with those of the data base. (O.L.). 20 refs., 4 figs., 1 tab.

20. Analysis of the Covered Electrode Welding Process Stability on the Basis of Linear Regression Equation
Directory of Open Access Journals (Sweden)
Słania J.
2014-10-01
Full Text Available
The article presents the process of production of coated electrodes and their welding properties. The factors concerning the welding properties and the currently applied method of assessment are given. The methodology of the testing, based on the measuring and recording of instantaneous values of welding current and welding arc voltage, is discussed. An algorithm for creation of the reference data base of the expert system is shown, aiding the assessment of covered electrodes' welding properties. The stability of voltage-current characteristics is discussed. Statistical factors of instantaneous values of welding current and welding arc voltage waveforms used for determining the welding process stability are presented. The results of coated electrodes' welding properties are compared. The article presents the results of linear regression as well as the impact of the independent variables on the welding process performance. Finally the conclusions drawn from the research are given.

1. Generalized Partially Linear Regression with Misclassified Data and an Application to Labour Market Transitions
DEFF Research Database (Denmark)
Dlugosz, Stephan; Mammen, Enno; Wilke, Ralf
2017-01-01
Large data sets that originate from administrative or operational activity are increasingly used for statistical analysis as they often contain very precise information and a large number of observations. But there is evidence that some variables can be subject to severe misclassification or contain missing values. Given the size of the data, a flexible semiparametric misclassification model would be a good choice, but their use in practice is scarce. To close this gap, a semiparametric model for the probability of observing labour market transitions is estimated using a sample of 20 m… observations from Germany. It is shown that estimated marginal effects of a number of covariates are sizeably affected by misclassification and missing values in the analysis data. The proposed generalized partially linear regression extends existing models by allowing a misclassified discrete covariate…

2. hMuLab: A Biomedical Hybrid MUlti-LABel Classifier Based on Multiple Linear Regression.
Science.gov (United States)
Wang, Pu; Ge, Ruiquan; Xiao, Xuan; Zhou, Manli; Zhou, Fengfeng
2017-01-01
Many biomedical classification problems are multi-label by nature, e.g., a gene involved in a variety of functions and a patient with multiple diseases. The majority of existing classification algorithms assume each sample has only one class label, and the multi-label classification problem remains a challenge for biomedical researchers. This study proposes a novel multi-label learning algorithm, hMuLab, by integrating both feature-based and neighbor-based similarity scores.
The multiple linear regression modeling techniques make hMuLab capable of producing multiple label assignments for a query sample. The comparison results over six commonly-used multi-label performance measurements suggest that hMuLab performs accurately and stably for the biomedical datasets, and may serve as a complement to the existing literature.

3. Linear regression models and k-means clustering for statistical analysis of fNIRS data. Science.gov (United States) Bonomini, Viola; Zucchelli, Lucia; Re, Rebecca; Ieva, Francesca; Spinelli, Lorenzo; Contini, Davide; Paganoni, Anna; Torricelli, Alessandro 2015-02-01 We propose a new algorithm, based on a linear regression model, to statistically estimate the hemodynamic activations in fNIRS data sets. The main concern guiding the algorithm development was the minimization of assumptions and approximations made on the data set for the application of statistical tests. Further, we propose a k-means method to cluster fNIRS data (i.e. channels) as activated or not activated. The methods were validated on both simulated and in vivo fNIRS data. A time domain (TD) fNIRS technique was preferred because of its high performance in discriminating cortical activation and superficial physiological changes. However, the proposed method is also applicable to continuous wave or frequency domain fNIRS data sets.

4. Multiple Linear Regression Model Based on Neural Network and Its Application in the MBR Simulation. Directory of Open Access Journals (Sweden) Chunqing Li 2012-01-01 Computer simulation of the membrane bioreactor (MBR) has become a research focus. In order to compensate for defects such as the long test period, high cost and sealed, invisible equipment, and on the basis of an in-depth study of the mathematical model of the MBR combined with neural network theory, this paper proposes a three-dimensional simulation system for MBR wastewater treatment with fast speed, high efficiency, and good visualization. The system is researched and developed with hybrid programming in the VC++ programming language and OpenGL, with a multifactor linear regression model of the factors affecting MBR membrane fluxes based on a neural network, applying a modeling method of integers instead of floats and quad-tree recursion. The experiments show that the three-dimensional simulation system, using the above models and methods, provides inspiration and a reference for future research into and application of MBR simulation technology.

5. Railway Crossing Risk Area Detection Using Linear Regression and Terrain Drop Compensation Techniques. Science.gov (United States) Chen, Wen-Yuan; Wang, Mei; Fu, Zhou-Xing 2014-01-01 Most railway accidents happen at railway crossings. Therefore, how to detect humans or objects present in the risk area of a railway crossing and thus prevent accidents are important tasks. In this paper, three strategies are used to detect the risk area of a railway crossing: (1) we use a terrain drop compensation (TDC) technique to solve the problem of the concavity of railway crossings; (2) we use a linear regression technique to predict the position and length of an object from image processing (see the sketch below); (3) we have developed a novel strategy called calculating local maximum Y-coordinate object points (CLMYOP) to obtain the ground points of the object. In addition, image preprocessing is also applied to filter out the noise and successfully improve the object detection.
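As a hedged sketch of the linear-regression step in strategy (2) above (the paper's actual feature extraction differs; all pixel coordinates here are invented):

```python
# Fit a line to an object's detected boundary pixels by ordinary least squares,
# then use the fitted line to estimate the object's position and length.
import numpy as np

# (x, y) image coordinates of detected object-edge pixels (hypothetical values)
x = np.array([12, 15, 19, 24, 28, 33, 37, 41], dtype=float)
y = np.array([55, 57, 60, 64, 67, 71, 74, 78], dtype=float)

# Ordinary least-squares fit y = a*x + b
a, b = np.polyfit(x, y, deg=1)

# Predicted vertical positions along the fitted line
y_hat = a * x + b

# Estimated object length as the Euclidean extent along the fitted line
length = np.hypot(x[-1] - x[0], y_hat[-1] - y_hat[0])
print(f"slope={a:.3f}, intercept={b:.3f}, estimated length={length:.1f} px")
```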
From the experimental results, it is demonstrated that our scheme is an effective and corrective method for the detection of railway crossing risk areas. PMID:24936948

6. Railway Crossing Risk Area Detection Using Linear Regression and Terrain Drop Compensation Techniques. Directory of Open Access Journals (Sweden) Wen-Yuan Chen 2014-06-01 Most railway accidents happen at railway crossings. Therefore, how to detect humans or objects present in the risk area of a railway crossing and thus prevent accidents are important tasks. In this paper, three strategies are used to detect the risk area of a railway crossing: (1) we use a terrain drop compensation (TDC) technique to solve the problem of the concavity of railway crossings; (2) we use a linear regression technique to predict the position and length of an object from image processing; (3) we have developed a novel strategy called calculating local maximum Y-coordinate object points (CLMYOP) to obtain the ground points of the object. In addition, image preprocessing is also applied to filter out the noise and successfully improve the object detection. From the experimental results, it is demonstrated that our scheme is an effective and corrective method for the detection of railway crossing risk areas.

7. Predicting Fuel Ignition Quality Using 1H NMR Spectroscopy and Multiple Linear Regression. KAUST Repository Abdul Jameel, Abdul Gani 2016-09-14 An improved model for the prediction of ignition quality of hydrocarbon fuels has been developed using 1H nuclear magnetic resonance (NMR) spectroscopy and multiple linear regression (MLR) modeling. Cetane number (CN) and derived cetane number (DCN) of 71 pure hydrocarbons and 54 hydrocarbon blends were utilized as a data set to study the relationship between ignition quality and molecular structure. CN and DCN are functional equivalents and are collectively referred to as D/CN herein. The effect of molecular weight and weight percent of structural parameters such as paraffinic CH3 groups, paraffinic CH2 groups, paraffinic CH groups, olefinic CH-CH2 groups, naphthenic CH-CH2 groups, and aromatic C-CH groups on D/CN was studied. Particular emphasis was placed on the effect of branching (i.e., methyl substitution) on D/CN, and a new parameter, denoted the branching index (BI), was introduced to quantify this effect. A new formula was developed to calculate the BI of hydrocarbon fuels using 1H NMR spectroscopy. Multiple linear regression (MLR) modeling was used to develop an empirical relationship between D/CN and the eight structural parameters. This was then used to predict the DCN of many hydrocarbon fuels. The developed model has a high correlation coefficient (R2 = 0.97) and was validated with the experimentally measured DCN of twenty-two real fuel mixtures (e.g., gasolines and diesels) and fifty-nine blends of known composition, and the predicted values matched well with the experimental data.

8. Standardizing effect size from linear regression models with log-transformed variables for meta-analysis. Science.gov (United States) Rodríguez-Barranco, Miguel; Tobías, Aurelio; Redondo, Daniel; Molina-Portillo, Elena; Sánchez, María José 2017-03-17 Meta-analysis is very useful to summarize the effect of a treatment or a risk factor for a given disease. Often studies report results based on log-transformed variables in order to achieve the principal assumptions of a linear regression model. If this is the case for some, but not all, studies, the effects need to be homogenized (a worked illustration of such conversions follows below).
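The entry above does not reproduce its derived formulae. As a hedged illustration of the kind of transformation involved, the following sketch implements the standard textbook conversions between slopes from log-transformed linear models and relative-change effects (function names and example values are ours, not the paper's):

```python
# Standard conversions between regression slopes and relative-change effects.
import math

def loglinear_pct_change(beta: float, dx: float = 1.0) -> float:
    """log(y) ~ x: percent change in y for an absolute increase dx in x."""
    return 100.0 * (math.exp(beta * dx) - 1.0)

def linearlog_abs_change(beta: float, pct_x: float = 10.0) -> float:
    """y ~ log(x): absolute change in y for a pct_x percent increase in x."""
    return beta * math.log(1.0 + pct_x / 100.0)

def loglog_pct_change(beta: float, pct_x: float = 10.0) -> float:
    """log(y) ~ log(x): percent change in y for a pct_x percent increase in x."""
    return 100.0 * ((1.0 + pct_x / 100.0) ** beta - 1.0)

# Example: a slope of 0.05 from a log(y)-on-x model is about a 5.1% increase per unit x
print(loglinear_pct_change(0.05))          # 5.127...
print(linearlog_abs_change(2.0, 10.0))     # 0.1906...
print(loglog_pct_change(0.3, 10.0))        # 2.899...
```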
We derived a set of formulae to transform absolute changes into relative ones, and vice versa, to allow all results to be included in a meta-analysis. We applied our procedure to all possible combinations of log-transformed independent or dependent variables. We also evaluated it in a simulation based on two variables, either normally or asymmetrically distributed. In all the scenarios, and based on different change criteria, the effect size estimated by the derived set of formulae was equivalent to the real effect size. To avoid biased estimates of the effect, this procedure should be used with caution in the case of independent variables with asymmetric distributions that differ significantly from the normal distribution. We illustrate the procedure with an application to a meta-analysis on the potential effects on neurodevelopment in children exposed to arsenic and manganese. The proposed procedure has been shown to be valid and capable of expressing the effect size of a linear regression model based on different change criteria in the variables. Homogenizing the results from different studies beforehand allows them to be combined in a meta-analysis, independently of whether the transformations had been performed on the dependent and/or independent variables.

9. Multiple linear combination (MLC) regression tests for common variants adapted to linkage disequilibrium structure. Science.gov (United States) Yoo, Yun Joo; Sun, Lei; Poirier, Julia G; Paterson, Andrew D; Bull, Shelley B 2017-02-01 By jointly analyzing multiple variants within a gene, instead of one at a time, gene-based multiple regression can improve power, robustness, and interpretation in genetic association analysis. We investigate multiple linear combination (MLC) test statistics for analysis of common variants under realistic trait models with linkage disequilibrium (LD) based on HapMap Asian haplotypes. MLC is a directional test that exploits LD structure in a gene to construct clusters of closely correlated variants recoded such that the majority of pairwise correlations are positive. It combines variant effects within the same cluster linearly, and aggregates cluster-specific effects in a quadratic sum of squares and cross-products, producing a test statistic with reduced degrees of freedom (df) equal to the number of clusters. By simulation studies of 1000 genes from across the genome, we demonstrate that MLC is a well-powered and robust choice among existing methods across a broad range of gene structures. Compared to minimum P-value, variance-component, and principal-component methods, the mean power of MLC is never much lower than that of other methods, and can be higher, particularly with multiple causal variants. Moreover, the variation in gene-specific MLC test size and power across 1000 genes is less than that of other methods, suggesting it is a complementary approach for discovery in genome-wide analysis. The cluster construction of the MLC test statistics helps reveal within-gene LD structure, allowing interpretation of clustered variants as haplotypic effects, while multiple regression helps to distinguish direct and indirect associations. © 2016 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.

10. A SOCIOLOGICAL ANALYSIS OF THE CHILDBEARING COEFFICIENT IN THE ALTAI REGION BASED ON METHOD OF FUZZY LINEAR REGRESSION. Directory of Open Access Journals (Sweden) 2017-06-01 Purpose.
Construction of a mathematical model of the dynamics of change in childbearing in the Altai region in 2000-2016, and analysis of the dynamics of changes in birth rates for multiple age categories of women of childbearing age. Methodology. An auxiliary element of the analysis is the construction of linear mathematical models of the dynamics of childbearing using the fuzzy linear regression method based on fuzzy numbers. Fuzzy linear regression is considered as an alternative to standard statistical linear regression for short time series with an unknown distribution law. The parameters of the fuzzy linear and standard statistical regressions for the childbearing time series were determined using an algorithm implemented in MatLab. The method of fuzzy linear regression has not previously been used in sociological research. Results. Conclusions are drawn about the socio-demographic changes in society, the high efficiency of the demographic policy of the leadership of the region and the country, and the applicability of the method of fuzzy linear regression for sociological analysis.

11. Direct integral linear least square regression method for kinetic evaluation of hepatobiliary scintigraphy. International Nuclear Information System (INIS) Shuke, Noriyuki 1991-01-01 In hepatobiliary scintigraphy, kinetic model analysis, which provides kinetic parameters such as hepatic extraction or excretion rate, has been used for quantitative evaluation of liver function. In this analysis, unknown model parameters are usually determined using the nonlinear least squares regression method (NLS method), which requires iterative calculation and initial estimates for the unknown parameters. As a simple alternative to the NLS method, the direct integral linear least squares regression method (DILS method), which can determine model parameters by a simple calculation without initial estimates, is proposed, and its applicability to the analysis of hepatobiliary scintigraphy was tested. In order to see whether the DILS method could determine model parameters as well as the NLS method, and to determine an appropriate weight for the DILS method, simulated theoretical data based on prefixed parameters were fitted to a one-compartment model using both the DILS method with various weightings and the NLS method. The parameter values obtained were then compared with the prefixed values used for data generation. The effect of various weights on the error of the parameter estimates was examined, and the inverse of time was found to be the best weight for minimizing the error. When using this weight, the DILS method gave parameter values close to those obtained by the NLS method, and both sets of parameter values were very close to the prefixed values. With appropriate weighting, the DILS method could provide reliable parameter estimates that are relatively insensitive to data noise. In conclusion, the DILS method can be used as a simple alternative to the NLS method, providing reliable parameter estimates. (author)
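The abstract above does not spell out the estimator. As a hedged sketch of the general idea behind direct integral least squares, the following fits a one-compartment model dC/dt = k1·Cin(t) - k2·C(t) by integrating both sides (so the model becomes linear in k1 and k2) and solving a weighted least-squares problem with the 1/t weights the author found best. All data, curves and names here are illustrative:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Hypothetical sampling times, input (blood) curve and tissue curve
t = np.linspace(0.5, 30.0, 60)            # minutes; start > 0 so 1/t weights are finite
cin = np.exp(-0.2 * t)                     # input function (illustrative)
k1_true, k2_true = 0.8, 0.15

# Simulate the tissue curve for dC/dt = k1*cin - k2*C (simple Euler, illustrative)
c = np.zeros_like(t)
for i in range(1, len(t)):
    dt = t[i] - t[i - 1]
    c[i] = c[i - 1] + dt * (k1_true * cin[i - 1] - k2_true * c[i - 1])

# Integrate both sides: C(t) = k1 * int(cin) - k2 * int(C)  ->  linear in (k1, k2)
I_cin = cumulative_trapezoid(cin, t, initial=0.0)
I_c = cumulative_trapezoid(c, t, initial=0.0)
A = np.column_stack([I_cin, -I_c])

# Weighted least squares with weights 1/t (the weighting reported as best above)
w = np.sqrt(1.0 / t)
k1_hat, k2_hat = np.linalg.lstsq(A * w[:, None], c * w, rcond=None)[0]
print(f"k1={k1_hat:.3f} (true {k1_true}), k2={k2_hat:.3f} (true {k2_true})")
```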
12. Neck-focused panic attacks among Cambodian refugees; a logistic and linear regression analysis. Science.gov (United States) Hinton, Devon E; Chhean, Dara; Pich, Vuth; Um, Khin; Fama, Jeanne M; Pollack, Mark H 2006-01-01 Consecutive Cambodian refugees attending a psychiatric clinic were assessed for the presence and severity of current (i.e., at least one episode in the last month) neck-focused panic. Among the whole sample (N=130), in a logistic regression analysis, the Anxiety Sensitivity Index (ASI; odds ratio=3.70) and the Clinician-Administered PTSD Scale (CAPS; odds ratio=2.61) significantly predicted the presence of current neck panic (NP). Among the neck panic patients (N=60), in the linear regression analysis, NP severity was significantly predicted by NP-associated flashbacks (beta=.42), NP-associated catastrophic cognitions (beta=.22), and CAPS score (beta=.28). Further analysis revealed the effect of the CAPS score to be significantly mediated (Sobel test [Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173-1182]) by both NP-associated flashbacks and catastrophic cognitions. In the care of traumatized Cambodian refugees, NP severity, as well as NP-associated flashbacks and catastrophic cognitions, should be specifically assessed and treated.

13. QSAR Study of Insecticides of Phthalamide Derivatives Using Multiple Linear Regression and Artificial Neural Network Methods. Directory of Open Access Journals (Sweden) 2014-03-01 A quantitative structure-activity relationship (QSAR) for 21 insecticides of phthalamides containing hydrazone (PCH) was studied using multiple linear regression (MLR), principal component regression (PCR) and artificial neural networks (ANN). Five descriptors were included in the model for MLR and ANN analysis, and five latent variables obtained from principal component analysis (PCA) were used in PCR analysis. Calculation of descriptors was performed using the semi-empirical PM6 method. ANN analysis was found to be a superior statistical technique compared to the other methods and gave a good correlation between descriptors and activity (r2 = 0.84). Based on the obtained model, we have successfully designed some new insecticides with higher predicted activity than those of previously synthesized compounds, e.g. 2-(decalinecarbamoyl)-5-chloro-N′-((5-methylthiophen-2-yl)methylene)benzohydrazide, 2-(decalinecarbamoyl)-5-chloro-N′-((thiophen-2-yl)methylene)benzohydrazide and 2-(decalinecarbamoyl)-N′-(4-fluorobenzylidene)-5-chlorobenzohydrazide, with predicted log LC50 of 1.640, 1.672, and 1.769 respectively.

14. Bayesian linear regression with skew-symmetric error distributions with applications to survival analysis. KAUST Repository Rubio, Francisco J. 2016-02-09 We study Bayesian linear regression models with skew-symmetric scale mixtures of normal error distributions. These kinds of models can be used to capture departures from the usual assumption of normality of the errors in terms of heavy tails and asymmetry. We propose a general noninformative prior structure for these regression models and show that the corresponding posterior distribution is proper under mild conditions. We extend these propriety results to cases where the response variables are censored. The latter scenario is of interest in the context of accelerated failure time models, which are relevant in survival analysis. We present a simulation study that demonstrates good frequentist properties of the posterior credible intervals associated with the proposed priors. This study also sheds some light on the trade-off between increased model flexibility and the risk of over-fitting. We illustrate the performance of the proposed models with real data.
Although we focus on models with univariate response variables, we also present some extensions to the multivariate case in the Supporting Information.

15. A simplified calculation procedure for mass isotopomer distribution analysis (MIDA) based on multiple linear regression. Science.gov (United States) Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio 2016-10-01 We have developed a novel, rapid and easy calculation procedure for mass isotopomer distribution analysis based on multiple linear regression, which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled in two 13C atoms (13C2-glycine) as the labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% of 13C2-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural abundance RGGGLK peptide and 10 or 20% 13C2-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.

16. Describing Growth Pattern of Bali Cows Using Non-linear Regression Models. Directory of Open Access Journals (Sweden) Mohd. Hafiz A.W 2016-12-01 The objective of this study was to evaluate the best-fit non-linear regression model for describing the growth pattern of Bali cows. Estimates of asymptotic mature weight, rate of maturing and constant of integration were derived from Brody, von Bertalanffy, Gompertz and Logistic models fitted to cross-sectional body weight data from 74 Bali cows raised at the MARDI Research Station Muadzam Shah, Pahang. The coefficient of determination (R2) and residual mean squares (MSE) were used to determine the best-fit model for describing the growth pattern of Bali cows. The von Bertalanffy model was the best of the four growth functions evaluated for determining the mature weight of Bali cattle, as shown by the highest R2 and lowest MSE values (0.973 and 601.9, respectively), followed by the Gompertz (0.972 and 621.2), Logistic (0.971 and 648.4) and Brody (0.932 and 660.5) models. The correlation between rate of maturing and mature weight was negative, in the range -0.170 to -0.929 for all models, indicating that animals of heavier mature weight had a lower rate of maturing. The use of non-linear models summarizes the weight-age relationship into several biologically interpretable parameters, in contrast to the entire lifespan of weight-age data points, which is difficult and time-consuming to interpret.
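As a hedged sketch of the model comparison in the Bali-cow entry above (the growth-function parameterizations below are common textbook forms; the data and parameter values are invented for illustration):

```python
# Fit two of the four growth functions to synthetic weight-age data
# and compare R2 and MSE, as in the entry above.
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, A, b, k):
    return A * (1.0 - b * np.exp(-k * t)) ** 3

def gompertz(t, A, b, k):
    return A * np.exp(-b * np.exp(-k * t))

rng = np.random.default_rng(0)
age = np.linspace(1, 120, 60)                      # months
weight = von_bertalanffy(age, 420, 0.6, 0.03) + rng.normal(0, 10, age.size)

for name, f in [("von Bertalanffy", von_bertalanffy), ("Gompertz", gompertz)]:
    p, _ = curve_fit(f, age, weight, p0=[400, 0.5, 0.05], maxfev=10000)
    resid = weight - f(age, *p)
    mse = np.mean(resid ** 2)
    r2 = 1.0 - resid.var() / weight.var()
    print(f"{name}: mature weight A={p[0]:.1f}, R2={r2:.3f}, MSE={mse:.1f}")
```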
17. Association of footprint measurements with plantar kinetics: a linear regression model. Science.gov (United States) Fascione, Jeanna M; Crews, Ryan T; Wrobel, James S 2014-03-01 The use of foot measurements to classify morphology and interpret foot function remains one of the focal concepts of lower-extremity biomechanics. However, only 27% to 55% of midfoot variance in foot pressures has been determined in the most comprehensive models. We investigated whether dynamic walking footprint measurements are associated with inter-individual foot loading variability. Thirty individuals (15 men and 15 women; mean ± SD age, 27.17 ± 2.21 years) walked at a self-selected speed over an electronic pedography platform using the midgait technique. Kinetic variables (contact time, peak pressure, pressure-time integral, and force-time integral) were collected for six masked regions. Footprints were digitized for area and linear boundaries using digital photo planimetry software. Six footprint measurements were determined: contact area, footprint index, arch index, truncated arch index, Chippaux-Smirak index, and Staheli index. Linear regression analysis with a Bonferroni adjustment was performed to determine the association between the footprint measurements and each of the kinetic variables. The findings demonstrate that a relationship exists between increased midfoot contact and increased kinetic values in respective locations. Many of these variables produced large effect sizes while describing 38% to 71% of the common variance of select plantar kinetic variables in the medial midfoot region. In addition, larger footprints were associated with larger kinetic values at the medial heel region and both masked forefoot regions. Dynamic footprint measurements are associated with dynamic plantar loading kinetics, with emphasis on the midfoot region.

18. A simple bias correction in linear regression for quantitative trait association under two-tail extreme selection. Science.gov (United States) Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C 2011-09-01 Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.

19. Using the Coefficient of Determination R² to Test the Significance of Multiple Linear Regression. Science.gov (United States) Quinino, Roberto C.; Reis, Edna A.; Bessegato, Lupercio F. 2013-01-01 This article proposes the use of the coefficient of determination as a statistic for hypothesis testing in multiple linear regression based on distributions acquired by beta sampling. (Contains 3 figures.)

20. Human capital, social capital and scientific research in Europe: an application of linear hierarchical models. OpenAIRE Mathieu Goudard; Michel Lubrano 2011-01-01 The theory of human capital is one way to explain individual decisions to produce scientific research. However, this theory, even though it recognizes the importance of time in science, falls short of explaining the existing diversity of scientific output. The present paper introduces the social capital of Bourdieu (1980), Coleman (1988) and Putnam (1995) as a necessary complement to explain the creation of scientific human capital. The paper connects these two concepts by means of a hierarchical...

1. Monopole and dipole estimation for multi-frequency sky maps by linear regression. Science.gov (United States) Wehus, I. K.; Fuskeland, U.; Eriksen, H. K.; Banday, A. J.; Dickinson, C.; Ghosh, T.; Górski, K. M.; Lawrence, C. R.; Leahy, J. P.; Maino, D.; Reich, P.; Reich, W.
2017-01-01 We describe a simple but efficient method for deriving a consistent set of monopole and dipole corrections for multi-frequency sky map data sets, allowing robust parametric component separation with the same data set. The computational core of this method is linear regression between pairs of frequency maps, often called T-T plots. Individual contributions from monopole and dipole terms are determined by performing the regression locally in patches on the sky, while the degeneracy between different frequencies is lifted whenever the dominant foreground component exhibits a significant spatial spectral index variation. Based on this method, we present two different, but each internally consistent, sets of monopole and dipole coefficients for the nine-year WMAP, Planck 2013, SFD 100 μm, Haslam 408 MHz and Reich & Reich 1420 MHz maps. The two sets have been derived with different analysis assumptions and data selection, and provide an estimate of residual systematic uncertainties. In general, our values are in good agreement with previously published results. Among the most notable results are a relative dipole between the WMAP and Planck experiments of 10-15 μK (depending on frequency), an estimate of the 408 MHz map monopole of 8.9 ± 1.3 K, and a non-zero dipole in the 1420 MHz map of 0.15 ± 0.03 K pointing towards Galactic coordinates (l,b) = (308°,-36°) ± 14°. These values represent the sum of any instrumental and data processing offsets, as well as any Galactic or extra-Galactic component that is spectrally uniform over the full sky.

2. Modeling of Soil Aggregate Stability using Support Vector Machines and Multiple Linear Regression. Directory of Open Access Journals (Sweden) Ali Asghar Besalatpour 2016-02-01 Introduction: Soil aggregate stability is a key factor in soil resistivity to mechanical stresses, including the impacts of rainfall and surface runoff, and thus to water erosion (Canasveras et al., 2010). Various indicators have been proposed to characterize and quantify soil aggregate stability, for example the percentage of water-stable aggregates (WSA), mean weight diameter (MWD), geometric mean diameter (GMD) of aggregates, and water-dispersible clay (WDC) content (Calero et al., 2008). Unfortunately, the experimental methods available to determine these indicators are laborious, time-consuming and difficult to standardize (Canasveras et al., 2010). Therefore, it would be advantageous if aggregate stability could be predicted indirectly from more easily available data (Besalatpour et al., 2014). The main objective of this study is to investigate the potential use of the support vector machines (SVMs) method for estimating soil aggregate stability (as quantified by GMD) as compared to the multiple linear regression approach (a sketch of such a comparison follows below). Materials and Methods: The study area was part of the Bazoft watershed (31° 37′ to 32° 39′ N and 49° 34′ to 50° 32′ E), which is located in the northern part of the Karun river basin in central Iran. A total of 160 soil samples were collected from the top 5 cm of the soil surface. Some easily available characteristics including topographic, vegetation, and soil properties were used as inputs. Soil organic matter (SOM) content was determined by the Walkley-Black method (Nelson & Sommers, 1986). Particle size distribution in the soil samples (clay, silt, sand, fine sand, and very fine sand) was measured using the procedure described by Gee & Bauder (1986), and calcium carbonate equivalent (CCE) content was determined by the back-titration method (Nelson, 1982).
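As a hedged sketch of the SVM-versus-MLR comparison described above (synthetic stand-ins for the soil predictors; the real study used 160 field samples and its own model tuning):

```python
# Compare support vector regression (RBF kernel) with multiple linear regression
# for predicting an aggregate-stability index (GMD) from easily available inputs.
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
X = rng.normal(size=(160, 5))                  # 5 illustrative soil/terrain inputs
gmd = 0.8 + 0.3 * X[:, 0] + 0.2 * np.tanh(X[:, 1]) + rng.normal(0, 0.1, 160)

X_tr, X_te, y_tr, y_te = train_test_split(X, gmd, test_size=0.25, random_state=0)

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
mlr = LinearRegression()
for name, model in [("SVR", svr), ("MLR", mlr)]:
    model.fit(X_tr, y_tr)
    print(name, "test R2:", round(r2_score(y_te, model.predict(X_te)), 3))
```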
The modified Kemper & Rosenau (1986) method was used to determine wet-aggregate stability (GMD). The topographic attributes of elevation, slope, and aspect were characterized using a 20-m

3. [Comparison of application of Cochran-Armitage trend test and linear regression analysis for rate trend analysis in epidemiology study]. Science.gov (United States) Wang, D Z; Wang, C; Shen, C F; Zhang, Y; Zhang, H; Song, G D; Xue, X D; Xu, Z L; Zhang, S; Jiang, G H 2017-05-10 We described the time trend in acute myocardial infarction (AMI) incidence in Tianjin from 1999 to 2013 with the Cochran-Armitage trend (CAT) test and linear regression analysis, and compared the results. Based on the actual population, the CAT test had much stronger statistical power than linear regression analysis for both the overall incidence trend and the age-specific incidence trends (Cochran-Armitage trend P value < linear regression P value). The statistical power of the CAT test decreased, while the result of linear regression analysis remained the same, when the population size was reduced by 100 times and the AMI incidence rate remained unchanged. The two statistical methods have their own advantages and disadvantages. It is necessary to choose the statistical method according to how well it fits the data, or to analyze the results of the two methods together.

4. Development of statistical linear regression model for metals from transportation land uses. Science.gov (United States) Maniquiz, Marla C; Lee, Soyoung; Lee, Eunju; Kim, Lee-Hyung 2009-01-01 Transportation land uses possessing impervious surfaces such as highways, parking lots, roads, and bridges are recognized as highly polluted non-point sources (NPSs) in urban areas. Many pollutants from urban transportation accumulate on paved surfaces during dry periods and are washed off during a storm. In Korea, the identification and monitoring of NPSs still represent a great challenge. Since 2004, the Ministry of Environment (MOE) has been engaged in several research and monitoring efforts to develop stormwater management policies and treatment systems for future implementation. Data from 131 storm events between May 2004 and September 2008 at eleven sites were analyzed to identify correlation relationships between particulates and metals, and to develop a simple linear regression (SLR) model to estimate the event mean concentration (EMC). Results indicate that there was no significant relationship between metal and TSS EMCs. However, the SLR estimation models, although not providing useful results, are valuable indicators of the high uncertainties that NPS pollution possesses. Therefore, long-term monitoring employing proper methods and precise statistical analysis of the data should be undertaken to eliminate these uncertainties.

5. Water quality control in Third River Reservoir (Argentina) using geographical information systems and linear regression models. Directory of Open Access Journals (Sweden) Claudia Ledesma 2013-08-01 Water quality is traditionally monitored and evaluated based upon field data collected at limited locations. The storage capacity of reservoirs is reduced by deposits of suspended matter. The major factors affecting surface water quality are suspended sediments, chlorophyll and nutrients. Modeling and monitoring the biogeochemical status of reservoirs can be done with data from remote sensors. Since the improvement of sensors' spatial and spectral resolutions, satellites have been used to monitor the interior areas of bodies of water.
Water quality parameters, such as chlorophyll-a concentration and secchi disk depth, were found to have a high correlation with transformed spectral variables derived from bands 1, 2, 3 and 4 of the LANDSAT 5TM satellite. We created models of the estimated response in chlorophyll-a values, using single and multiple linear regression models whose parameters are associated with the reflectance data of bands 2 and 4 of the satellite sub-image, as well as the chlorophyll-a data obtained at 25 selected stations. According to the physico-chemical analyses performed, the water in the Rio Tercero reservoir corresponds to somewhat hard freshwater with calcium bicarbonate. The water was classified as usable as a source for plant treatment, excellent for irrigation because of its low salinity and low residual sodium carbonate content, but unsuitable for animal consumption because of its low salt content.

6. An Application of Robust Method in Multiple Linear Regression Model toward Credit Card Debt. Science.gov (United States) Amira Azmi, Nur; Saifullah Rusiman, Mohd; Khalid, Kamil; Roslan, Rozaini; Sufahani, Suliadi; Mohamad, Mahathir; Salleh, Rohayu Mohd; Hamzah, Nur Shamsidah Amir 2018-04-01 The credit card is a convenient alternative to cash or cheques and an essential component of electronic and internet commerce. In this study, the researchers attempt to determine the relationship and the significant variables between credit card debt and demographic variables such as age, household income, education level, years with current employer, years at current address, debt-to-income ratio and other debt. The data cover information on 850 customers. Three methods were applied to the credit card debt data: multiple linear regression (MLR) models, MLR models with the least quartile difference (LQD) method and MLR models with the mean absolute deviation method. After comparing the three methods, it was found that the MLR model with the LQD method was the best model, with the lowest mean square error (MSE). According to the final model, years with current employer, years at current address, household income in thousands and debt-to-income ratio are positively associated with the amount of credit debt, while age, level of education and other debt are negatively associated with it. This study may serve as a reference for banks using robust methods, so that they can better understand their options and choose the one best aligned with their goals for inference regarding credit card debt.

7. Performance Prediction Modelling for Flexible Pavement on Low Volume Roads Using Multiple Linear Regression Analysis. Directory of Open Access Journals (Sweden) C. Makendran 2015-01-01

8. A consensus successive projections algorithm--multiple linear regression method for analyzing near infrared spectra. Science.gov (United States) Liu, Ke; Chen, Xiaojing; Li, Limin; Chen, Huiling; Ruan, Xiukai; Liu, Wenbin 2015-02-09 The successive projections algorithm (SPA) is widely used to select variables for multiple linear regression (MLR) modeling. However, SPA used only once may not capture all the useful information in the full spectra, because the number of selected variables cannot exceed the number of calibration samples in the SPA algorithm. Therefore, the SPA-MLR method risks the loss of useful information.
To make full use of the information in the spectra, a new method named "consensus SPA-MLR" (C-SPA-MLR) is proposed herein. This method combines a consensus strategy with the SPA-MLR method. In the C-SPA-MLR method, SPA-MLR is used to construct member models with different subsets of variables, which are selected from the remaining variables iteratively. A consensus prediction is obtained by combining the predictions of the member models. The proposed method is evaluated by analyzing the near infrared (NIR) spectra of corn and diesel. The C-SPA-MLR method showed better prediction performance compared with the SPA-MLR and full-spectra PLS methods. Moreover, these results could serve as a reference for combining the consensus strategy with other variable selection methods when analyzing NIR spectra and data from other spectroscopic techniques. Copyright © 2014 Elsevier B.V. All rights reserved.

9. [Multiple linear regression and ROC curve analysis of the factors of lumbar spine bone mineral density]. Science.gov (United States) Zhang, Xiaodong; Zhao, Yinxia; Hu, Shaoyong; Hao, Shuai; Yan, Jiewen; Zhang, Lingyan; Zhao, Jing; Li, Shaolin 2015-09-01 To investigate the correlation between lumbar vertebra bone mineral density (BMD) and age, gender, height, weight, body mass index, waistline, hipline, bone marrow and abdominal fat, and to explore the key factors affecting BMD. A total of 72 cases were randomly recruited. All subjects underwent a spectroscopic examination of the third lumbar vertebra with the single-voxel method in a 1.5T MR scanner. Lipid fractions (FF%) were measured. Quantitative CT was also performed to obtain the BMD of L3 and the corresponding abdominal subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT). The statistical analyses were performed with SPSS 19.0. In the multiple linear regression, only age and FF% showed a significant association (all other factors, P>0.05). The correlation of age and FF% with BMD was significantly negative (r=-0.830, -0.521, P<0.05). The ROC curve analysis showed that the sensitivity and specificity of predicting osteoporosis were 81.8% and 86.9% with a threshold of 58.5 years of age, and 90.9% and 55.7% with a threshold of 52.8% for FF%. Lumbar vertebra BMD was significantly and negatively correlated with age and bone marrow FF%, but not with gender, height, weight, BMI, waistline, hipline, SAT or VAT. Age was the critical factor.

10. Non-linear auto-regressive models for cross-frequency coupling in neural time series. Science.gov (United States) Tallot, Lucille; Grabot, Laetitia; Doyère, Valérie; Grenier, Yves; Gramfort, Alexandre 2017-01-01 We address the issue of reliably detecting and quantifying cross-frequency coupling (CFC) in neural time series. Based on non-linear auto-regressive models, the proposed method provides a generative and parametric model of the time-varying spectral content of the signals. As this method models the entire spectrum simultaneously, it avoids the pitfalls related to incorrect filtering or the use of the Hilbert transform on wide-band signals. As the model is probabilistic, it also provides a score of the model "goodness of fit" via the likelihood, enabling easy and legitimate model selection and parameter comparison; this data-driven feature is unique to our model-based approach.
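The entry above describes driven, non-linear autoregressive (AR) models. As a minimal hedged sketch of one such model (not the authors' implementation), the following lets the AR coefficients depend linearly on a slow driving signal and estimates them by ordinary least squares:

```python
# Driven-AR sketch: y_t = sum_k (a_k + b_k * x_t) * y_{t-k} + noise,
# where x is a slow driver modulating the fast dynamics; this is one simple way
# a parametric AR model can capture cross-frequency coupling. Values illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, p = 5000, 3                                     # samples, AR order
x = np.sin(2 * np.pi * 4 * np.arange(n) / 1000)    # slow driver (4 Hz at 1 kHz)

# Simulate a signal whose AR dynamics are modulated by x
a_true = np.array([0.6, -0.2, 0.1])
b_true = np.array([0.2, 0.0, -0.05])
y = np.zeros(n)
for t in range(p, n):
    coeffs = a_true + b_true * x[t]
    y[t] = coeffs @ y[t - p:t][::-1] + 0.1 * rng.normal()

# Least-squares design matrix: lagged y columns and lag-times-driver columns
lags = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
X = np.hstack([lags, lags * x[p:n, None]])
theta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
print("a_hat:", theta[:p].round(3), "b_hat:", theta[p:].round(3))
```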
Using three datasets obtained with invasive neurophysiological recordings in humans and rodents, we demonstrate that these models are able to replicate previous results obtained with other metrics, but also reveal new insights such as the influence of the amplitude of the slow oscillation. Using simulations, we demonstrate that our parametric method can reveal neural couplings with shorter signals than non-parametric methods. We also show how the likelihood can be used to find optimal filtering parameters, suggesting new properties of the spectrum of the driving signal, and to estimate the optimal delay between the coupled signals, enabling a directionality estimate of the coupling. PMID:29227989

11. Prediction of Depression in Cancer Patients With Different Classification Criteria, Linear Discriminant Analysis versus Logistic Regression. Science.gov (United States) Shayan, Zahra; Mohammad Gholi Mezerji, Naser; Shayan, Leila; Naseri, Parisa 2015-11-03 Logistic regression (LR) and linear discriminant analysis (LDA) are two popular statistical models for prediction of group membership. Although they are very similar, LDA makes more assumptions about the data. When categorical and continuous variables are used simultaneously, the optimal choice between the two models is questionable. In most studies, classification error (CE) is used to discriminate between subjects in several groups, but this index is not suitable for predicting the accuracy of the outcome. The present study compared the LR and LDA models using classification indices. This cross-sectional study selected 243 cancer patients. Sample sets of different sizes (n = 50, 100, 150, 200, 220) were randomly selected and the CE, B, and Q classification indices were calculated for the LR and LDA models. CE revealed a lack of superiority of one model over the other, but the results showed that LR performed better than LDA for the B and Q indices in all situations. No significant effect of sample size on CE was noted in the selection of an optimal model. Assessment of the accuracy of prediction on real data indicated that the B and Q indices are appropriate for selecting an optimal model. The results of this study showed that LR performs better in some cases and LDA in others when based on CE. The CE index is not appropriate for classification, whereas the B and Q indices performed better and offer more efficient criteria for comparison and discrimination between groups.

12. An Ionospheric Index Model based on Linear Regression and Neural Network Approaches. Science.gov (United States) Tshisaphungo, Mpho; McKinnell, Lee-Anne; Bosco Habarulema, John 2017-04-01 The ionosphere is well known to reflect radio wave signals in the high frequency (HF) band due to the presence of electrons and ions within the region. To optimise the use of long-distance HF communications, it is important to understand the drivers of ionospheric storms and accurately predict propagation conditions, especially during disturbed days. This paper presents the development of an ionospheric storm-time index over the South African region for the application of HF communication users. The model will result in a valuable tool to measure the complex ionospheric behaviour in an operational space weather monitoring and forecasting environment. The development of the ionospheric storm-time index is based on data from a single ionosonde station over Grahamstown (33.3°S, 26.5°E), South Africa.
Critical frequency of the F2 layer (foF2) measurements for the period 1996-2014 were considered for this study. The model was developed based on linear regression and neural network approaches. In this talk, validation results for low, medium and high solar activity periods will be discussed to demonstrate the model's performance.

13. Time series linear regression of half-hourly radon levels in a residence. International Nuclear Information System (INIS) Hull, D.A. 1990-01-01 This paper uses time series linear regression modelling to assess the impact of temperature and pressure differences on the radon measured in the basement and in the basement drain of a research house in the Princeton area of New Jersey. The models examine half-hour averages of several climate and house parameters for several periods of up to 11 days. The drain radon concentrations follow a strong diurnal pattern that shifts 12 hours in phase between the summer and the fall seasons. This shift can be linked both to the change in temperature differences between the seasons and to an experiment in which the connection between the drain and the basement was sealed. We have found that both the basement and the drain radon concentrations are correlated with basement-outdoor and soil-outdoor temperature differences (the coefficient of determination varies between 0.6 and 0.8). The statistical models for the summer periods clearly describe a physical system in which the basement drain pumps radon in during the night and sucks radon out during the day.

14. A linear regression approach to evaluate the green supply chain management impact on industrial organizational performance. Science.gov (United States) Mumtaz, Ubaidullah; Ali, Yousaf; Petrillo, Antonella 2018-05-15 The increase in environmental pollution is one of the most important topics in today's world. In this context, industrial activities can pose a significant threat to the environment. To manage the problems associated with industrial activities, several methods, techniques and approaches have been developed. Green supply chain management (GSCM) is considered one of the most important environmental management approaches. In developing countries such as Pakistan, the implementation of GSCM practices is still in its initial stages. Lack of knowledge about its effects on economic performance is the reason why industries fear implementing these practices. The aim of this research is to assess the effects of GSCM practices on organizational performance in Pakistan. The GSCM practices considered in this research are: internal practices, external practices, investment recovery and eco-design. The performance parameters considered are: environmental pollution, operational cost and organizational flexibility. A set of hypotheses propose an effect of each GSCM practice on the performance parameters. Factor analysis and linear regression are used to analyze the survey data of Pakistani industries, in order to test these hypotheses. The findings of this research indicate a decrease in environmental pollution and operational cost with the implementation of GSCM practices, whereas organizational flexibility has not improved for Pakistani industries. These results aim to help managers with their decisions on implementing GSCM practices in the industrial sector of Pakistan. Copyright © 2017 Elsevier B.V. All rights reserved.
15. Influence of plant root morphology and tissue composition on phenanthrene uptake: Stepwise multiple linear regression analysis. International Nuclear Information System (INIS) Zhan, Xinhua; Liang, Xiao; Xu, Guohua; Zhou, Lixiang 2013-01-01 Polycyclic aromatic hydrocarbons (PAHs) are contaminants that reside mainly in surface soils. Dietary intake of plant-based foods can make a major contribution to total PAH exposure. Little information is available on the relationship between root morphology and plant uptake of PAHs. An understanding of the plant root morphologic and compositional factors that affect root uptake of contaminants is important and can inform both agricultural (chemical contamination of crops) and engineering (phytoremediation) applications. Five crop plant species were grown hydroponically in solutions containing the PAH phenanthrene. Measurements were taken for 1) phenanthrene uptake, 2) root morphology (specific surface area, volume, surface area, tip number and total root length) and 3) root tissue composition (water, lipid, protein and carbohydrate content). These factors were compared through Pearson's correlation and stepwise multiple linear regression analysis (a sketch of the stepwise selection idea follows below). The major factors that promote phenanthrene uptake are specific surface area and lipid content. Highlights: There is no correlation between phenanthrene uptake and total root length or water content. Specific surface area and lipid are the most crucial factors for phenanthrene uptake, and the contribution of specific surface area is greater than that of lipid.
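As a hedged sketch of forward stepwise selection for multiple linear regression, the general technique named in the entry above (not the authors' exact procedure; the predictors are synthetic stand-ins for the root traits):

```python
# Forward stepwise selection: greedily add the predictor that most improves
# cross-validated R2, and stop when no remaining predictor helps.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n, names = 40, ["spec_surface_area", "lipid", "protein", "root_length", "tips"]
X = rng.normal(size=(n, len(names)))
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.5, n)   # uptake depends on 2 traits

selected, remaining = [], list(range(len(names)))
best_score = -np.inf
while remaining:
    # Try adding each remaining predictor; keep the one with the best CV score
    scores = [(cross_val_score(LinearRegression(), X[:, selected + [j]], y,
                               cv=5, scoring="r2").mean(), j) for j in remaining]
    score, j = max(scores)
    if score <= best_score:          # stop when no predictor improves the fit
        break
    best_score = score
    selected.append(j)
    remaining.remove(j)

print("selected:", [names[j] for j in selected], "CV R2:", round(best_score, 3))
```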
16. Forecasting on the total volumes of Malaysia's imports and exports by multiple linear regression. Science.gov (United States) Beh, W. L.; Yong, M. K. Au 2017-04-01 This study gives insight into the importance of the macroeconomic variables affecting the total volumes of Malaysia's imports and exports, using multiple linear regression (MLR) analysis. The time frame of the study is determined by quarterly data on the total volumes of Malaysia's imports and exports covering the period 2000-2015. The macroeconomic variables are limited to eleven: the exchange rates of the US Dollar (USD-MYR), China Yuan (RMB-MYR), European Euro (EUR-MYR) and Singapore Dollar (SGD-MYR) against the Malaysian Ringgit, crude oil prices, gold prices, the producer price index (PPI), interest rate, consumer price index (CPI), industrial production index (IPI) and gross domestic product (GDP). This study applied the Johansen co-integration test to investigate the relationships among the total volumes of Malaysia's imports and exports. The results show that crude oil prices, RMB-MYR, EUR-MYR and IPI play important roles in the total volume of Malaysia's imports, while crude oil prices, USD-MYR and GDP play important roles in the total volume of Malaysia's exports.

17. A hybrid genetic algorithm and linear regression for prediction of NOx emission in power generation plant. International Nuclear Information System (INIS) Bunyamin, Muhammad Afif; Yap, Keem Siah; Aziz, Nur Liyana Afiqah Abdul; Tiong, Sheih Kiong; Wong, Shen Yuong; Kamal, Md Fauzan 2013-01-01 This paper presents a new approach to gas emission estimation in a power generation plant using a hybrid of a genetic algorithm (GA) and linear regression (LR), denoted GA-LR. LR is an approach that models the relationship between an output dependent variable, y, and one or more explanatory variables or inputs, denoted x; it is able to estimate unknown model parameters from input data. GA, on the other hand, searches for good solutions until a specific termination criterion is met, and for complex problems it provides a set of good solutions rather than a single optimum; it is therefore widely used for feature selection. By combining LR and GA (GA-LR), the new technique is able to select the most important input features as well as to give more accurate predictions by minimizing the prediction errors. This new technique produces more consistent gas emission estimates, which may help in reducing pollution of the environment. In this paper, the study's interest is focused on prediction of nitrogen oxides (NOx). The results of the experiment are encouraging.

18. Correction of TRMM 3B42V7 Based on Linear Regression Models over China. Directory of Open Access Journals (Sweden) Shaohua Liu 2016-01-01 High temporal-spatial resolution precipitation data are necessary for hydrological simulation and water resource management, and remotely sensed precipitation products (RSPPs) play a key role in supplying high temporal-spatial resolution precipitation, especially in sparsely gauged regions. TRMM 3B42V7 data (TRMM precipitation) is an essential RSPP, outperforming other RSPPs. Yet the utilization of TRMM precipitation is still limited by its inaccuracy and low spatial resolution at regional scale. In this paper, linear regression models (LRMs) were constructed to correct and downscale the TRMM precipitation based on the gauge precipitation at 2257 stations over China from 1998 to 2013. The corrected TRMM precipitation was then validated against gauge precipitation at 839 of the 2257 stations in 2014, at station and grid scales. The results show that both the monthly and annual LRMs obviously improved the accuracy of the corrected TRMM precipitation, with acceptable error, and the monthly LRM performs slightly better than the annual LRM in mideastern China. Although the performance of the corrected TRMM precipitation from the LRMs has improved in northwest China and the Tibetan plateau, the error of the corrected TRMM precipitation is still significant, due to the large deviation between TRMM precipitation and the low-density gauge precipitation.

19. Greater expectations: using hierarchical linear modeling to examine expectancy for treatment outcome as a predictor of treatment response. Science.gov (United States) Price, Matthew; Anderson, Page; Henrich, Christopher C; Rothbaum, Barbara Olasov 2008-12-01 A client's expectation that therapy will be beneficial has long been considered an important factor contributing to therapeutic outcomes, but recent empirical work examining this hypothesis has primarily yielded null findings.
The present study examined the contribution of expectancies for treatment outcome to actual treatment outcome from the start of therapy through 12-month follow-up in a clinical sample of individuals (n=72) treated for fear of flying with either in vivo exposure or virtual reality exposure therapy. Using a piecewise hierarchical linear model, outcome expectancy predicted treatment gains made during therapy but not during follow-up. Compared to lower levels, higher expectations for treatment outcome yielded stronger rates of symptom reduction from the beginning to the end of treatment on 2 standardized self-report questionnaires on fear of flying. The analytic approach of the current study is one potential reason that its findings contrast with the prior literature. The advantages of using hierarchical linear modeling to assess interindividual differences in longitudinal data are discussed.

20. Multiple Linear Regression for Reconstruction of Gene Regulatory Networks in Solving Cascade Error Problems. Directory of Open Access Journals (Sweden) Faridah Hani Mohamed Salleh 2017-01-01

1. Correlation of concentration of modified cassava flour for banana fritter flour using simple linear regression. Science.gov (United States) Herminiati, A.; Rahman, T.; Turmala, E.; Fitriany, C. G. 2017-12-01 The purpose of this study was to determine the correlation between different concentrations of modified cassava flour processed for banana fritter flour. The research method consists of two stages: (1) determining the different types of flour: cassava flour, modified cassava flour-A (produced using a lactic acid bacteria method), and modified cassava flour-B (produced using an autoclaving-cooling cycle method), followed by an organoleptic test and physicochemical analysis; (2) determining the correlation of the concentration of modified cassava flour for banana fritter flour, for which a simple linear regression design was used. The factors were different concentrations of modified cassava flour-B: (y1) 40%, (y2) 50%, and (y3) 60%. The responses in the study include physical analysis (whiteness of flour, water holding capacity (WHC), oil holding capacity (OHC)), chemical analysis (moisture content, ash content, crude fiber content, starch content), and organoleptic properties (color, aroma, taste, texture). The results showed that the type of flour selected from the organoleptic test was modified cassava flour-B, whose components were: whiteness of flour 60.42%; WHC 41.17%; OHC 21.15%; moisture content 4.4%; ash content 1.75%; crude fiber content 1.86%; starch content 67.31%. The different concentrations of modified cassava flour-B correlate with the whiteness of flour, WHC, OHC, moisture content, ash content, crude fiber content, and starch content, but do not affect the color, aroma, taste, and texture.

2. Multiple Linear Regression and Artificial Neural Network to Predict Blood Glucose in Overweight Patients. Science.gov (United States) Wang, J; Wang, F; Liu, Y; Xu, J; Lin, H; Jia, B; Zuo, W; Jiang, Y; Hu, L; Lin, F 2016-01-01 Overweight individuals are at higher risk for developing type II diabetes than the general population. We conducted this study to analyze the correlation between blood glucose and biochemical parameters, and developed a blood glucose prediction model tailored to overweight patients.
A total of 346 overweight Chinese patients aged 18-81 years were involved in this study. Their levels of fasting glucose (fs-GLU), blood lipids, and hepatic and renal function were measured and analyzed by multiple linear regression (MLR). Based on the MLR results, we developed a back-propagation artificial neural network (BP-ANN) model, selecting tansig as the transfer function of the hidden layer nodes and purelin for the output layer nodes, with a training goal of 0.5×10(-5). There was a significant correlation between fs-GLU and age, BMI, and the blood biochemical indexes (P<0.05). The results of the MLR analysis indicated that age, fasting alanine transaminase (fs-ALT), blood urea nitrogen (fs-BUN), total protein (fs-TP), uric acid (fs-UA), and BMI are 6 independent variables related to fs-GLU. Based on these parameters, the BP-ANN model performed well and reached high prediction accuracy when trained for 1000 epochs (R=0.9987). The level of fs-GLU was predictable using the proposed BP-ANN model based on 6 related parameters (age, fs-ALT, fs-BUN, fs-TP, fs-UA and BMI) in overweight patients. © Georg Thieme Verlag KG Stuttgart · New York.

3. Multiple Linear Regression for Reconstruction of Gene Regulatory Networks in Solving Cascade Error Problems. Science.gov (United States) Salleh, Faridah Hani Mohamed; Zainudin, Suhaila; Arif, Shereena M 2017-01-01

4. Reflexion on linear regression trip production modelling method for ensuring good model quality. Science.gov (United States) Suprayitno, Hitapriya; Ratnasari, Vita 2017-11-01 Transport modelling is important. For certain cases, the conventional model still has to be used, for which having a good trip production model is essential. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are that the sample must be capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. It seems that these principles are not yet quite understood and applied in trip production modeling practice. Therefore, investigating trip production modelling practice in Indonesia and trying to formulate a better modeling method for ensuring model quality is necessary. The results of this research are presented as follows. Statistics provides a method to calculate the span of prediction values at a certain confidence level for linear regression, called the confidence interval of the predicted value (a sketch of this calculation follows below). Common modeling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample is already capable of giving an excellent R2 value, and that the sample composition can significantly change the model. Hence, a good R2 value does not, in fact, always mean good model quality. These observations lead to three basic ideas for ensuring good model quality, i.e. reformulating the quality measure, the calculation procedure, and the sampling method. A quality measure is defined as having a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests needed. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
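As a hedged sketch of the quantity the entry above calls the "confidence interval of the predicted value" for simple linear regression, computed from the standard OLS formulas (the trip-production data below are invented for illustration):

```python
# 95% confidence interval for the mean predicted value at a new point x0.
import numpy as np
from scipy import stats

x = np.array([20, 35, 50, 65, 80, 95, 110, 125], dtype=float)      # zone households
y = np.array([55, 90, 130, 170, 200, 245, 280, 320], dtype=float)  # zone trips

n = x.size
b1, b0 = np.polyfit(x, y, 1)                    # OLS slope and intercept
resid = y - (b0 + b1 * x)
s = np.sqrt(resid @ resid / (n - 2))            # residual standard error
t = stats.t.ppf(0.975, df=n - 2)                # 95% two-sided t quantile

x0 = 100.0                                      # new zone size
y0 = b0 + b1 * x0
# Standard error of the *mean* prediction at x0
se_mean = s * np.sqrt(1.0 / n + (x0 - x.mean()) ** 2 / ((x - x.mean()) ** 2).sum())
print(f"predicted trips at x0={x0:.0f}: {y0:.1f} +/- {t * se_mean:.1f} (95% CI)")
```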
5. Identifying keystone species in the human gut microbiome from metagenomic timeseries using sparse linear regression.
Directory of Open Access Journals (Sweden)
Charles K Fisher
Full Text Available
Human associated microbial communities exert tremendous influence over human health and disease. With modern metagenomic sequencing methods it is now possible to follow the relative abundance of microbes in a community over time. These microbial communities exhibit rich ecological dynamics, and an important goal of microbial ecology is to infer the ecological interactions between species directly from sequence data. Any algorithm for inferring ecological interactions must overcome three major obstacles: (1) a correlation between the abundances of two species does not imply that those species are interacting, (2) the sum constraint on the relative abundances obtained from metagenomic studies makes it difficult to infer the parameters in timeseries models, and (3) errors due to experimental uncertainty, or mis-assignment of sequencing reads into operational taxonomic units, bias inferences of species interactions due to a statistical problem called "errors-in-variables". Here we introduce an approach, Learning Interactions from MIcrobial Time Series (LIMITS), that overcomes these obstacles. LIMITS uses sparse linear regression with bootstrap aggregation to infer a discrete-time Lotka-Volterra model for microbial dynamics. We tested LIMITS on synthetic data and showed that it could reliably infer the topology of the inter-species ecological interactions. We then used LIMITS to characterize the species interactions in the gut microbiomes of two individuals and found that the interaction networks varied significantly between individuals. Furthermore, we found that the interaction networks of the two individuals are dominated by distinct "keystone species", Bacteroides fragilis and Bacteroides stercoris, that have a disproportionate influence on the structure of the gut microbiome even though they are only found in moderate abundance. Based on our results, we hypothesize that the abundances of certain keystone species may be responsible for individuality in…
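A toy sketch of the core LIMITS idea from entry 5 above: a discrete-time Lotka-Volterra model implies log(x_{t+1}/x_t) = r_i + Σ_j A_ij x_j(t), so each row of the interaction matrix can be estimated by sparse (Lasso) regression, with bootstrap aggregation stabilizing the selected interactions. This is a schematic reconstruction, not the authors' published code:

```python
import numpy as np
from sklearn.linear_model import Lasso

def infer_interactions(x, alpha=0.01, n_boot=50, seed=0):
    """x: (T, S) array of species abundances over T time steps.
    Returns a bagged (S, S) interaction-matrix estimate."""
    rng = np.random.default_rng(seed)
    T, S = x.shape
    # Regression targets: per-species log growth ratios between steps
    y = np.log(x[1:] / x[:-1])          # shape (T-1, S)
    X = x[:-1]                          # predictors: abundances at time t
    A = np.zeros((S, S))
    for i in range(S):                  # one sparse regression per species
        boots = []
        for _ in range(n_boot):
            idx = rng.integers(0, T - 1, size=T - 1)   # bootstrap rows
            m = Lasso(alpha=alpha, fit_intercept=True, max_iter=10000)
            m.fit(X[idx], y[idx, i])
            boots.append(m.coef_)
        A[i] = np.median(boots, axis=0) # aggregate over bootstrap fits
    return A
```

The intercepts play the role of the intrinsic growth rates r_i; the median over bootstrap replicates is one simple aggregation choice (the paper's exact scheme uses stepwise variable selection with out-of-sample error, which this sketch replaces with an off-the-shelf Lasso).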
6. Sparse Estimation Using Bayesian Hierarchical Prior Modeling for Real and Complex Linear Models
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Badiu, Mihai Alin
2015-01-01
In sparse Bayesian learning (SBL), Gaussian scale mixtures (GSMs) have been used to model sparsity-inducing priors that realize a class of concave penalty functions for the regression task in real-valued signal models. Motivated by the relative scarcity of formal tools for SBL in complex-valued models… error, and robustness in low and medium signal-to-noise ratio regimes…

7. A novel simple QSAR model for the prediction of anti-HIV activity using multiple linear regression analysis.
Science.gov (United States)
Afantitis, Antreas; Melagraki, Georgia; Sarimveis, Haralambos; Koutentis, Panayiotis A; Markopoulos, John; Igglessi-Markopoulou, Olga
2006-08-01
A quantitative structure-activity relationship was obtained by applying multiple linear regression analysis to a series of 80 1-[2-hydroxyethoxy-methyl]-6-(phenylthio) thymine (HEPT) derivatives with significant anti-HIV activity. For the selection of the best among 37 different descriptors, the Elimination Selection Stepwise Regression Method (ES-SWR) was utilized. The resulting QSAR model (R²CV = 0.8160; SPRESS = 0.5680) proved to be very accurate in both the training and predictive stages.

8. Evaluating Non-Linear Regression Models in Analysis of Persian Walnut Fruit Growth
Directory of Open Access Journals (Sweden)
I. Karamatlou
2016-02-01
Full Text Available
Introduction: Persian walnut (Juglans regia L.) is a large, wind-pollinated, monoecious, dichogamous, long-lived, perennial tree cultivated for its high-quality wood and nuts throughout the temperate regions of the world. Growth model methodology has been widely used in modeling plant growth. Mathematical models are important tools to study plant growth and agricultural systems. These models can be applied for decision-making and designing management procedures in horticulture. Through growth analysis, planning for planting systems, fertilization, pruning operations, and harvest time, as well as obtaining economical yield, can be made more accessible. Non-linear models are more difficult to specify and estimate than linear models. This research aimed to study non-linear regression models based on data obtained from fruit weight, length and width. Selecting the best models to explain the inherent fruit growth pattern of Persian walnut was a further goal of this study.
Materials and Methods: The experimental material comprised 14 Persian walnut genotypes propagated by seed, collected from a walnut orchard in Golestan province, Minoudasht region, Iran, at latitude 37°04'N, longitude 55°32'E, altitude 1060 m, in a silt loam soil type. These genotypes were selected as a representative sampling of the many walnut genotypes available throughout northeastern Iran. The age range of the walnut trees was 30 to 50 years. The annual mean temperature at the location is 16.3°C, with annual mean rainfall of 690 mm. The data used here are the averages for fresh walnut fruit, measured in grams/millimeters/day in 2011. According to the data distribution pattern, several equations have been proposed to describe sigmoidal growth patterns. Here, we used double-sigmoid and logistic-monomolecular models to evaluate fruit growth based on fruit weight, and four different regression models including Richards, Gompertz, Logistic and Exponential growth for evaluation…
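As a concrete illustration of fitting one of the sigmoidal growth models named in entry 8 above, the sketch below fits a three-parameter logistic curve to hypothetical fruit-weight measurements with scipy; the data and starting values are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: asymptote K, rate r, inflection time t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Hypothetical days-after-bloom vs. fruit fresh weight (g)
t = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90], dtype=float)
w = np.array([1.2, 2.5, 5.0, 9.0, 13.5, 16.8, 18.5, 19.2, 19.6])

params, cov = curve_fit(logistic, t, w, p0=[20.0, 0.1, 45.0])
K, r, t0 = params
resid = w - logistic(t, *params)
print(f"K={K:.2f} g, r={r:.3f}/day, t0={t0:.1f} days, RSS={np.sum(resid**2):.3f}")
```

Gompertz or Richards curves can be compared on the same data simply by swapping the model function and re-fitting; model choice then rests on residual sums of squares or information criteria, as the entry describes.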
9. Estimation of breeding values for mean and dispersion, their variance and correlation using double hierarchical generalized linear models.
Science.gov (United States)
Felleki, M; Lee, D; Lee, Y; Gilmour, A R; Rönnegård, L
2012-12-01
The possibility of breeding for uniform individuals by selecting animals expressing a small response to environment has been studied extensively in animal breeding. Bayesian methods for fitting models with genetic components in the residual variance have been developed for this purpose, but have limitations due to the computational demands. We use the hierarchical (h-)likelihood from the theory of double hierarchical generalized linear models (DHGLM) to derive an estimation algorithm that is computationally feasible for large datasets. Random effects for both the mean and residual variance parts of the model are estimated together with their variance/covariance components. An important feature of the algorithm is that it can fit a correlation between the random effects for mean and variance. An h-likelihood estimator is implemented in the R software, and an iterative reweighted least squares (IRWLS) approximation of the h-likelihood is implemented using ASReml. The difference in variance component estimates between the two implementations is investigated, as well as the potential bias of the methods, using simulations. IRWLS gives the same results as h-likelihood in simple cases with no severe indication of bias. For more complex cases, only IRWLS could be used, and bias did appear. IRWLS was applied to the pig litter size data previously analysed by Sorensen & Waagepetersen (2003) using Bayesian methodology. The estimates we obtained by using IRWLS are similar to theirs, with the estimated correlation between the random genetic effects being -0.52 for IRWLS and -0.62 in Sorensen & Waagepetersen (2003).

10. Trend analysis by a piecewise linear regression model applied to surface air temperatures in Southeastern Spain (1973-2014)
OpenAIRE
Campra, Pablo; Morales, Maria
2016-01-01
The magnitude of the trends of environmental and climatic changes is mostly derived from the slopes of linear trends using ordinary least-squares fitting. An alternative flexible fitting model, piecewise regression, has been applied here to surface air temperature records in southeastern Spain for the recent warming period (1973-2014) to gain accuracy in the description of the inner structure of change, dividing the time series into linear segments with different slopes. Breakpoint y…
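Entry 10 above describes piecewise (segmented) regression with an estimated breakpoint. A minimal brute-force sketch in Python, on invented temperature-anomaly data, scans candidate breakpoints and keeps the one minimizing the residual sum of squares of a continuous two-segment fit:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1973, 2015, dtype=float)
# Hypothetical series: flat until ~1990, warming afterwards, plus noise
temp = np.where(years < 1990, 15.0, 15.0 + 0.04 * (years - 1990))
temp = temp + rng.normal(0, 0.15, size=years.size)

def two_segment_rss(t, y, bp):
    """Continuous piecewise-linear fit with a hinge at bp; returns (rss, coefs)."""
    X = np.column_stack([np.ones_like(t), t, np.maximum(t - bp, 0.0)])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ coefs) ** 2)
    return rss, coefs

candidates = years[5:-5]                       # keep a few points per segment
fits = [two_segment_rss(years, temp, bp) for bp in candidates]
best = int(np.argmin([f[0] for f in fits]))
bp, (b0, b1, b2) = candidates[best], fits[best][1]
print(f"breakpoint ~ {bp:.0f}; slope before = {b1:.3f}, after = {b1 + b2:.3f} deg/yr")
```

The hinge term max(t - bp, 0) forces the two segments to join at the breakpoint; dedicated packages estimate the breakpoint and its uncertainty jointly, but the grid search conveys the idea.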
11. Seasonal Variability of Aragonite Saturation State in the North Pacific Ocean Predicted by Multiple Linear Regression
Science.gov (United States)
Kim, T. W.; Park, G. H.
2014-12-01
Seasonal variation of aragonite saturation state (Ωarag) in the North Pacific Ocean (NPO) was investigated, using multiple linear regression (MLR) models produced from the PACIFICA (Pacific Ocean interior carbon) dataset. Data within depth ranges of 50-1200 m were used to derive MLR models, and three parameters (potential temperature, nitrate, and apparent oxygen utilization (AOU)) were chosen as predictor variables because these parameters are associated with vertical mixing and with DIC (dissolved inorganic carbon) removal and release, which all affect Ωarag in the water column directly or indirectly. The PACIFICA dataset was divided into 5° × 5° grids, and an MLR model was produced in each grid, giving a total of 145 independent MLR models over the NPO. Mean RMSE (root mean square error) and r² (coefficient of determination) of all derived MLR models were approximately 0.09 and 0.96, respectively. Then the obtained MLR coefficients for each of the predictor variables and an intercept were interpolated over the study area, thereby making it possible to allocate MLR coefficients to data-sparse ocean regions. Predictability from the interpolated coefficients was evaluated using Hawaiian time-series data, and as a result the mean residual between measured and predicted Ωarag values was approximately 0.08, which is less than the mean RMSE of our MLR models. The interpolated MLR coefficients were combined with the seasonal climatology of World Ocean Atlas 2013 (1° × 1°) to produce seasonal Ωarag distributions over various depths. Large seasonal variability in Ωarag was manifested in the mid-latitude Western NPO (24-40°N, 130-180°E) and low-latitude Eastern NPO (0-12°N, 115-150°W). In the Western NPO, seasonal fluctuations of water-column stratification appeared to be responsible for the seasonal variation in Ωarag (~0.5 at 50 m) because it closely followed temperature variations in a layer of 0-75 m. In contrast, remineralization of organic matter was the main cause for the seasonal…

12. Multiple linear regression to estimate time-frequency electrophysiological responses in single trials.
Science.gov (United States)
Hu, L; Zhang, Z G; Mouraux, A; Iannetti, G D
2015-05-01
Transient sensory, motor or cognitive events elicit not only phase-locked event-related potentials (ERPs) in the ongoing electroencephalogram (EEG), but also induce non-phase-locked modulations of ongoing EEG oscillations. These modulations can be detected when single-trial waveforms are analysed in the time-frequency domain, and consist of stimulus-induced decreases (event-related desynchronization, ERD) or increases (event-related synchronization, ERS) of synchrony in the activity of the underlying neuronal populations. ERD and ERS reflect changes in the parameters that control oscillations in neuronal networks and, depending on the frequency at which they occur, represent neuronal mechanisms involved in cortical activation, inhibition and binding. ERD and ERS are commonly estimated by averaging the time-frequency decomposition of single trials. However, their trial-to-trial variability, which can reflect physiologically important information, is lost by across-trial averaging. Here, we aim to (1) develop novel approaches to explore single-trial parameters (including latency, frequency and magnitude) of ERP/ERD/ERS; (2) disclose the relationship between estimated single-trial parameters and other experimental factors (e.g., perceived intensity). We found that (1) stimulus-elicited ERP/ERD/ERS can be correctly separated using principal component analysis (PCA) decomposition with Varimax rotation on the single-trial time-frequency distributions; (2) time-frequency multiple linear regression with dispersion term (TF-MLRd) enhances the signal-to-noise ratio of ERP/ERD/ERS in single trials, and provides an unbiased estimation of their latency, frequency, and magnitude at the single-trial level; (3) these estimates can be meaningfully correlated with each other and with other experimental factors at the single-trial level (e.g., perceived stimulus intensity and ERP magnitude). The methods described in this article allow exploring fully non-phase-locked stimulus-induced cortical…

13. Hierarchical linear modeling (HLM) of longitudinal brain structural and cognitive changes in alcohol-dependent individuals during sobriety
DEFF Research Database (Denmark)
Yeh, P.H.; Gazdzinski, S.; Durazzo, T.C.
2007-01-01
…magnetic resonance imaging (MRI)-derived brain volume changes and cognitive changes in abstinent alcohol-dependent individuals as a function of smoking status, smoking severity, and drinking quantities. Methods: Twenty non-smoking recovering alcoholics (nsALC) and 30 age-matched smoking recovering alcoholics (sALC) underwent quantitative MRI… time points. Using HLM, we modeled volumetric and cognitive outcome measures as a function of cigarette and alcohol use variables.
Results: Different hierarchical linear models with unique model structures are presented and discussed. The results show that smaller brain volumes at baseline predict faster brain volume gains, which were also related to greater smoking and drinking severities. Over 7 months of abstinence from alcohol, sALC compared to nsALC showed smaller improvements in visuospatial learning and memory despite larger brain volume gains and ventricular shrinkage. Conclusions: Different…

14. Isolating and Examining Sources of Suppression and Multicollinearity in Multiple Linear Regression
Science.gov (United States)
2012-01-01
The presence of suppression (and multicollinearity) in multiple regression analysis complicates interpretation of predictor-criterion relationships. The mathematical conditions that produce suppression in regression analysis have received considerable attention in the methodological literature, but until now nothing in the way of an analytic…

15. Toward Customer-Centric Organizational Science: A Common Language Effect Size Indicator for Multiple Linear Regressions and Regressions With Higher-Order Terms.
Science.gov (United States)
Krasikova, Dina V; Le, Huy; Bachura, Eric
2018-01-22
To address a long-standing concern regarding a gap between organizational science and practice, scholars have called for more intuitive and meaningful ways of communicating research results to users of academic research. In this article, we develop a common language effect size index (CLβ) that can help translate research results to practice. We demonstrate how CLβ can be computed and used to interpret the effects of continuous and categorical predictors in multiple linear regression models. We also elaborate on how the proposed CLβ index is computed and used to interpret interactions and nonlinear effects in regression models. In addition, we test the robustness of the proposed index to violations of normality and provide means for computing standard errors and constructing confidence intervals around its estimates. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

16. QSAR models for prediction study of HIV protease inhibitors using support vector machines, neural networks and multiple linear regression
Directory of Open Access Journals (Sweden)
Rachid Darnag
2017-02-01
Full Text Available
Support vector machines (SVM) represent one of the most promising machine learning (ML) tools that can be applied to develop predictive quantitative structure-activity relationship (QSAR) models using molecular descriptors. Multiple linear regression (MLR) and artificial neural networks (ANNs) were also utilized to construct quantitative linear and non-linear models to compare with the results obtained by SVM. The prediction results are in good agreement with the experimental values of HIV activity; the results also reveal the superiority of the SVM over the MLR and ANN models. The contribution of each descriptor to the structure-activity relationships was evaluated.
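Entry 16 above compares SVM, ANN, and MLR models for QSAR prediction. The sketch below shows the shape of such a comparison with scikit-learn on synthetic descriptor data (the descriptors, activities, and hyperparameters are placeholders, not the paper's):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(80, 5))                  # 80 compounds, 5 descriptors
y = X @ [1.0, -0.5, 0.0, 2.0, 0.3] + 0.5 * np.sin(X[:, 0]) \
    + rng.normal(0, 0.2, size=80)             # mildly non-linear activity

models = {
    "MLR": LinearRegression(),
    "SVR": SVR(kernel="rbf", C=10.0, epsilon=0.1),
    "ANN": MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {r2.mean():.3f}")
```

With a non-linear component in the underlying relationship, the kernel SVR typically edges out plain MLR in cross-validated R², mirroring the qualitative conclusion of the entry.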
17. Linear regressive model structures for estimation and prediction of compartmental diffusive systems
NARCIS (Netherlands)
Vries, D; Keesman, K.J.; Zwart, Heiko J.
In input-output relations of (compartmental) diffusive systems, physical parameters appear non-linearly, resulting in the use of (constrained) non-linear parameter estimation techniques with their shortcomings regarding global optimality and computational effort. Given an LTI system in state space…

18. Linear regressive model structures for estimation and prediction of compartmental diffusive systems
NARCIS (Netherlands)
Vries, D.; Keesman, K.J.; Zwart, H.
2006-01-01
Abstract: In input-output relations of (compartmental) diffusive systems, physical parameters appear non-linearly, resulting in the use of (constrained) non-linear parameter estimation techniques with their shortcomings regarding global optimality and computational effort. Given an LTI system in state…

19. A STATISTICAL ANALYSIS OF GDP AND FINAL CONSUMPTION USING SIMPLE LINEAR REGRESSION. THE CASE OF ROMANIA 1990–2010
OpenAIRE
Aniela Balacescu; Marian Zaharia
2011-01-01
This paper aims to examine the causal relationship between GDP and final consumption. The authors used a linear regression model in which GDP is taken as the response variable and final consumption as the explanatory factor. In drafting the article we used the Excel software application for computation and statistical data analysis.

20. A simple bias correction in linear regression for quantitative trait association under two-tail extreme selection
OpenAIRE
Kwan, Johnny S. H.; Kung, Annie W. C.; Sham, Pak C.
2011-01-01
Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias. © The Author(s) 2011.

1. Estimation of error components in a multi-error linear regression model, with an application to track fitting
International Nuclear Information System (INIS)
Fruehwirth, R.
1993-01-01
We present an estimation procedure for the error components in a linear regression model with multiple independent stochastic error contributions. After solving the general problem we apply the results to the estimation of the actual trajectory in track fitting with multiple scattering. (orig.)

2. The Prediction Properties of Inverse and Reverse Regression for the Simple Linear Calibration Problem
Science.gov (United States)
Parker, Peter A.; Vining, G. Geoffrey; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.
2010-01-01
The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response in order to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for inverse regression. However, it also violates some of the basic regression assumptions.
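A small numeric sketch of the two calibration strategies contrasted in entry 2 above (invented standards data; numpy only): forward regression fits the reading against the standard and is then inverted, while reverse regression regresses the standard directly on the observed reading:

```python
import numpy as np

rng = np.random.default_rng(7)
standards = np.linspace(1.0, 10.0, 10)            # known reference values
readings = 0.2 + 0.98 * standards + rng.normal(0, 0.05, 10)  # instrument output

# Forward regression: reading = a + b * standard, then invert
b, a = np.polyfit(standards, readings, 1)
def forward_inverse(y_new):
    return (y_new - a) / b

# Reverse regression: standard = c + d * reading (used directly)
d, c = np.polyfit(readings, standards, 1)
def reverse(y_new):
    return c + d * y_new

y0 = 5.1                                          # a new instrument reading
print(f"forward-then-inverse estimate: {forward_inverse(y0):.3f}")
print(f"reverse regression estimate:   {reverse(y0):.3f}")
```

The two estimates are close here because the calibration is nearly exact; the paper's point is that their statistical properties differ, since reverse regression puts the error on the standards and thereby violates the usual regression assumptions.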
3. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.
Science.gov (United States)
Chen, Baojiang; Qin, Jing
2014-05-10
In statistical analysis, a regression model is needed if one is interested in finding the relationship between a response variable and covariates. When the response depends on the covariate, it may also depend on a function of this covariate. If one has no knowledge of this functional form, but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), where the monotonicity constraints are built in. With missing data, people often employ the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work as the monotonicity constraints are violated. In this paper, we develop an empirical-likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study. Copyright © 2013 John Wiley & Sons, Ltd.

4. Evaluation of a multiple linear regression model and SARIMA model in forecasting heat demand for district heating system
International Nuclear Information System (INIS)
Fang, Tingting; Lahdelma, Risto
2016-01-01
Highlights: • A social factor is considered for the linear regression models besides weather variables. • All the coefficients for the linear regression models are optimized simultaneously. • SARIMA combined with linear regression is used to forecast the heat demand. • The accuracy of both the linear regression and time series models is evaluated. - Abstract: Forecasting heat demand is necessary for production and operation planning of district heating (DH) systems. In this study we first propose a simple regression model where the hourly outdoor temperature and wind speed forecast the heat demand. The weekly rhythm of heat consumption as a social component is added to the model to significantly improve the accuracy. The other type of model is the seasonal autoregressive integrated moving average (SARIMA) model with exogenous variables, used in combination to take in weather factors, with the historical heat consumption data as the dependent variable. One outstanding advantage of this model is that it pursues high accuracy for both long-term and short-term forecasts by considering both exogenous factors and the time series. The forecasting performance of both the linear regression models and the time series model is evaluated based on real-life heat demand data for the city of Espoo in Finland, by out-of-sample tests for the last 20 full weeks of the year. The results indicate that the proposed linear regression model (T168h), using a 168-h demand pattern with midweek holidays classified as Saturdays or Sundays, gives the highest accuracy and strong robustness among all the tested models based on the tested forecasting horizon and corresponding data. Considering the parsimony of the input, the ease of use and the high accuracy, the proposed T168h model is the best in practice. The heat demand forecasting model can also be developed for individual buildings if automated meter reading customer measurements are available. This would allow forecasting the heat demand based on more accurate heat consumption…
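A compact sketch of the T168h idea from entry 4 above: regress hourly heat demand on outdoor temperature, wind speed, and a 168-hour weekly-rhythm pattern encoded as hour-of-week dummies. The data and coefficients below are synthetic stand-ins:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
idx = pd.date_range("2024-01-01", periods=24 * 7 * 8, freq="h")  # 8 weeks
temp = 5 + 10 * np.sin(2 * np.pi * idx.dayofyear / 365) + rng.normal(0, 2, len(idx))
wind = np.abs(rng.normal(4, 2, len(idx)))
hour_of_week = idx.dayofweek * 24 + idx.hour                     # 0..167
weekly = np.where(hour_of_week % 24 < 8, 0.8, 1.1)               # toy social rhythm
demand = 50 - 1.5 * temp + 0.8 * wind + 10 * weekly + rng.normal(0, 1, len(idx))

X = pd.DataFrame({"temp": temp, "wind": wind})
X = pd.concat([X, pd.get_dummies(hour_of_week, prefix="how", dtype=float)], axis=1)

model = LinearRegression().fit(X, demand)
print(f"in-sample R^2 = {model.score(X, demand):.3f}")
```

Here the 168 dummies soak up the weekly social rhythm while temperature and wind capture the weather response; the paper optimizes the pattern and weather coefficients simultaneously, which ordinary least squares on this design matrix also achieves.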
5. Comparison of some biased estimation methods (including ordinary subset regression) in the linear model
Science.gov (United States)
Sidik, S. M.
1975-01-01
Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.
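To illustrate the near-singular setting that entry 5 above is concerned with, the following sketch compares ordinary least squares and ridge regression on collinear synthetic predictors; the collinearity level and penalty are arbitrary choices:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(5)
n = 60
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(0, 0.01, size=n)     # nearly collinear with x1
X = np.column_stack([x1, x2])
y = 3 * x1 + 3 * x2 + rng.normal(0, 1, size=n)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)
print("OLS coefficients:  ", np.round(ols.coef_, 2))   # wild, offsetting values
print("Ridge coefficients:", np.round(ridge.coef_, 2)) # shrunk toward ~3, 3
```

The OLS coefficients are individually unstable even though their sum is well determined, which is precisely the situation where estimable functions, rather than the full parameter vector, should be the target.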
6. Straight line fitting and predictions: On a marginal likelihood approach to linear regression and errors-in-variables models
Science.gov (United States)
Christiansen, Bo
2015-04-01
Linear regression methods are without doubt the most used approaches to describe and predict data in the physical sciences. They are often good first-order approximations and they are in general easier to apply and interpret than more advanced methods. However, even the properties of univariate regression can lead to debate over the appropriateness of various models, as witnessed by the recent discussion about climate reconstruction methods. Before linear regression is applied, important choices have to be made regarding the origins of the noise terms and regarding which of the two variables under consideration should be treated as the independent variable. These decisions are often not easy to make but they may have a considerable impact on the results. We seek to give a unified probabilistic - Bayesian with flat priors - treatment of univariate linear regression and prediction by taking, as a starting point, the general errors-in-variables model (Christiansen, J. Clim., 27, 2014-2031, 2014). Other versions of linear regression can be obtained as limits of this model. We derive the likelihood of the model parameters and predictands of the general errors-in-variables model by marginalizing over the nuisance parameters. The resulting likelihood is relatively simple and easy to analyze and calculate. The well-known unidentifiability of the errors-in-variables model is manifested as the absence of a well-defined maximum in the likelihood. However, this does not mean that probabilistic inference cannot be made; the marginal likelihoods of model parameters and the predictands have, in general, well-defined maxima. We also include a probabilistic version of classical calibration and show how it is related to the errors-in-variables model. The results are illustrated by an example from the coupling between the lower stratosphere and the troposphere in the Northern Hemisphere winter.

7. Non-Linear Behaviour Of Gelatin Networks Reveals A Hierarchical Structure
KAUST Repository
Yang, Zhi; Hemar, Yacine; Hilliou, loic; Gilbert, Elliot P.; McGillivray, Duncan James; Williams, Martin A. K.; Chaieb, Saharoui
2015-01-01

8. Non-Linear Behaviour Of Gelatin Networks Reveals A Hierarchical Structure
KAUST Repository
Yang, Zhi
2015-12-14

9. Endogenous glucose production from infancy to adulthood: a non-linear regression model
NARCIS (Netherlands)
Huidekoper, Hidde H.; Ackermans, Mariëtte T.; Ruiter, An F. C.; Sauerwein, Hans P.; Wijburg, Frits A.
2014-01-01
To construct a regression model for endogenous glucose production (EGP) as a function of age, and to compare this with glucose supplementation using commonly used dextrose-based saline solutions at fluid maintenance rate in children. A model was constructed based on EGP data, as quantified by…

10. Weighted linear regression using D²H and D² as the independent variables
Science.gov (United States)
Hans T. Schreuder; Michael S. Williams
1998-01-01
Several error structures for weighted regression equations used for predicting volume were examined for 2 large data sets of felled and standing loblolly pine trees (Pinus taeda L.). The generally accepted model with variance of error proportional to the value of the covariate squared (D²H = diameter squared times height, or D…
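Entry 10 above concerns weighted regression where the error variance grows with the covariate. A minimal statsmodels sketch with invented tree data, using weights proportional to 1/(D²H)² as that error structure implies:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
d2h = rng.uniform(0.5, 8.0, size=40)            # D^2 * H, arbitrary units
# Heteroscedastic volumes: noise standard deviation grows with d2h
volume = 0.35 * d2h + rng.normal(0, 0.05 * d2h)

X = sm.add_constant(d2h)
wls = sm.WLS(volume, X, weights=1.0 / d2h**2).fit()   # var(error) prop. to (D^2 H)^2
ols = sm.OLS(volume, X).fit()
print("WLS slope:", round(wls.params[1], 4), "+/-", round(wls.bse[1], 4))
print("OLS slope:", round(ols.params[1], 4), "+/-", round(ols.bse[1], 4))
```

Both estimators are unbiased here, but the weighted fit downweights the noisy large trees and reports standard errors that are honest under the assumed error structure.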
11. Comparing Linear Discriminant Function with Logistic Regression for the Two-Group Classification Problem.
Science.gov (United States)
Fan, Xitao; Wang, Lin
The Monte Carlo study compared the performance of predictive discriminant analysis (PDA) and that of logistic regression (LR) for the two-group classification problem. Prior probabilities were used for classification, but the cost of misclassification was assumed to be equal. The study used a fully crossed three-factor experimental design (with…

12. NetRaVE: constructing dependency networks using sparse linear regression
DEFF Research Database (Denmark)
Phatak, A.; Kiiveri, H.; Clemmensen, Line Katrine Harder
2010-01-01
NetRaVE is a small suite of R functions for generating dependency networks using sparse regression methods. Such networks provide an alternative to interpreting 'top n lists' of genes arising out of an analysis of microarray data, and they provide a means of organizing and visualizing the resulting…

13. FIRE: an SPSS program for variable selection in multiple linear regression analysis via the relative importance of predictors.
Science.gov (United States)
Lorenzo-Seva, Urbano; Ferrando, Pere J
2011-03-01
We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the regression equation, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.

14. Isotherms and thermodynamics by linear and non-linear regression analysis for the sorption of methylene blue onto activated carbon: Comparison of various error functions
International Nuclear Information System (INIS)
Kumar, K. Vasanth; Porkodi, K.; Rocha, F.
2008-01-01
A comparison of linear and non-linear regression methods in selecting the optimum isotherm was made using the experimental equilibrium data of methylene blue sorption by activated carbon. The r² was used to select the best-fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r²), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS), were used to predict the parameters involved in the two- and three-parameter isotherms and also to predict the optimum isotherm. For the two-parameter isotherms, MPSD was found to be the best error function in minimizing the error distribution between the experimental equilibrium data and the predicted isotherms. In the case of the three-parameter isotherms, r² was found to be the best error function to minimize the error distribution between experimental equilibrium data and theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified with the help of experimental data while selecting the optimum isotherm. A coefficient of non-determination, K², was explained and was found to be very useful in identifying the best error function while selecting the optimum isotherm.
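To make the error-function comparison in entry 14 above concrete, the sketch below fits a two-parameter Langmuir isotherm q = qm·K·C/(1 + K·C) by non-linear regression under two different objective functions (an ERRSQ-style absolute criterion and an MPSD-style relative one); the equilibrium data are fabricated:

```python
import numpy as np
from scipy.optimize import minimize

C = np.array([5, 10, 25, 50, 100, 200.0])       # equilibrium conc. (mg/L)
q = np.array([22, 38, 65, 88, 105, 118.0])      # sorbed amount (mg/g)

def langmuir(p, c):
    qm, K = p
    return qm * K * c / (1.0 + K * c)

def errsq(p):                                    # sum of squared errors
    return np.sum((q - langmuir(p, C)) ** 2)

def relerr(p):                                   # relative-error criterion
    return np.sum(((q - langmuir(p, C)) / q) ** 2)

p0 = [120.0, 0.02]
fit_abs = minimize(errsq, p0, method="Nelder-Mead").x
fit_rel = minimize(relerr, p0, method="Nelder-Mead").x
print("ERRSQ fit:    qm=%.1f, K=%.4f" % tuple(fit_abs))
print("relative fit: qm=%.1f, K=%.4f" % tuple(fit_rel))
```

The two criteria weight low- and high-concentration points differently and so return slightly different parameters from the same data, which is exactly why the entry cautions against picking an isotherm on the size of a single error function alone.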
15. Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.
Science.gov (United States)
Vidal, Sherry
Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…

16. Model structure learning: A support vector machine approach for LPV linear-regression models
NARCIS (Netherlands)
Toth, R.; Laurain, V.; Zheng, W-X.; Poolla, K.
2011-01-01
Accurate parametric identification of linear parameter-varying (LPV) systems requires an optimal prior selection of a set of functional dependencies for the parametrization of the model coefficients. Inaccurate selection leads to structural bias, while over-parametrization results in a variance…

17. The estimation and prediction of the inventories for the liquid and gaseous radwaste systems using linear regression analysis
International Nuclear Information System (INIS)
Kim, J. Y.; Shin, C. H.; Kim, J. K.; Lee, J. K.; Park, Y. J.
2003-01-01
The variation trends of the inventories for the liquid radwaste system and for the radioactive gas released in containment, together with their predicted values according to the operation histories of Yonggwang (YGN) 3 and 4, were analyzed by linear regression methodology. The results show that the inventories for those systems increase linearly with operating history, but the inventories released to the environment are considerably lower than the recommended values based on the FSAR suggestions. It is considered that some conservatism was present in the estimation methodology used in preparing the FSAR.

18. The Effects of Gender, Engineering Identification, and Engineering Program Expectancy on Engineering Career Intentions: Applying Hierarchical Linear Modeling (HLM) in Engineering Education Research
Science.gov (United States)
Tendhar, Chosang; Paretti, Marie C.; Jones, Brett D.
2017-01-01
This study had three purposes, and four hypotheses were tested. Three purposes: (1) To use hierarchical linear modeling (HLM) to investigate whether students' perceptions of their engineering career intentions changed over time; (2) To use HLM to test the effects of gender, engineering identification (the degree to which an individual values a…

19. Discussion on Regression Methods Based on Ensemble Learning and Applicability Domains of Linear Submodels.
Science.gov (United States)
Kaneko, Hiromasa
2018-02-26
To develop a new ensemble learning method and construct highly predictive regression models in chemoinformatics and chemometrics, applicability domains (ADs) are introduced into the ensemble learning process of prediction. When estimating values of an objective variable using subregression models, only the submodels with ADs that cover a query sample, i.e., the sample is inside the model's AD, are used. By constructing submodels and changing the list of selected explanatory variables, the union of the submodels' ADs, which defines the overall AD, becomes large, and the prediction performance is enhanced for diverse compounds. By analyzing a quantitative structure-activity relationship data set and a quantitative structure-property relationship data set, it is confirmed that the ADs can be enlarged and the estimation performance of regression models is improved compared with traditional methods.

20. Linear Regression with a Randomly Censored Covariate: Application to an Alzheimer's Study.
Science.gov (United States)
Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A
2017-01-01
The association between maternal age of onset of dementia and amyloid deposition (measured by in vivo positron emission tomography (PET) imaging) in cognitively normal older offspring is of interest. In a regression model for amyloid, special methods are required due to the random right censoring of the covariate of maternal age of onset of dementia. Prior literature has proposed methods to address the problem of censoring due to assay limit of detection, but not random censoring. We propose imputation methods and a survival regression method that do not require parametric assumptions about the distribution of the censored covariate. Existing imputation methods address missing covariates, but not right-censored covariates. In simulation studies, we compare these methods to the simple, but inefficient, complete-case analysis, and to thresholding approaches. We apply the methods to the Alzheimer's study.

1. SOCP relaxation bounds for the optimal subset selection problem applied to robust linear regression
OpenAIRE
2015-01-01
This paper deals with the problem of finding the globally optimal subset of h elements from a larger set of n elements in d space dimensions so as to minimize a quadratic criterion, with a special emphasis on applications to computing the Least Trimmed Squares Estimator (LTSE) for robust regression. The computation of the LTSE is a challenging subset selection problem involving a nonlinear program with continuous and binary variables, linked in a highly nonlinear fashion. The selection of a…

2. Comparing Machine Learning Classifiers and Linear/Logistic Regression to Explore the Relationship between Hand Dimensions and Demographic Characteristics.
Science.gov (United States)
Miguel-Hurtado, Oscar; Guest, Richard; Stevenage, Sarah V; Neil, Greg J; Black, Sue
2016-01-01
Understanding the relationship between physiological measurements from human subjects and their demographic data is important within both the biometric and forensic domains. In this paper we explore the relationship between measurements of the human hand and a range of demographic features. We assess the ability of linear regression and machine learning classifiers to predict demographics from hand features, thereby providing evidence on both the strength of the relationship and the key features underpinning it. Our results show that we are able to predict sex, height, weight and foot size accurately within various data-range bin sizes, with machine learning classification algorithms out-performing linear regression in most situations. In addition, we identify the features used to provide these relationships, applicable across multiple applications.

3. Development of a Multiple Linear Regression Model to Forecast Facility Electrical Consumption at an Air Force Base.
Science.gov (United States)
1981-09-01
[Only fragments of this record survive:] …corresponds to the same square footage that consumed the electrical energy. …3. The basic assumptions of multiple linear regression, as enumerated in… 7. Data related to the sample of bases is assumed to be representative of bases in the population. Limitations: Basic limitations on this research were… [bibliography fragments:] Ratemaking -- Overview. Rand Report R-5894, Santa Monica CA, May 1977. Chatterjee, Samprit, and Bertram Price. Regression Analysis by Example. New York: John…

4. Mathematical considerations regarding the stability of the trace element systems by linear regressions
International Nuclear Information System (INIS)
Mihai, Maria; Popescu, I.V.
2002-01-01
In this paper we present a mathematical model that describes the stability and instability conditions of the organs of the human body, regarded as a living cybernetic system with feedback. We tested the theoretical model on the following trace elements: Mn, Zn and As. The trace elements were determined from nose-pharyngeal carcinoma. We utilise a linear approximation to describe the dependencies between the trace elements determined in the hair of the patient. We present the results graphically. (authors)

5. Lattice Designs in Standard and Simple Implicit Multi-linear Regression
OpenAIRE
Wooten, Rebecca D.
2016-01-01
Statisticians generally use ordinary least squares to minimize the random error in a subject response with respect to independent explanatory variables. However, Wooten illustrates how ordinary least squares can be used to minimize the random error in the system without defining a subject response. Using lattice designs, Wooten shows that non-response analysis is a superior alternative rotation of the pyramidal relationship between random variables and parameter estimates in multi-linear r…

6. Modeling daily soil temperature over diverse climate conditions in Iran - a comparison of multiple linear regression and support vector regression techniques
Science.gov (United States)
Delbari, Masoomeh; Sharifazari, Salman; Mohammadi, Ehsan
2018-02-01
The knowledge of soil temperature at different depths is important for the agricultural industry and for understanding climate change. The aim of this study is to evaluate the performance of a support vector regression (SVR)-based model in estimating daily soil temperature at 10, 30 and 100 cm depth at different climate conditions over Iran.
The obtained results were compared to those obtained from a more classical multiple linear regression (MLR) model. The correlation sensitivity for the input combinations and the periodicity effect were also investigated. Climatic data used as inputs to the models were minimum and maximum air temperature, solar radiation, relative humidity, dew point, and atmospheric pressure (reduced to sea level), collected from five synoptic stations (Kerman, Ahvaz, Tabriz, Saghez, and Rasht) located respectively in hyper-arid, arid, semi-arid, Mediterranean, and hyper-humid climate conditions. According to the results, the performance of both the MLR and SVR models was quite good at the surface layer, i.e., 10-cm depth. However, SVR performed better than MLR in estimating soil temperature at deeper layers, especially 100 cm depth. Moreover, both models performed better in humid climate conditions than in arid and hyper-arid areas. Further, adding a periodicity component into the modeling process considerably improved the models' performance, especially in the case of SVR.

7. Modeling the frequency of opposing left-turn conflicts at signalized intersections using generalized linear regression models.
Science.gov (United States)
Zhang, Xin; Liu, Pan; Chen, Yuguang; Bai, Lu; Wang, Wei
2014-01-01
The primary objective of this study was to identify whether the frequency of traffic conflicts at signalized intersections can be modeled. The opposing left-turn conflicts were selected for the development of conflict predictive models. Using data collected at 30 approaches at 20 signalized intersections, the underlying distributions of the conflicts under different traffic conditions were examined. Different conflict-predictive models were developed to relate the frequency of opposing left-turn conflicts to various explanatory variables. The models considered include a linear regression model, a negative binomial model, and separate models developed for four traffic scenarios. The prediction performance of the different models was compared. The frequency of traffic conflicts follows a negative binomial distribution. The linear regression model is not appropriate for the conflict frequency data. In addition, drivers behaved differently under different traffic conditions. Accordingly, the effects of conflicting traffic volumes on conflict frequency vary across different traffic conditions. The occurrence of traffic conflicts at signalized intersections can be modeled using generalized linear regression models. The use of conflict predictive models has the potential to expand the uses of surrogate safety measures in safety estimation and evaluation.
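Entry 7 above finds that conflict counts follow a negative binomial distribution and models them with a generalized linear model. A minimal statsmodels sketch on synthetic intersection data (volumes and coefficients invented):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 200
left_turn_vol = rng.uniform(50, 400, size=n)      # veh/h, hypothetical
opposing_vol = rng.uniform(200, 1200, size=n)
mu = np.exp(-3.0 + 0.004 * left_turn_vol + 0.002 * opposing_vol)
conflicts = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))  # overdispersed counts

X = sm.add_constant(np.column_stack([left_turn_vol, opposing_vol]))
nb = sm.GLM(conflicts, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(nb.summary().tables[1])   # coefficients on the log-mean scale
```

A Poisson family would understate the standard errors for such overdispersed counts; the negative binomial family is the GLM analogue of the entry's conclusion that ordinary linear regression is inappropriate here.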
8. Improving ASTER GDEM Accuracy Using Land Use-Based Linear Regression Methods: A Case Study of Lianyungang, East China
Directory of Open Access Journals (Sweden)
Xiaoyan Yang
2018-04-01
Full Text Available
The Advanced Spaceborne Thermal-Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM) is important to a wide range of geographical and environmental studies. Its accuracy, to some extent associated with land-use types reflecting topography, vegetation coverage, and human activities, impacts the results and conclusions of these studies. In order to improve the accuracy of ASTER GDEM prior to its application, we investigated ASTER GDEM errors based on individual land-use types and proposed two linear regression calibration methods, one considering only land-use-specific errors and the other considering the impact of both land use and topography. Our calibration methods were tested on the coastal prefectural city of Lianyungang in eastern China. Results indicate that: (1) ASTER GDEM is highly accurate for rice, wheat, grass and mining lands but less accurate for scenic, garden, wood and bare lands; (2) despite improvements in ASTER GDEM2 accuracy, multiple linear regression calibration requires more data (topography) and a relatively complex calibration process; (3) simple linear regression calibration proves a practicable and simplified means to systematically investigate and improve the impact of land use on ASTER GDEM accuracy. Our method is applicable to areas with detailed land-use data based on highly accurate field-based point-elevation measurements.

9. A study on direct determination of uranium in ore by analyzing γ-ray spectrum with dual linear regression
International Nuclear Information System (INIS)
Liu Chunkui
1996-01-01
The method introduced is based on the different energies of γ-rays emitted from radionuclides in the uranium-radium decay series in ore. The pulse counting rates of two spectral bands, i.e. N₁ (55-193 keV) and N₂ (260-1500 keV), are measured by a portable HYX-3 400-channel γ-ray spectrometer. On the other side, the uranium content (Q_U) is obtained by chemical analysis of channel sampling. The regression coefficients (b₀, b₁, b₂) can then be determined through dual linear regression using Q_U and N₁, N₂. The direct determination of uranium can be made with the regression equation Q_U = b₀ + b₁N₁ + b₂N₂.

10. Data processing for potentiometric precipitation titration of mixtures of isovalent ions by linear regression analysis
International Nuclear Information System (INIS)
Mar'yanov, B.M.; Shumar, S.V.; Gavrilenko, M.A.
1994-01-01
A method for the computer processing of the curves of potentiometric differential titration using precipitation reactions is developed. The method is based on transformation of the titration curve into a line of multiphase regression, whose parameters determine the equivalence points and the solubility products of the formed precipitates. The computational algorithm is tested using experimental curves for the titration of solutions containing Hg(II) and Cd(II) by a solution of sodium diethyldithiocarbamate. The random errors (RSD) for the titration of 1×10⁻⁴ M solutions are in the range of 3-6%. 7 refs.; 2 figs.; 1 tab.
11. Single camera multi-view anthropometric measurement of human height and mid-upper arm circumference using linear regression.
Science.gov (United States)
Liu, Yingying; Sowmya, Arcot; Khamis, Heba
2018-01-01
Manually measured anthropometric quantities are used in many applications including human malnutrition assessment. Training is required to collect anthropometric measurements manually, which is not ideal in resource-constrained environments. Photogrammetric methods have been gaining attention in recent years, due to the availability and affordability of digital cameras. The primary goal is to demonstrate that height and mid-upper arm circumference (MUAC) - indicators of malnutrition - can be accurately estimated by applying linear regression to distance measurements from photographs of participants taken from five views, and to determine the optimal view combinations. A secondary goal is to observe the effect on estimate error of two approaches which reduce the complexity of the setup, the computational requirements, and the expertise required of the observer. Thirty-one participants (11 female, 20 male; 18-37 years) were photographed from five views. Distances were computed using both camera calibration and reference object techniques from manually annotated photos. To estimate height, linear regression was applied to the distance between the top of the participant's head and the floor, as well as to the height of a bounding box enclosing the participant's silhouette, which eliminates the need to identify the floor. To estimate MUAC, linear regression was applied to the mid-upper arm width. Estimates were computed for all view combinations and performance was compared to other photogrammetric methods from the literature - the linear distance method for height, and shape models for MUAC. The mean absolute difference (MAD) between the linear regression estimates and manual measurements was smaller compared to other methods. For the optimal view combinations (smallest MAD), the technical error of measurement and coefficient of reliability also indicate that the linear regression methods are more reliable. The optimal view combination was the front and side views. When estimating height by linear…

12. Using Hierarchical Linear Modeling to Examine How Individual SLPs Differentially Contribute to Children's Language and Literacy Gains in Public Schools.
Science.gov (United States)
Farquharson, Kelly; Tambyraja, Sherine R; Logan, Jessica; Justice, Laura M; Schmitt, Mary Beth
2015-08-01
The purpose of this study was twofold: (a) to determine the unique contributions to children's language and literacy gains, over 1 academic year, that are attributable to the individual speech-language pathologist (SLP), and (b) to explore possible child- and SLP-level factors that may further explain SLPs' contributions to children's language and literacy gains. Participants were 288 kindergarten and 1st-grade children with language impairment who were currently receiving school-based language intervention from SLPs. Using hierarchical linear modeling, we partitioned the variance in children's gains in language (i.e., grammar, vocabulary) and literacy (i.e., word decoding) that could be attributed to their individual SLP. Results revealed a significant contribution of individual SLPs to children's gains in grammar, vocabulary, and word decoding. Children's fall language scores and grade were significant predictors of SLPs' contributions, although no SLP-level predictors were significant. The present study makes a first step toward incorporating implementation science and suggests that, for children receiving school-based language intervention, variance in child language and literacy gains in an academic year is at least partially attributable to SLPs. Continued work in this area should examine the possible SLP-level characteristics that may further explicate the relative contributions of SLPs.
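Entry 12 above partitions outcome variance across clustering units (children nested within SLPs) with hierarchical linear modeling. A schematic statsmodels version on fabricated data fits a random-intercept model and reads off the between-SLP variance component:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(13)
n_slp, per_slp = 30, 10
slp_effect = rng.normal(0, 2.0, size=n_slp)            # SLP-level intercepts
rows = []
for s in range(n_slp):
    for _ in range(per_slp):
        fall = rng.normal(100, 10)                     # fall language score
        gain = 5 + 0.1 * (fall - 100) + slp_effect[s] + rng.normal(0, 3)
        rows.append({"slp": s, "fall": fall, "gain": gain})
df = pd.DataFrame(rows)

m = sm.MixedLM.from_formula("gain ~ fall", groups="slp", data=df).fit()
between = float(m.cov_re.iloc[0, 0])                   # SLP variance component
within = m.scale                                       # residual variance
print(m.summary())
print(f"ICC (share of variance attributable to SLP) = {between / (between + within):.2f}")
```

The intraclass correlation computed at the end is the quantity behind statements like "gains are partially attributable to the individual SLP"; child-level predictors such as the fall score enter as fixed effects.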
13. Predicting the aquatic toxicity mode of action using logistic regression and linear discriminant analysis.
Science.gov (United States)
Ren, Y Y; Zhou, L C; Yang, L; Liu, P Y; Zhao, B W; Liu, H X
2016-09-01
The paper highlights the use of the logistic regression (LR) method in the construction of acceptable, statistically significant, robust and predictive models for the classification of chemicals according to their aquatic toxic modes of action. Essentials accounting for a reliable model were all considered carefully. The model predictors were selected by stepwise forward discriminant analysis (LDA) from a combined pool of experimental data and chemical structure-based descriptors calculated by the CODESSA and DRAGON software packages. Model predictive ability was validated both internally and externally. The applicability domain was checked by the leverage approach to verify prediction reliability. The obtained models are simple and easy to interpret. In general, LR performs much better than LDA and seems to be more attractive for the prediction of the more toxic compounds, i.e., compounds that exhibit excess toxicity versus non-polar narcotic compounds, and more reactive compounds versus less reactive compounds. In addition, model fit and regression diagnostics were examined through the influence plot, which reflects the hat-values, studentized residuals, and Cook's distance statistics of each sample. Overdispersion was also checked for the LR model. The relationships between the descriptors and the aquatic toxic behaviour of the compounds are also discussed.
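A bare-bones scikit-learn sketch of the comparison in entry 13 above, classifying a synthetic two-class descriptor set with logistic regression and linear discriminant analysis (descriptors and labels are simulated, not the paper's data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(17)
# Two toxicity modes of action with overlapping descriptor distributions
X0 = rng.normal(loc=[0, 0, 0], scale=1.0, size=(100, 3))        # narcotics
X1 = rng.normal(loc=[1.2, 0.8, 0.5], scale=1.3, size=(100, 3))  # reactive
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("LDA", LinearDiscriminantAnalysis())]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean CV accuracy = {acc.mean():.3f}")
```

LDA assumes equal class covariances, which the unequal scales above deliberately violate; logistic regression makes no such distributional assumption, one plausible reason for the LR advantage the entry reports.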
14. 10 km running performance predicted by a multiple linear regression model with allometrically adjusted variables.
Science.gov (United States)
Abad, Cesar C C; Barros, Ronaldo V; Bertuzzi, Romulo; Gagliardi, João F L; Lima-Silva, Adriano E; Lambert, Mike I; Pires, Flavio O
2016-06-01
The aim of this study was to verify the power of VO₂max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: (1) an incremental test to exhaustion to determine VO₂max and PTV; (2) a constant submaximal run at 12 km·h⁻¹ on an outdoor track for RE determination; and (3) a 10 km running race. Unadjusted (VO₂max, PTV and RE) and adjusted variables (VO₂max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO₂max. Significant correlations (p < 0.05) … > 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model, composed of a single PTV, accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation.

15. Application of empirical mode decomposition with local linear quantile regression in financial time series forecasting.
Science.gov (United States)
Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M
2014-01-01
This paper mainly forecasts the daily closing price of stock markets. We propose a two-stage technique that combines the empirical mode decomposition (EMD) with nonparametric methods of local linear quantile (LLQ). We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented for the proposed method, in which the EMD-LLQ, EMD, and Holt-Winter methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winter methods in predicting the stock closing prices.

16. Comparison of Adaline and Multiple Linear Regression Methods for Rainfall Forecasting
Science.gov (United States)
Sutawinaya, IP; Astawa, INGA; Hariyanti, NKD
2018-01-01
Heavy rainfall can cause disasters; therefore, a forecast is needed to predict rainfall intensity. The main factor that causes flooding is high rainfall intensity, which pushes a river beyond its capacity and floods the surrounding area. Rainfall is a dynamic factor, so it is very interesting to study. To support rainfall forecasting, methods ranging from Artificial Intelligence (AI) to statistics can be used. In this research, we used Adaline as the AI method and regression as the statistical method. The more accurate the forecast result, the better suited the method is for forecasting rainfall. Through these methods, we aimed to determine which method is best for rainfall forecasting here.

17. The Systematic Bias of Ingestible Core Temperature Sensors Requires a Correction by Linear Regression.
Science.gov (United States)
Hunt, Andrew P; Bach, Aaron J E; Borg, David N; Costello, Joseph T; Stewart, Ian B
2017-01-01
An accurate measure of core body temperature is critical for monitoring individuals, groups and teams undertaking physical activity in situations of high heat stress or prolonged cold exposure. This study examined the range in systematic bias of ingestible temperature sensors compared to a certified and traceable reference thermometer. A total of 119 ingestible temperature sensors were immersed in a circulated water bath at five water temperatures (TEMP A: 35.12 ± 0.60°C, TEMP B: 37.33 ± 0.56°C, TEMP C: 39.48 ± 0.73°C, TEMP D: 41.58 ± 0.97°C, and TEMP E: 43.47 ± 1.07°C) along with a certified traceable reference thermometer. Thirteen sensors (10.9%) demonstrated a systematic bias > ±0.1°C, of which 4 (3.3%) were > ±0.5°C. Limits of agreement (95%) indicated that systematic bias would likely fall in the range of -0.14 to 0.26°C, highlighting that it is possible for temperatures measured between sensors to differ by more than 0.4°C. The proportion of sensors with systematic bias > ±0.1°C (10.9%) confirms that ingestible temperature sensors require correction to ensure their accuracy. An individualized linear correction achieved a mean systematic bias of 0.00°C and limits of agreement (95%) of 0.00-0.00°C, with 100% of sensors achieving ±0.1°C accuracy. Alternatively, a generalized linear function (Corrected Temperature (°C) = 1.00375 × Sensor Temperature (°C) - 0.205549), produced as the average slope and intercept of a sub-set of 51 sensors and excluding sensors with accuracy outside ±0.5°C, reduced the systematic bias to < ±0.1°C in 98.4% of the remaining sensors (n = 64). Correction of sensor temperature to a reference thermometer by linear function eliminates this systematic bias (individualized functions) or ensures systematic bias is within ±0.1°C in 98% of the sensors (generalized function).
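Entry 17 above publishes its generalized correction explicitly, so it can be written down directly; the tiny helper below applies that published linear function (the example reading is invented):

```python
def correct_sensor_temp(sensor_temp_c: float) -> float:
    """Generalized linear correction from Hunt et al. (entry 17):
    Corrected (°C) = 1.00375 * sensor reading (°C) - 0.205549."""
    return 1.00375 * sensor_temp_c - 0.205549

# Example: a raw ingestible-sensor reading of 38.20 °C (hypothetical)
raw = 38.20
print(f"raw = {raw:.2f} °C -> corrected = {correct_sensor_temp(raw):.2f} °C")
```

An individualized correction would instead fit a slope and intercept per sensor against the reference thermometer (e.g., with numpy.polyfit over the five bath temperatures), which is what achieves the 100% accuracy figure quoted in the entry.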
18. The Systematic Bias of Ingestible Core Temperature Sensors Requires a Correction by Linear Regression Directory of Open Access Journals (Sweden) Andrew P. Hunt 2017-04-01 Full Text Available An accurate measure of core body temperature is critical for monitoring individuals, groups and teams undertaking physical activity in situations of high heat stress or prolonged cold exposure. This study examined the range in systematic bias of ingestible temperature sensors compared to a certified and traceable reference thermometer. A total of 119 ingestible temperature sensors were immersed in a circulated water bath at five water temperatures (TEMP A: 35.12 ± 0.60°C, TEMP B: 37.33 ± 0.56°C, TEMP C: 39.48 ± 0.73°C, TEMP D: 41.58 ± 0.97°C, and TEMP E: 43.47 ± 1.07°C) along with a certified traceable reference thermometer. Thirteen sensors (10.9%) demonstrated a systematic bias > ±0.1°C, of which 4 (3.3%) were > ±0.5°C. Limits of agreement (95%) indicated that systematic bias would likely fall in the range of −0.14 to 0.26°C, highlighting that it is possible for temperatures measured between sensors to differ by more than 0.4°C. The proportion of sensors with systematic bias > ±0.1°C (10.9%) confirms that ingestible temperature sensors require correction to ensure their accuracy. An individualized linear correction achieved a mean systematic bias of 0.00°C and limits of agreement (95%) of 0.00–0.00°C, with 100% of sensors achieving ±0.1°C accuracy. Alternatively, a generalized linear function (Corrected Temperature (°C) = 1.00375 × Sensor Temperature (°C) − 0.205549), produced as the average slope and intercept of a sub-set of 51 sensors and excluding sensors with accuracy outside ±0.5°C, reduced the systematic bias to < ±0.1°C in 98.4% of the remaining sensors (n = 64). In conclusion, these data show that using an uncalibrated ingestible temperature sensor may provide inaccurate data that still appears to be statistically, physiologically, and clinically meaningful. Correction of sensor temperature to a reference thermometer by linear function eliminates this systematic bias (individualized functions) or ensures systematic bias is within ±0.1°C in 98% of the sensors (generalized function).
19. Searching for the main anti-bacterial components in artificial Calculus bovis using UPLC and microcalorimetry coupled with multi-linear regression analysis. Science.gov (United States) Zang, Qing-Ce; Wang, Jia-Bo; Kong, Wei-Jun; Jin, Cheng; Ma, Zhi-Jie; Chen, Jing; Gong, Qian-Feng; Xiao, Xiao-He 2011-12-01 The fingerprints of artificial Calculus bovis extracts from different solvents were established by ultra-performance liquid chromatography (UPLC), and the anti-bacterial activities of artificial C. bovis extracts on Staphylococcus aureus (S. aureus) growth were studied by microcalorimetry. The UPLC fingerprints were evaluated using hierarchical clustering analysis. Some quantitative parameters obtained from the thermogenic curves of S. aureus growth affected by artificial C. bovis extracts were analyzed using principal component analysis. The spectrum-effect relationships between UPLC fingerprints and anti-bacterial activities were investigated using multi-linear regression analysis. The results showed that peak 1 (taurocholate sodium), peak 3 (unknown compound), peak 4 (cholic acid), and peak 6 (chenodeoxycholic acid) are more significant than the other peaks, with standard parameter estimates of 0.453, −0.166, 0.749, and 0.025, respectively. Thus, cholic acid, taurocholate sodium, and chenodeoxycholic acid might be the major anti-bacterial components in artificial C. bovis. Altogether, this work provides a general model combining UPLC chromatography and anti-bacterial effect to study the spectrum-effect relationships of artificial C. bovis extracts, which can be used to discover the main anti-bacterial components in artificial C. bovis or other Chinese herbal medicines with anti-bacterial effects. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
20. Trace analysis of acids and bases by conductometric titration with multiparametric non-linear regression. Science.gov (United States) Coelho, Lúcia H G; Gutz, Ivano G R 2006-03-15 A chemometric method for the analysis of conductometric titration data was introduced to extend its applicability to lower concentrations and more complex acid-base systems. Auxiliary pH measurements were made during the titration to assist the calculation of the distribution of protonable species on the basis of known or guessed equilibrium constants. Conductivity values of each ionized or ionizable species possibly present in the sample were introduced into a general equation where the only unknown parameters were the total concentrations of (conjugated) bases and of strong electrolytes not involved in acid-base equilibria. All these concentrations were adjusted by a multiparametric nonlinear regression (NLR) method based on the Levenberg-Marquardt algorithm. This first conductometric titration method with NLR analysis (CT-NLR) was successfully applied to simulated conductometric titration data and to synthetic samples with multiple components at concentrations as low as those found in rainwater (approximately 10 μmol L⁻¹). It was possible to resolve and quantify mixtures containing a strong acid, formic acid, acetic acid, ammonium ion, bicarbonate and inert electrolyte with an accuracy of 5% or better.
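The CT-NLR idea, adjusting unknown total concentrations inside a conductivity model by Levenberg-Marquardt nonlinear regression, can be miniaturized as follows. This toy replaces the real species-distribution calculation with a logistic stand-in, so the model form, parameter values, and noise level are assumptions, not the published method.

import numpy as np
from scipy.optimize import curve_fit

# Toy model: conductivity = c_tot * ((1-f)*LAM[0] + f*LAM[1]), where f is a
# logistic "degree of titration" in the titrant volume v. LAM holds two
# illustrative molar conductivities; none of these numbers are from the paper.
LAM = (350.0, 80.0)

def conductivity(v, c_tot, k):
    f = 1.0 / (1.0 + np.exp(-k * (v - 5.0)))
    return c_tot * ((1.0 - f) * LAM[0] + f * LAM[1])

v = np.linspace(0.0, 10.0, 50)
rng = np.random.default_rng(1)
y = conductivity(v, 2e-5, 1.3) + rng.normal(0.0, 1e-4, v.size)

# curve_fit uses Levenberg-Marquardt by default for unbounded problems.
popt, _ = curve_fit(conductivity, v, y, p0=[1e-5, 1.0])
print("estimated c_tot and k:", popt)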
1. Relationships between the structure of wheat gluten and ACE inhibitory activity of hydrolysate: stepwise multiple linear regression analysis. Science.gov (United States) Zhang, Yanyan; Ma, Haile; Wang, Bei; Qu, Wenjuan; Wali, Asif; Zhou, Cunshan 2016-08-01 Ultrasound pretreatment of wheat gluten (WG) before enzymolysis can improve the angiotensin converting enzyme (ACE) inhibitory activity of the hydrolysates by altering the structure of the substrate proteins. Establishing a relationship between the structure of WG and the ACE inhibitory activity of the hydrolysates, in order to judge the end point of the ultrasonic pretreatment, is therefore vital. The results of stepwise multiple linear regression (MLR) showed that the contents of free sulfhydryl, α-helix, disulfide bond, surface hydrophobicity and random coil were significantly correlated with the ACE inhibitory activity of the hydrolysate, with standard partial regression coefficients of 3.729, −0.676, −0.252, 0.022 and 0.156, respectively. The R² of this model was 0.970. External validation showed that the stepwise MLR model could well predict the ACE inhibitory activity of the hydrolysate based on the contents of free sulfhydryl, α-helix, disulfide bond, surface hydrophobicity and random coil of WG before hydrolysis. A stepwise multiple linear regression model describing the quantitative relationships between the structure of WG and the ACE inhibitory activity of the hydrolysates was established. This model can be used to predict the endpoint of the ultrasonic pretreatment. © 2015 Society of Chemical Industry.
2. Bayesian quantile regression-based partially linear mixed-effects joint models for longitudinal data with multiple features. Science.gov (United States) Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara 2017-01-01 In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models used to analyze such complex longitudinal data are based on mean-regression, which fails to provide efficient estimates due to outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying the benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various data features of repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we first establish a Bayesian joint model that accounts for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.
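A full Bayesian mixed-effects joint model is well beyond a snippet, but the core idea of quantile regression, letting the covariate effect differ across quantiles of the response, can be shown with plain cross-sectional data (all values synthetic):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 1000
x = rng.uniform(0, 10, n)
# Heteroscedastic response: the spread grows with x, so the slope differs
# across quantiles even though the conditional mean is a straight line.
y = 1.0 + 0.5 * x + (0.2 + 0.1 * x) * rng.normal(size=n)

X = sm.add_constant(x)
for q in (0.25, 0.5, 0.75):
    fit = sm.QuantReg(y, X).fit(q=q)
    print(f"quantile {q}: intercept={fit.params[0]:.2f}, slope={fit.params[1]:.2f}")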
3. Implementasi Data Mining Estimasi Ketersediaan Lahan Pembuangan Sampah menggunakan Algoritma Simple Linear Regression Directory of Open Access Journals (Sweden) Robi Yanto 2018-04-01
4. A componential model of human interaction with graphs: 1. Linear regression modeling Science.gov (United States) Gillan, Douglas J.; Lewis, Robert 1994-01-01 Task analyses served as the basis for developing the Mixed Arithmetic-Perceptual (MA-P) model, which proposes (1) that people interacting with common graphs to answer common questions apply a set of component processes: searching for indicators, encoding the value of indicators, performing arithmetic operations on the values, making spatial comparisons among indicators, and responding; and (2) that the type of graph and the user's task determine the combination and order of the components applied (i.e., the processing steps). Two experiments investigated the prediction that response time will be linearly related to the number of processing steps according to the MA-P model. Subjects used line graphs, scatter plots, and stacked bar graphs to answer comparison questions and questions requiring arithmetic calculations. A one-parameter version of the model (with equal weights for all components) and a two-parameter version (with different weights for arithmetic and nonarithmetic processes) accounted for 76%-85% of individual subjects' variance in response time and 61%-68% of the variance taken across all subjects. The discussion addresses possible modifications in the MA-P model, alternative models, and design implications from the MA-P model.
5. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis. Science.gov (United States) Liu, Yan; Salvendy, Gavriel 2009-05-01 This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools are discussed and illustrated: correlation, ANOVA, linear regression, factor analysis, and linear discriminant analysis. It is shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanatory contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussion is restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement error, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experimental results, which the authors believe is critical to research progress in theory development and cumulative knowledge in the ergonomics field.
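The attenuation effect described in the measurement-error abstract is easy to reproduce numerically. The sketch below assumes the classical error model (independent additive noise with a chosen reliability) and shows how the observed correlation and regression slope shrink as reliability drops:

import numpy as np

rng = np.random.default_rng(2)
n = 100_000
true_x = rng.normal(size=n)
true_y = 0.5 * true_x + rng.normal(scale=np.sqrt(0.75), size=n)  # true r = 0.5

def observed(signal, reliability):
    # Add noise so that var(signal) / var(observed) equals the reliability.
    noise_var = (1.0 - reliability) / reliability
    return signal + rng.normal(scale=np.sqrt(noise_var), size=signal.size)

for rel in (1.0, 0.8, 0.5):
    x, y = observed(true_x, rel), observed(true_y, rel)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.polyfit(x, y, 1)[0]
    print(f"reliability {rel:.1f}: observed r = {r:.3f}, slope = {slope:.3f}")

With both variables at reliability 0.5, the observed correlation and slope are roughly halved, exactly the kind of distortion the abstract warns about.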
6. Genomic-Enabled Prediction Based on Molecular Markers and Pedigree Using the Bayesian Linear Regression Package in R Directory of Open Access Journals (Sweden) Paulino Pérez 2010-09-01 Full Text Available The availability of dense molecular markers has made possible the use of genomic selection in plant and animal breeding. However, models for genomic selection pose several computational and statistical challenges and require specialized computer programs, not always available to the end user and not yet implemented in standard statistical software. The R package BLR (Bayesian Linear Regression) implements several statistical procedures (e.g., Bayesian Ridge Regression, Bayesian LASSO) in a unified framework that allows including marker genotypes and pedigree data jointly. This article describes the classes of models implemented in the BLR package and illustrates their use through examples. Some challenges faced when applying genomic-enabled selection, such as model choice, evaluation of predictive ability through cross-validation, and choice of hyper-parameters, are also addressed.
7. pKa prediction for acidic phosphorus-containing compounds using multiple linear regression with computational descriptors. Science.gov (United States) Yu, Donghai; Du, Ruobing; Xiao, Ji-Chang 2016-07-05 Ninety-six acidic phosphorus-containing molecules with pKa values from 1.88 to 6.26 were collected and divided into training and test sets by random sampling. Structural parameters were obtained by density functional theory calculations on the molecules. The relationship between the experimental pKa values and the structural parameters was obtained by multiple linear regression fitting on the training set and tested with the test set; the R² values were 0.974 and 0.966 for the training and test sets, respectively. This regression equation, which quantitatively describes the influence of structural parameters on pKa and can be used to predict pKa values of similar structures, is significant for the design of new acidic phosphorus-containing extractants. © 2016 Wiley Periodicals, Inc.
8. Is it the intervention or the students? Using linear regression to control for student characteristics in undergraduate STEM education research. Science.gov (United States) Theobald, Roddy; Freeman, Scott 2014-01-01 Although researchers in undergraduate science, technology, engineering, and mathematics education are currently using several methods to analyze learning gains from pre- and posttest data, the most commonly used approaches have significant shortcomings. Chief among these is the inability to distinguish whether differences in learning gains are due to the effect of an instructional intervention or to differences in student characteristics when students cannot be assigned to control and treatment groups at random. Using pre- and posttest scores from an introductory biology course, we illustrate how the methods currently in wide use can lead to erroneous conclusions, and how multiple linear regression offers an effective framework for distinguishing the impact of an instructional intervention from the impact of student characteristics on test score gains. In general, we recommend that researchers always use student-level regression models that control for possible differences in student ability and preparation to estimate the effect of any nonrandomized instructional intervention on student performance.
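The regression framework recommended above condenses to one model: posttest score on pretest score plus a treatment indicator. The class data below are simulated with self-selection into treatment, so all numbers are illustrative only; the adjustment largely, though not perfectly, removes the selection bias that inflates a naive gain comparison.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
ability = rng.normal(size=n)
treated = (ability + rng.normal(size=n) > 0).astype(float)   # self-selection
pre = 60 + 8 * ability + rng.normal(scale=4, size=n)
post = pre + 5 * treated + 2 * ability + rng.normal(scale=4, size=n)  # effect = 5

# Naive comparison of mean gains is inflated because treated students are abler.
gain = post - pre
print("naive gain difference:",
      round(gain[treated == 1].mean() - gain[treated == 0].mean(), 2))

# Regression adjustment: post ~ const + pre + treated.
X = sm.add_constant(np.column_stack([pre, treated]))
fit = sm.OLS(post, X).fit()
print("regression-adjusted treatment effect:", round(fit.params[2], 2))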
9. Two-Stage Method Based on Local Polynomial Fitting for a Linear Heteroscedastic Regression Model and Its Application in Economics Directory of Open Access Journals (Sweden) Liyun Su 2012-01-01 Full Text Available We introduce the extension of local polynomial fitting to the linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the nonparametric technique of local polynomial estimation, we do not need to know the heteroscedastic function, and can therefore improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we focus on the comparison of parameters and reach an optimal fitting. We also verify the asymptotic normality of the parameters based on numerical simulations. Finally, this approach is applied to a case in economics, and it indicates that our method is effective in finite-sample situations.
10. New insights into the nature of cerebellar-dependent eyeblink conditioning deficits in schizophrenia: A hierarchical linear modeling approach Directory of Open Access Journals (Sweden) Amanda R Bolbecker 2016-01-01 Full Text Available Evidence of cerebellar dysfunction in schizophrenia has mounted over the past several decades, emerging from neuroimaging, neuropathological, and behavioral studies. Consistent with these findings, cerebellar-dependent delay eyeblink conditioning (dEBC) deficits have been identified in schizophrenia. While repeated-measures analysis of variance (ANOVA) is traditionally used to analyze dEBC data, hierarchical linear modeling (HLM) more reliably describes change over time by accounting for the dependence in repeated-measures data. This analysis approach is well suited to dEBC data because it has less restrictive assumptions and allows unequal variances. The current study examined dEBC, measured with electromyography in a single-cue tone paradigm, in an age-matched sample of schizophrenia participants and healthy controls (N = 56 per group) using HLM. Subjects participated in 90 trials (10 blocks) of dEBC, during which a 400 ms tone co-terminated with a 50 ms air puff delivered to the left eye. Each block also contained 1 tone-alone trial. The resulting block averages of dEBC data were fitted to a 3-parameter logistic model in HLM, revealing significant differences between the schizophrenia and control groups on asymptote and inflection point, but not slope. These findings suggest that while the learning rate is not significantly different compared to controls, associative learning begins to level off later and a lower ultimate level of associative learning is achieved in schizophrenia. Given the large sample size in the present study, HLM may provide a more nuanced and definitive analysis of differences between schizophrenia and controls on dEBC.
11. Comparison of multiple linear regression and artificial neural network in developing the objective functions of the orthopaedic screws. Science.gov (United States) Hsu, Ching-Chi; Lin, Jinn; Chao, Ching-Kong 2011-12-01 Optimizing orthopaedic screws can greatly improve their biomechanical performance. However, a methodical design optimization approach requires a long time to search for the best design, so surrogate objective functions for the orthopaedic screws should be developed accurately. To our knowledge, no study has evaluated the strengths and limitations of surrogate methods in developing the objective functions of orthopaedic screws. Three-dimensional finite element models for both tibial locking screws and spinal pedicle screws were constructed and analyzed. Then, the learning data were prepared according to the arrangement of the Taguchi orthogonal array, and the verification data were selected by randomized selection. Finally, the surrogate objective functions were developed using either multiple linear regression or an artificial neural network. The applicability and accuracy of these surrogate methods were evaluated and discussed. The multiple linear regression method could successfully construct the objective function for the tibial locking screws, but it failed to develop the objective function for the spinal pedicle screws. The artificial neural network method showed a greater capacity for prediction in developing the objective functions for both the tibial locking screws and the spinal pedicle screws than the multiple linear regression method. The artificial neural network method may be a useful option for developing the objective functions of orthopaedic screws with greater structural complexity. The surrogate objective functions of orthopaedic screws can effectively decrease the time and effort required for the design optimization process. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
12. Construction of multiple linear regression models using blood biomarkers for selecting against abdominal fat traits in broilers. Science.gov (United States) Dong, J Q; Zhang, X Y; Wang, S Z; Jiang, X F; Zhang, K; Ma, G W; Wu, M Q; Li, H; Zhang, H 2018-01-01 Plasma very low-density lipoprotein (VLDL) can be used to select for low body fat or abdominal fat (AF) in broilers, but its correlation with AF is limited. We investigated whether any other biochemical indicator can be used in combination with VLDL for a better selective effect. Nineteen plasma biochemical indicators were measured in male chickens from the Northeast Agricultural University broiler lines divergently selected for AF content (NEAUHLF) in the fed state at 46 and 48 d of age. The average concentration of every parameter over the 2 d was used for statistical analysis. Levels of these 19 plasma biochemical parameters were compared between the lean and fat lines. The phenotypic correlations between these plasma biochemical indicators and AF traits were analyzed. Then, multiple linear regression models were constructed to select the best model for selecting against AF content, and the heritabilities of the plasma indicators contained in the best models were estimated. The results showed that 11 plasma biochemical indicators (triglycerides, total bile acid, total protein, globulin, albumin/globulin, aspartate transaminase, alanine transaminase, gamma-glutamyl transpeptidase, uric acid, creatinine, and VLDL) differed significantly between the lean and fat lines (P < 0.05). Multiple linear regression models based on albumin/globulin, VLDL, triglycerides, globulin, total bile acid, and uric acid had a higher R² (0.73) than the model based only on VLDL (0.21). The plasma parameters included in the best models had moderate heritability estimates (0.21 ≤ h² ≤ 0.43). These results indicate that these multiple linear regression models can be used to select for lean broiler chickens. © 2017 Poultry Science Association Inc.
13. SU-G-BRA-08: Diaphragm Motion Tracking Based On KV CBCT Projections with a Constrained Linear Regression Optimization Energy Technology Data Exchange (ETDEWEB) Wei, J [City College of New York, New York, NY (United States)]; Chao, M [The Mount Sinai Medical Center, New York, NY (United States)] 2016-06-15 Purpose: To develop a novel strategy to extract the respiratory motion of the thoracic diaphragm from kilovoltage cone beam computed tomography (CBCT) projections by a constrained linear regression optimization technique. Methods: A parabolic function was identified as the geometric model and was employed to fit the shape of the diaphragm on the CBCT projections. The search was initialized by five manually placed seeds on a pre-selected projection image. Temporal redundancies (the enabling phenomenology in video compression and encoding techniques) inherent in the dynamic properties of the diaphragm motion were integrated with the geometrical shape of the diaphragm boundary and an associated algebraic constraint that significantly reduced the search space of viable parabolic parameters; the resulting problem can be optimized effectively by a constrained linear regression approach on the subsequent projections. The innovative algebraic constraints, stipulating the kinetic range of the motion, and the spatial constraint, preventing any unphysical deviations, were able to yield the optimal contour of the diaphragm with minimal initialization. The algorithm was assessed with a fluoroscopic movie acquired at a fixed anterior-posterior direction and with kilovoltage CBCT projection image sets from four lung and two liver patients. The automatic tracing by the proposed algorithm and manual tracking by a human operator were compared in both the space and frequency domains. Results: The error between the estimated and manual detections for the fluoroscopic movie was 0.54 mm with a standard deviation (SD) of 0.45 mm, while the average error for the CBCT projections was 0.79 mm with an SD of 0.64 mm for all enrolled patients. The submillimeter accuracy outcome exhibits the promise of the proposed constrained linear regression approach to track the diaphragm motion on rotational projection images. Conclusion: The new algorithm will provide a potential solution to rendering diaphragm motion and ultimately
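Because the parabola is linear in its coefficients, the constrained fit at the heart of the diaphragm tracker can be imitated with bounded linear least squares. The edge points and the coefficient bounds below are invented; the published algorithm derives its constraints from the motion's kinetic range rather than from fixed boxes.

import numpy as np
from scipy.optimize import lsq_linear

# Hypothetical edge points (u, v) detected near the diaphragm on one
# projection; fit v = a*u**2 + b*u + c with box constraints keeping the
# parabola's opening and apex in a plausible range.
rng = np.random.default_rng(5)
u = np.linspace(-1.0, 1.0, 40)
v = 0.8 * u**2 + 0.1 * u + 2.0 + rng.normal(scale=0.05, size=u.size)

A = np.column_stack([u**2, u, np.ones_like(u)])
res = lsq_linear(A, v, bounds=([0.2, -0.5, 1.0], [2.0, 0.5, 3.0]))  # a, b, c
a, b, c = res.x
print(f"fitted parabola: a={a:.3f}, b={b:.3f}, c={c:.3f}")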
14. SU-G-BRA-08: Diaphragm Motion Tracking Based On KV CBCT Projections with a Constrained Linear Regression Optimization International Nuclear Information System (INIS) Wei, J; Chao, M 2016-01-01 Purpose: To develop a novel strategy to extract the respiratory motion of the thoracic diaphragm from kilovoltage cone beam computed tomography (CBCT) projections by a constrained linear regression optimization technique. Methods: A parabolic function was identified as the geometric model and was employed to fit the shape of the diaphragm on the CBCT projections. The search was initialized by five manually placed seeds on a pre-selected projection image. Temporal redundancies (the enabling phenomenology in video compression and encoding techniques) inherent in the dynamic properties of the diaphragm motion were integrated with the geometrical shape of the diaphragm boundary and an associated algebraic constraint that significantly reduced the search space of viable parabolic parameters; the resulting problem can be optimized effectively by a constrained linear regression approach on the subsequent projections. The innovative algebraic constraints, stipulating the kinetic range of the motion, and the spatial constraint, preventing any unphysical deviations, were able to yield the optimal contour of the diaphragm with minimal initialization. The algorithm was assessed with a fluoroscopic movie acquired at a fixed anterior-posterior direction and with kilovoltage CBCT projection image sets from four lung and two liver patients. The automatic tracing by the proposed algorithm and manual tracking by a human operator were compared in both the space and frequency domains. Results: The error between the estimated and manual detections for the fluoroscopic movie was 0.54 mm with a standard deviation (SD) of 0.45 mm, while the average error for the CBCT projections was 0.79 mm with an SD of 0.64 mm for all enrolled patients. The submillimeter accuracy outcome exhibits the promise of the proposed constrained linear regression approach to track the diaphragm motion on rotational projection images. Conclusion: The new algorithm will provide a potential solution to rendering diaphragm motion and ultimately
15. An Investigation of the Fit of Linear Regression Models to Data from an SAT[R] Validity Study. Research Report 2011-3 Science.gov (United States) Kobrin, Jennifer L.; Sinharay, Sandip; Haberman, Shelby J.; Chajewski, Michael 2011-01-01 This study examined the adequacy of a multiple linear regression model for predicting first-year college grade point average (FYGPA) using SAT[R] scores and high school grade point average (HSGPA). A variety of techniques, both graphical and statistical, were used to examine whether it is possible to improve on the linear regression model. The results…
16. U.S. Army Armament Research, Development and Engineering Center Grain Evaluation Software to Numerically Predict Linear Burn Regression for Solid Propellant Grain Geometries Science.gov (United States) 2017-10-01 Only repeated report cover-page fragments survive in the source (an author given name "Brian", the Munitions Engineering Technology Center, Picatinny, and the distribution statement "distribution is unlimited"); no abstract is recoverable.
17. Relationship between rice yield and climate variables in southwest Nigeria using multiple linear regression and support vector machine analysis Science.gov (United States) Oguntunde, Philip G.; Lischeid, Gunnar; Dietrich, Ottfried 2018-03-01 This study examines the variations of climate variables and rice yield and quantifies the relationships among them using multiple linear regression, principal component analysis, and support vector machine (SVM) analysis in southwest Nigeria. The climate and yield data used were for a period of 36 years, between 1980 and 2015. Similar to the observed decrease (P 1 and explained 83.1% of the total variance of the predictor variables. The SVM regression function using the scores of the first principal component explained about 75% of the variance in rice yield data, and linear regression about 64%. SVM regression between annual solar radiation values and yield explained 67% of the variance. Only the first component of the principal component analysis (PCA) exhibited a clear long-term trend and sometimes short-term variance similar to that of rice yield. Short-term fluctuations of the scores of PC1 are closely coupled to those of rice yield during the 1986-1993 and 2006-2013 periods, thereby revealing the inter-annual sensitivity of rice production to climate variability. Solar radiation stands out as the climate variable of highest influence on rice yield, and the influence was especially strong during the monsoon and post-monsoon periods, which correspond to the vegetative, booting, flowering, and grain-filling stages in the study area. The outcome is expected to provide a more in-depth, region-specific climate-rice linkage for the screening of better cultivars that can respond positively to future climate fluctuations, as well as information that may help optimize planting dates for improved radiation use efficiency in the study area.
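The rice-yield pipeline (principal components of the climate variables, then SVM regression versus a straight line on PC1) maps onto a few lines of scikit-learn. Everything below is synthetic; the latent-driver construction merely mimics a situation where PC1 carries a nonlinear signal, which is when SVR plausibly beats linear regression as reported:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(6)
n_years, n_vars = 36, 6
latent = rng.normal(size=n_years)                       # shared climate driver
climate = (np.outer(latent, rng.uniform(0.6, 1.0, n_vars))
           + rng.normal(scale=0.4, size=(n_years, n_vars)))
yield_t = latent**2 + 0.5 * latent + rng.normal(scale=0.2, size=n_years)

pc1 = PCA(n_components=1).fit_transform(climate)        # scores of PC1
svr = SVR(kernel="rbf", C=10.0).fit(pc1, yield_t)
lin = LinearRegression().fit(pc1, yield_t)
print("SVR R2   :", round(r2_score(yield_t, svr.predict(pc1)), 2))
print("Linear R2:", round(r2_score(yield_t, lin.predict(pc1)), 2))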
18. Relationship between rice yield and climate variables in southwest Nigeria using multiple linear regression and support vector machine analysis. Science.gov (United States) Oguntunde, Philip G; Lischeid, Gunnar; Dietrich, Ottfried 2018-03-01 This study examines the variations of climate variables and rice yield and quantifies the relationships among them using multiple linear regression, principal component analysis, and support vector machine (SVM) analysis in southwest Nigeria. The climate and yield data used were for a period of 36 years, between 1980 and 2015. Similar to the observed decrease (P 1 and explained 83.1% of the total variance of the predictor variables. The SVM regression function using the scores of the first principal component explained about 75% of the variance in rice yield data, and linear regression about 64%. SVM regression between annual solar radiation values and yield explained 67% of the variance. Only the first component of the principal component analysis (PCA) exhibited a clear long-term trend and sometimes short-term variance similar to that of rice yield. Short-term fluctuations of the scores of PC1 are closely coupled to those of rice yield during the 1986-1993 and 2006-2013 periods, thereby revealing the inter-annual sensitivity of rice production to climate variability. Solar radiation stands out as the climate variable of highest influence on rice yield, and the influence was especially strong during the monsoon and post-monsoon periods, which correspond to the vegetative, booting, flowering, and grain-filling stages in the study area. The outcome is expected to provide a more in-depth, region-specific climate-rice linkage for the screening of better cultivars that can respond positively to future climate fluctuations, as well as information that may help optimize planting dates for improved radiation use efficiency in the study area.
19. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression Science.gov (United States) Lo, Ching F. 1999-01-01 The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by applying it to a set of wind tunnel test data in constructing a response surface and estimating its precision intervals.
20. Fragility estimation for seismically isolated nuclear structures by high confidence low probability of failure values and bi-linear regression International Nuclear Information System (INIS) Carausu, A. 1996-01-01 A method for the fragility estimation of seismically isolated nuclear power plant structures is proposed. The relationship between the ground motion intensity parameter (e.g. peak ground velocity or peak ground acceleration) and the response of isolated structures is expressed in terms of a bi-linear regression line, whose coefficients are estimated by the least-squares method in terms of available data on seismic input and structural response. The notion of the high confidence low probability of failure (HCLPF) value is also used for deriving compound fragility curves for coupled subsystems. (orig.)
1. A multiple linear regression analysis of hot corrosion attack on a series of nickel base turbine alloys Science.gov (United States) Barrett, C. A. 1985-01-01 Multiple linear regression analysis was used to determine an equation for estimating hot corrosion attack for a series of Ni-base cast turbine alloys. The U transform (i.e., U = sin⁻¹[(%A/100)^(1/2)]) was shown to give the best estimate of the dependent variable, y. A complete second-degree equation is described for the "centered" weight chemistries for the elements Cr, Al, Ti, Mo, W, Cb, Ta, and Co. In addition, linear terms for the minor elements C, B, and Zr were added, giving a basic 47-term equation. The best reduced equation was determined by the stepwise selection method, with essentially 13 terms. The Cr term was found to be the most important, accounting for 60 percent of the explained variability in hot corrosion attack.
2. Effective Surfactants Blend Concentration Determination for O/W Emulsion Stabilization by Two Nonionic Surfactants by Simple Linear Regression. Science.gov (United States) Hassan, A K 2015-01-01 In this work, O/W emulsion sets were prepared using different concentrations of two nonionic surfactants. The two surfactants, Tween 80 (HLB = 15.0) and Span 80 (HLB = 4.3), were used in a fixed proportion of 0.55:0.45, respectively. The HLB value of the surfactant blends was fixed at 10.185. The surfactant blend concentration ranged from 3% up to 19%. For each O/W emulsion set the conductivity was measured at room temperature (25 ± 2°), 40, 50, 60, 70 and 80°. Applying simple linear regression (least squares) statistical analysis to the temperature-conductivity data determines the effective surfactant blend concentration required for preparing the most stable O/W emulsion. These results were confirmed by centrifugation stability testing and by phase inversion temperature range measurements. The results indicated that the relation representing the most stable O/W emulsion has the strongest direct linear relationship between temperature and conductivity. This relationship is linear up to 80°. This work shows that the most stable O/W emulsion is identified via the maximum R² value obtained by applying the simple linear regression least squares method to the temperature-conductivity data up to 80°; in addition, the true maximum slope is represented by the equation with the maximum R² value. Because the conditions change in a more complex formulation, the method for determining the effective surfactant blend concentration was verified by applying it to a more complex formulation, a 2% O/W miconazole nitrate cream, and the results indicate its reproducibility.
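The blend-selection rule in the surfactant study (fit conductivity against temperature for each blend and keep the blend with the highest R²) is a one-liner per blend. The conductivity readings below are invented; only the temperature set points echo the abstract:

import numpy as np
from scipy.stats import linregress

temps = np.array([25, 40, 50, 60, 70, 80], dtype=float)
conductivity = {  # hypothetical readings per surfactant blend concentration
    "3%":  np.array([110.0, 118.0, 121.0, 130.0, 133.0, 141.0]),
    "11%": np.array([100.0, 115.0, 125.0, 134.0, 146.0, 155.0]),
    "19%": np.array([120.0, 119.0, 131.0, 128.0, 144.0, 139.0]),
}

for blend, y in conductivity.items():
    fit = linregress(temps, y)
    print(f"{blend}: slope={fit.slope:.2f}, R2={fit.rvalue**2:.4f}")

best = max(conductivity, key=lambda k: linregress(temps, conductivity[k]).rvalue ** 2)
print("most stable blend concentration:", best)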
3. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface. Science.gov (United States) Fisz, Jacek J 2006-12-07 The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach that exploits all the advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and accelerates the optimization process considerably, because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of a kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi², obtained from the Taylor series expansion of chi², is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi-linear
4. Plateletpheresis efficiency and mathematical correction of software-derived platelet yield prediction: A linear regression and ROC modeling approach. Science.gov (United States) Jaime-Pérez, José Carlos; Jiménez-Castillo, Raúl Alberto; Vázquez-Hernández, Karina Elizabeth; Salazar-Riojas, Rosario; Méndez-Ramírez, Nereida; Gómez-Almaguer, David 2017-10-01 Advances in automated cell separators have improved the efficiency of plateletpheresis and the possibility of obtaining double products (DP). We assessed the accuracy of the cell processor's predicted platelet (PLT) yields, with the goal of better predicting DP collections. This retrospective proof-of-concept study included 302 plateletpheresis procedures performed on a Trima Accel v6.0 at the apheresis unit of a hematology department. Donor variables, software-predicted yield and actual PLT yield were statistically evaluated. The software prediction was optimized by linear regression analysis, and its optimal cut-off for obtaining a DP was assessed by receiver operating characteristic (ROC) curve modeling. Three hundred and two plateletpheresis procedures were performed; on 271 (89.7%) occasions the donors were men, and on 31 (10.3%) women. The pre-donation PLT count had the best direct correlation with the actual PLT yield (r = 0.486). A simple correction derived from linear regression analysis accurately corrected this underestimation, and ROC analysis identified a precise cut-off to reliably predict a DP. © 2016 Wiley Periodicals, Inc.
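Choosing the double-product cut-off by ROC analysis, as in the plateletpheresis study, might look like the following; the yields, the logistic link used to simulate outcomes, and the Youden-J criterion for picking the threshold are all assumptions of this sketch rather than details of the paper.

import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(12)
n = 300
predicted_yield = rng.normal(6.0, 1.5, n)            # toy scale, x10^11 PLT
p_dp = 1.0 / (1.0 + np.exp(-(predicted_yield - 6.5) * 2.0))
actual_dp = (rng.uniform(size=n) < p_dp).astype(int)  # 1 = double product

fpr, tpr, thresholds = roc_curve(actual_dp, predicted_yield)
best = np.argmax(tpr - fpr)                           # Youden's J statistic
print("optimal predicted-yield cut-off:", round(thresholds[best], 2))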
5. Performance of an Axisymmetric Rocket Based Combined Cycle Engine During Rocket Only Operation Using Linear Regression Analysis Science.gov (United States) Smith, Timothy D.; Steffen, Christopher J., Jr.; Yungster, Shaye; Keller, Dennis J. 1998-01-01 The all-rocket mode of operation is shown to be a critical factor in the overall performance of a rocket based combined cycle (RBCC) vehicle. An axisymmetric RBCC engine was used to determine specific impulse efficiency values based upon both full flow and gas generator configurations. Design of experiments methodology was used to construct a test matrix, and multiple linear regression analysis was used to build parametric models. The main parameters investigated in this study were: rocket chamber pressure, rocket exit area ratio, injected secondary flow, mixer-ejector inlet area, mixer-ejector area ratio, and mixer-ejector length-to-inlet diameter ratio. A perfect gas computational fluid dynamics analysis, using both the Spalart-Allmaras and k-omega turbulence models, was performed with the NPARC code to obtain values of vacuum specific impulse. Results from the multiple linear regression analysis showed that for both the full flow and gas generator configurations, increasing the mixer-ejector area ratio and rocket area ratio increases performance, while increasing the mixer-ejector inlet area ratio and mixer-ejector length-to-diameter ratio decreases performance. Increasing injected secondary flow increased performance for the gas generator analysis but was not statistically significant for the full flow analysis. Chamber pressure was found to be not statistically significant.
6. pulver: an R package for parallel ultra-rapid p-value computation for linear regression interaction terms. Science.gov (United States) Molnos, Sophie; Baumbach, Clemens; Wahl, Simone; Müller-Nurasyid, Martina; Strauch, Konstantin; Wang-Sattler, Rui; Waldenberger, Melanie; Meitinger, Thomas; Adamski, Jerzy; Kastenmüller, Gabi; Suhre, Karsten; Peters, Annette; Grallert, Harald; Theis, Fabian J; Gieger, Christian 2017-09-29 Genome-wide association studies allow us to understand the genetics of complex diseases. Human metabolism provides information about the disease-causing mechanisms, so it is usual to investigate the associations between genetic variants and metabolite levels. However, only considering genetic variants and their effects on one trait ignores the possible interplay between different "omics" layers. Existing tools only consider single-nucleotide polymorphism (SNP)-SNP interactions, and no practical tool is available for large-scale investigations of the interactions between pairs of arbitrary quantitative variables. We developed an R package called pulver to compute p-values for the interaction term in a very large number of linear regression models. Comparisons based on simulated data showed that pulver is much faster than the existing tools. This is achieved by using the correlation coefficient to test the null hypothesis, which avoids the costly computation of matrix inversions. Additional tricks are a rearrangement of the iteration order through the different "omics" layers, and implementation of the algorithm in the fast programming language C++. Furthermore, we applied our algorithm to data from the German KORA study to investigate a real-world problem involving the interplay among DNA methylation, genetic variants, and metabolite levels. The pulver package is a convenient and rapid tool for screening huge numbers of linear regression models for significant interaction terms in arbitrary pairs of quantitative variables. pulver is written in R and C++, and can be downloaded freely from CRAN at https://cran.r-project.org/web/packages/pulver/ .
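The correlation trick that makes pulver fast can be reproduced directly: by the Frisch-Waugh-Lovell theorem, the t-test on the interaction slope in y ~ 1 + x + z + x·z equals a t-test on the correlation of the residuals of y and of x·z after projecting out (1, x, z). A minimal NumPy/SciPy version follows (pulver itself is written in R and C++; this is an illustrative re-derivation, not its code):

import numpy as np
from scipy import stats

def interaction_pvalue(y, x, z):
    """p-value for the x*z term in y ~ 1 + x + z + x*z, via residual
    correlation; matches the OLS t-test on the interaction coefficient."""
    n = y.size
    base = np.column_stack([np.ones(n), x, z])

    def resid(v):  # residuals after regressing v on (1, x, z)
        coef, *_ = np.linalg.lstsq(base, v, rcond=None)
        return v - base @ coef

    ry, rxz = resid(y), resid(x * z)
    r = np.dot(ry, rxz) / np.sqrt(np.dot(ry, ry) * np.dot(rxz, rxz))
    t = r * np.sqrt((n - 4) / (1.0 - r * r))   # 4 fitted coefficients
    return 2.0 * stats.t.sf(abs(t), df=n - 4)

rng = np.random.default_rng(7)
n = 500
x, z = rng.normal(size=n), rng.normal(size=n)
y = 0.3 * x + 0.2 * z + 0.15 * x * z + rng.normal(size=n)
print("interaction p-value:", interaction_pvalue(y, x, z))

Because only dot products of residual vectors are needed, the per-pair cost stays linear in the sample size, which is the point of the approach when millions of variable pairs are screened.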
7. Early Parallel Activation of Semantics and Phonology in Picture Naming: Evidence from a Multiple Linear Regression MEG Study. Science.gov (United States) Miozzo, Michele; Pulvermüller, Friedemann; Hauk, Olaf 2015-10-01 The time course of brain activation during word production has become an area of increasingly intense investigation in cognitive neuroscience. The predominant view has been that semantic and phonological processes are activated sequentially, at about 150 and 200-400 ms after picture onset. Although evidence from prior studies has been interpreted as supporting this view, these studies were arguably not ideally suited to detect early brain activation of semantic and phonological processes. We here used a multiple linear regression approach to magnetoencephalography (MEG) analysis of picture naming in order to investigate early effects of variables specifically related to visual, semantic, and phonological processing. This was combined with distributed minimum-norm source estimation and region-of-interest analysis. Brain activation associated with visual image complexity appeared in occipital cortex at about 100 ms after picture presentation onset. At about 150 ms, semantic variables became physiologically manifest in left frontotemporal regions. In the same latency range, we found an effect of phonological variables in the left middle temporal gyrus. Our results demonstrate that multiple linear regression analysis is sensitive to early effects of multiple psycholinguistic variables in picture naming. Crucially, our results suggest that access to phonological information might begin in parallel with semantic processing around 150 ms after picture onset. © The Author 2014. Published by Oxford University Press.
8. Multiple linear stepwise regression of liver lipid levels: proton MR spectroscopy study in vivo at 3.0 T International Nuclear Information System (INIS) Xu Li, Liang Changhong, Xiao Yuanqiu, Zhang Zhonglin 2010-01-01 Objective: To analyze the correlations between liver lipid levels determined in vivo by 3.0 T ¹H-MRS and their influencing factors, using multiple linear stepwise regression. Methods: The prospective ¹H-MRS study of the liver was performed on a 3.0 T system with eight-channel torso phased-array coils, using the PRESS sequence. Forty-four volunteers were enrolled in this study. Liver spectra were collected with a TR of 1500 ms, a TE of 30 ms, a volume of interest of 2 cm × 2 cm × 2 cm, and an NSA of 64. The acquired raw proton MRS data were processed using the software program SAGE. For each MRS measurement, using water as the internal reference, the amplitude of the lipid signal was normalized to the sum of the signals from lipid and water to obtain the percentage of lipid within the liver. Height, weight, age, BMI, linewidth and water suppression were recorded, and Pearson analysis was applied to test their relationships. Multiple linear stepwise regression was used to build a statistical model for the prediction of liver lipid content. Results: Age (39.1 ± 12.6) years, body weight (64.4 ± 10.4) kg, BMI (23.3 ± 3.1) kg/m², linewidth (18.9 ± 4.4) and water suppression (90.7 ± 6.5)% had significant correlations with liver lipid content (0.00 to 0.96%, median 0.02%), with r = 0.11, 0.44, 0.40, 0.52 and −0.73, respectively (P < 0.05). However, only age, BMI, linewidth, and water suppression entered the multiple linear regression equation. The liver lipid content prediction equation was: Y = 1.395 − (0.021 × water suppression) + (0.022 × BMI) + (0.014 × linewidth) − (0.004 × age), with a coefficient of determination of 0.613 and a corrected coefficient of determination of 0.59. Conclusion: The regression model fits well, since the variables age, BMI, linewidth, and water suppression can explain about 60% of the changes in liver lipid content. (authors)
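The published prediction equation can be wrapped in a small helper for reuse; the function encodes the coefficients exactly as given above, while the example inputs are simply the cohort means reported in the abstract (units as in the study):

def predicted_liver_lipid(water_suppression, bmi, linewidth, age):
    """Liver lipid content (%) from the stepwise regression equation above.
    Inputs: water suppression in %, BMI in kg/m^2, linewidth and age as
    reported in the study."""
    return (1.395
            - 0.021 * water_suppression
            + 0.022 * bmi
            + 0.014 * linewidth
            - 0.004 * age)

# Hypothetical volunteer at the cohort means reported in the abstract:
print(round(predicted_liver_lipid(water_suppression=90.7, bmi=23.3,
                                  linewidth=18.9, age=39.1), 3))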
9. The importance of trait emotional intelligence and feelings in the prediction of perceived and biological stress in adolescents: hierarchical regressions and fsQCA models. Science.gov (United States) 2017-07-01 The purpose of this study is to analyze the combined effects of trait emotional intelligence (EI) and feelings on healthy adolescents' stress. Identifying the extent to which adolescent stress varies with trait emotional differences and the feelings of adolescents is of considerable interest for the development of intervention programs fostering youth well-being. To attain this goal, self-reported questionnaires (perceived stress, trait EI, and positive/negative feelings) and biological measures of stress (hair cortisol concentrations, HCC) were collected from 170 adolescents (12-14 years old). Two different methodologies were employed: hierarchical regression models and a fuzzy-set qualitative comparative analysis (fsQCA). The results support trait EI as a protective factor against stress in healthy adolescents and suggest that feelings reinforce this relation. However, the debate continues regarding the possibility of optimal levels of trait EI for effective and adaptive emotional management, particularly in the emotional attention and clarity dimensions and for female adolescents.
10. Substituting random forest for multiple linear regression improves binding affinity prediction of scoring functions: Cyscore as a case study. Science.gov (United States) Li, Hongjian; Leung, Kwong-Sak; Wong, Man-Hon; Ballester, Pedro J 2014-08-27 State-of-the-art protein-ligand docking methods are generally limited by the traditionally low accuracy of their scoring functions, which are used to predict binding affinity and are thus vital for discriminating between active and inactive compounds. Despite intensive research over the years, classical scoring functions have reached a plateau in their predictive performance. These assume a predetermined additive functional form for some sophisticated numerical features, and use standard multivariate linear regression (MLR) on experimental data to derive the coefficients. In this study we show that such a simple functional form is detrimental to the prediction performance of a scoring function, and that replacing linear regression with machine learning techniques like random forest (RF) can improve prediction performance. We investigate the conditions for applying RF under various contexts and find that, given sufficient training samples, RF manages to comprehensively capture the non-linearity between structural features and measured binding affinities. Incorporating more structural features and training with more samples can both boost RF performance. In addition, we analyze the importance of structural features for binding affinity prediction using the RF variable importance tool. Lastly, we use Cyscore, a top-performing empirical scoring function, as a baseline for a comparison study. Machine-learning scoring functions are fundamentally different from classical scoring functions because the former circumvent the fixed functional form relating structural features to binding affinities. RF, but not MLR, can effectively exploit more structural features and more training samples, leading to higher prediction performance. The future availability of more X-ray crystal structures will further widen the performance gap between RF-based and MLR-based scoring functions. This further stresses the importance of substituting RF for MLR in scoring function development.
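The RF-versus-MLR contrast is easy to reproduce whenever the response depends non-additively on the features, which is the regime the abstract describes. The data below are synthetic and the hyperparameters arbitrary:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(8)
X = rng.uniform(-1, 1, size=(1000, 5))
# Non-additive response: a product term plus an absolute value, the kind of
# non-linearity a fixed additive functional form cannot represent.
y = X[:, 0] * X[:, 1] + np.abs(X[:, 2]) + 0.1 * rng.normal(size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
mlr = LinearRegression().fit(X_tr, y_tr)
print("RF  test R2:", round(r2_score(y_te, rf.predict(X_te)), 2))
print("MLR test R2:", round(r2_score(y_te, mlr.predict(X_te)), 2))
print("RF feature importances:", np.round(rf.feature_importances_, 2))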
11. The use of artificial neural networks and multiple linear regression to predict rate of medical waste generation International Nuclear Information System (INIS) 2009-01-01 Prediction of the amount of hospital waste production will be helpful for the storage, transportation and disposal stages of hospital waste management. Based on this fact, two predictive models, artificial neural networks (ANNs) and multiple linear regression (MLR), were applied to predict the rate of medical waste generation in total and by type (sharps, infectious and general). In this study, a 5-fold cross-validation procedure on a database containing a total of 50 hospitals of Fars province (Iran) was used to verify the performance of the models. Three performance measures, MAR, RMSE and R², were used to evaluate the performance of the models. MLR, as a conventional model, obtained poor values of the prediction performance measures. However, MLR distinguished hospital capacity and bed occupancy as the more significant parameters. On the other hand, ANNs, as a more powerful model that had not previously been applied to predicting the rate of medical waste generation, showed high performance measure values, especially an R² of 0.99, confirming the good fit to the data. Such satisfactory results could be attributed to the non-linear nature of ANNs in problem solving, which provides the opportunity to relate independent variables to dependent ones non-linearly. In conclusion, the obtained results showed that our ANN-based model approach is very promising and may play a useful role in developing a better cost-effective strategy for waste management in the future.
12. Determination of DPPH Radical Oxidation Caused by Methanolic Extracts of Some Microalgal Species by Linear Regression Analysis of Spectrophotometric Measurements Directory of Open Access Journals (Sweden) Ulf-Peter Hansen 2007-10-01 Full Text Available The modified spectrophotometric method demonstrated here makes use of the 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical and its specific absorbance properties. The absorbance decreases when the radical is reduced by antioxidants. In contrast to other investigations, the absorbance was measured at a wavelength of 550 nm. This wavelength enabled measurement of the stable free DPPH radical without interference from microalgal pigments. This approach was applied to methanolic microalgae extracts at two different DPPH concentrations. The changes in absorbance measured versus the concentration of the methanolic extract resulted in curves with a linear decrease ending in a saturation region. Linear regression analysis of the linear part of DPPH reduction versus extract concentration enabled the determination of the antioxidative potentials of the microalgae's methanolic extracts, independently of the employed DPPH concentrations. The resulting slopes showed significant differences (6-34 μmol DPPH g⁻¹ extract concentration) between the different species of microalgae (Anabaena sp., Isochrysis galbana, Phaeodactylum tricornutum, Porphyridium purpureum, Synechocystis sp. PCC6803) in their ability to reduce the DPPH radical. The independence of the signal from the DPPH concentration is a valuable advantage over the determination of the EC50 value.
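The slope-extraction step of the DPPH method reduces to one call to scipy.stats.linregress on the linear portion of the curve. The absorbance values, units, and the choice of which points count as "linear" are invented for this sketch:

import numpy as np
from scipy.stats import linregress

# Hypothetical absorbance-change data: a linear decrease at low extract
# concentrations, then saturation. Only the linear part (here the first
# five points, chosen by eye) enters the regression, as in the method above.
extract = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0])   # mg/mL (toy)
delta_abs = np.array([0.00, 0.11, 0.21, 0.32, 0.41, 0.52, 0.55, 0.56])

linear_part = slice(0, 5)
fit = linregress(extract[linear_part], delta_abs[linear_part])
print(f"slope = {fit.slope:.3f} per mg/mL, R^2 = {fit.rvalue**2:.4f}")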
13. A step-by-step guide to non-linear regression analysis of experimental data using a Microsoft Excel spreadsheet. Science.gov (United States) Brown, A M 2001-06-01 The objective of this study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user-input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least-squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y = f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
14. Robust Multiple Linear Regression. Science.gov (United States) 1982-12-01 …difficulty, but it might have more solutions corresponding to local minima. Influence Function of M-Estimates: the influence function describes the effect of contamination of the distribution function on the estimate. In the case of M-estimates the influence function was found to be proportional to ψ and given as IC(x; F, T) = ψ(x, T(F)) / ∫ ψ′(x, T(F)) F(dx), where the inverse of any distribution function F is defined in the usual way as F⁻¹(s) = inf{x | F(x) ≥ s}, 0 < s ≤ 1. Influence Function of L-Estimates: In a
15. Multiple linear regressions Abstract. The predictive analysis based on quantitative structure-activity relationships (QSAR) on benzim… could lead to treatment of obesity, diabetes and related conditions. … After discussing the physical and chemical meaning of the …
16. (Non) linear regression modelling NARCIS (Netherlands) Cizek, P.; Gentle, J.E.; Hardle, W.K.; Mori, Y. 2012-01-01 We will study causal relationships of a known form between random variables. Given a model, we distinguish one or more dependent (endogenous) variables Y = (Y1,…,Yl), l ∈ N, which are explained by a model, and independent (exogenous, explanatory) variables X = (X1,…,Xp), p ∈ N, which explain or
17. Modeling the kinetics of essential oil hydrodistillation from juniper berries (Juniperus communis L.) using non-linear regression Directory of Open Access Journals (Sweden) 2017-01-01 Full Text Available This paper presents kinetics modeling of essential oil hydrodistillation from juniper berries (Juniperus communis L.) using a non-linear regression methodology. The proposed model has a polynomial-logarithmic form. The initial equation of the proposed non-linear model is q = q∞·(a·(log t)² + b·log t + c), and by substituting a1 = q∞·a, b1 = q∞·b and c1 = q∞·c, the final equation is obtained as q = a1·(log t)² + b1·log t + c1. In this equation q is the quantity of oil obtained at time t, while a1, b1 and c1 are parameters to be determined for each sample. From the final equation it can be seen that the key parameter q∞, which represents the maximal oil quantity obtained after infinite time, is already included in the parameters a1, b1 and c1. In this way, experimental determination of this parameter is avoided. Using the proposed model with parameters obtained by regression, the values of oil hydrodistillation over time were calculated for each sample and compared to the experimental values. In addition, two kinetic models previously proposed in the literature were applied to the same experimental results. The developed model provided better agreement with the experimental values than the two generally accepted kinetic models of this process. The average values of the error measures (RSS, RSE, AIC and MRPD) obtained for our model (0.005; 0.017; −84.33; 1.65) were generally lower than the corresponding values for the other two models (0.025; 0.041; −53.20; 3.89 and 0.0035; 0.015; −86.83; 1.59). Also, parameter estimation for the proposed model was significantly simpler (maximum 2 iterations per sample) using the non-linear regression than for the existing models (maximum 9 iterations per sample). [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. TR-35026]
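Because the final hydrodistillation model is a quadratic in log t, it is linear in a1, b1 and c1 and can be fitted in closed form, which is consistent with the very low iteration counts reported. The yield data and the base-10 logarithm below are assumptions of this sketch:

import numpy as np

# Invented oil-yield data: q (g per 100 g) versus hydrodistillation time t (min).
t = np.array([10, 20, 40, 60, 90, 120, 180], dtype=float)
q = np.array([0.31, 0.55, 0.78, 0.91, 1.02, 1.08, 1.15])

# Fit q = a1*(log t)^2 + b1*log t + c1 by ordinary least squares on log t.
a1, b1, c1 = np.polyfit(np.log10(t), q, deg=2)
q_hat = np.polyval([a1, b1, c1], np.log10(t))
rss = float(np.sum((q - q_hat) ** 2))
print(f"a1={a1:.4f}, b1={b1:.4f}, c1={c1:.4f}, RSS={rss:.5f}")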
18. Testing the macroeconomic impact of the budget deficit in EU Member States using linear regression with fixed effects Directory of Open Access Journals (Sweden) Dalian Marius DORAN 2017-11-01 The article aims to study the impact of the budget balance, whether in surplus or deficit, on the main indicator characterizing the economic growth of a country, namely GDP, and on the inflation rate in the 27 European Union Member States and the United Kingdom. Panel data covering the period from 2001 to 2015 were used for the analysis. The method used for the analysis is linear regression with fixed effects and Driscoll-Kraay standard errors. The dependent variables are the growth rate of real GDP and the inflation rate, and the independent variable is the budget balance (surplus or deficit). The results obtained using the econometric software Stata show a positive impact of the budget balance on growth in the European Union over the analyzed period.
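The within (fixed-effects) estimator used for such panels can be written out by demeaning within each country; this sketch uses ordinary standard errors, since Driscoll-Kraay errors need a dedicated routine, and all panel values are simulated:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(13)
n_countries, n_years = 28, 15
country = np.repeat(np.arange(n_countries), n_years)
alpha = rng.normal(size=n_countries)[country]        # country fixed effects
budget_balance = rng.normal(size=country.size) + 0.5 * alpha
growth = 0.4 * budget_balance + alpha + rng.normal(size=country.size)

def demean_by(v, group):
    sums = np.zeros(group.max() + 1)
    np.add.at(sums, group, v)
    return v - (sums / np.bincount(group))[group]

# Demeaning removes the time-invariant country effects (the within transform).
y_w = demean_by(growth, country)
x_w = demean_by(budget_balance, country)
fe = sm.OLS(y_w, x_w).fit()
print("within estimate of the budget-balance effect:", round(fe.params[0], 3))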
19. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression. Science.gov (United States) Meng, Yilin; Roux, Benoît 2015-08-11 The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimensions. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes comparable in accuracy to WHAM can be generated at a small fraction of the cost. 20. Time Series Analysis of Soil Radon Data Using Multiple Linear Regression and Artificial Neural Network in Seismic Precursory Studies Science.gov (United States) Singh, S.; Jaishi, H. P.; Tiwari, R. P.; Tiwari, R. C. 2017-07-01 This paper reports the analysis of soil radon data recorded in seismic zone-V, located in the northeastern part of India (latitude 23.73°N, longitude 92.73°E). Continuous measurements of soil-gas emission along the Chite fault in Mizoram (India) were carried out, with solid-state nuclear track detectors replaced at weekly intervals. The present study covers the period from March 2013 to May 2015 and used LR-115 Type II detectors, manufactured by Kodak Pathe, France. In order to reduce the influence of meteorological parameters, statistical analysis tools such as multiple linear regression and an artificial neural network were used. A decrease in radon concentration was recorded prior to some earthquakes that occurred during the observation period. Some false anomalies were also recorded, which may be attributed to ongoing crustal deformation that was not major enough to produce an earthquake. 1. QSAR Modeling of COX-2 Inhibitory Activity of Some Dihydropyridine and Hydroquinoline Derivatives Using Multiple Linear Regression (MLR) Method. Science.gov (United States) Akbari, Somaye; Zebardast, Tannaz; Zarghi, Afshin; Hajimahdi, Zahra 2017-01-01 The COX-2 inhibitory activities of some 1,4-dihydropyridine and 5-oxo-1,4,5,6,7,8-hexahydroquinoline derivatives were modeled by quantitative structure-activity relationship (QSAR) analysis using the stepwise multiple linear regression (SW-MLR) method. The built model was robust and predictive, with correlation coefficients (R2) of 0.972 and 0.531 for the training and test groups, respectively. The quality of the model was evaluated by leave-one-out (LOO) cross-validation (LOO correlation coefficient (Q2) of 0.943) and Y-randomization. We also employed a leverage approach for defining the applicability domain of the model. Based on the QSAR model results, the COX-2 inhibitory activity of the selected data set correlated with the BEHm6 (highest eigenvalue n. 6 of the Burden matrix/weighted by atomic masses), Mor03u (signal 03/unweighted) and IVDE (mean information content on the vertex degree equality) descriptors derived from the compounds' structures.
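A bare-bones illustration of stepwise descriptor selection for an MLR model of the kind used in the COX-2 study above; the descriptor matrix, the true coefficients and the stopping rule are all invented for the example, and a real SW-MLR implementation would use F-to-enter/F-to-remove tests rather than this crude R2 threshold.

```python
# Illustrative forward stepwise selection for multiple linear regression.
import numpy as np

def fit_r2(X, y):
    # Ordinary least squares with intercept; returns the coefficient of determination.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 10))                 # 10 hypothetical molecular descriptors
y = 1.5 * X[:, 2] - 0.8 * X[:, 7] + rng.normal(scale=0.3, size=40)

selected, remaining, best_r2 = [], list(range(X.shape[1])), 0.0
while remaining:
    r2, j = max((fit_r2(X[:, selected + [j]], y), j) for j in remaining)
    if r2 - best_r2 < 0.01:                   # stop when the R2 gain is negligible
        break
    selected.append(j); remaining.remove(j); best_r2 = r2
print("selected descriptors:", selected, "R2 =", round(best_r2, 3))
```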
2. Comparison of a neural network with multiple linear regression for quantitative analysis in ICP-atomic emission spectroscopy International Nuclear Information System (INIS) Schierle, C.; Otto, M. 1992-01-01 A two-layer perceptron with backpropagation of error is used for quantitative analysis in ICP-AES. The network was trained with emission spectra of two interfering lines of Cd and As, and the concentrations of both elements were subsequently estimated from mixture spectra. The spectra of the Cd and As lines were also used to perform multiple linear regression (MLR) via the calculation of the pseudoinverse S+ of the sensitivity matrix S. In the present paper it is shown that close relations exist between the operation of the perceptron and the MLR procedure. These are most clearly apparent in the correlation between the weights of the backpropagation network and the elements of the pseudoinverse. Using MLR, the confidence intervals over the predictions are exploited to correct for the wavelength shift of the optical device. (orig.)
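The MLR-via-pseudoinverse calibration in the ICP-AES record above reduces to a few lines of linear algebra; the sensitivity matrix and concentrations below are invented for a hypothetical two-element, three-channel example.

```python
# Illustrative MLR calibration via the Moore-Penrose pseudoinverse S+ of the
# sensitivity matrix S (rows: spectral channels; columns: analytes, e.g. Cd, As).
import numpy as np

S = np.array([[1.0, 0.3],
              [0.2, 0.9],
              [0.5, 0.5]])                      # hypothetical sensitivities
c_true = np.array([2.0, 1.5])                   # "unknown" concentrations
rng = np.random.default_rng(3)
spectrum = S @ c_true + rng.normal(scale=0.01, size=S.shape[0])  # mixture spectrum

S_plus = np.linalg.pinv(S)                      # pseudoinverse, as in the record
c_est = S_plus @ spectrum                       # least-squares concentration estimate
print("estimated concentrations:", np.round(c_est, 3))
```

The elements of S_plus are the quantities the paper correlates with the trained network's weights.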
3. Research on refugees and immigrants social integration in Yunnan Border Area: An empirical analysis on the multivariable linear regression model Directory of Open Access Journals (Sweden) Peng Nai 2016-03-01 Full Text Available A great number of immigrants reside permanently in the Yunnan border area of China. To some extent, these people qualify as refugees or immigrants under international rules, which significantly shapes the social diversity of this area. However, this kind of social diversity can impair the social order. Research on local immigrant integration can therefore positively influence the governance of the local society. This essay accordingly attempts to acquire data on the living situation of these border-area immigrants and refugees. The analysis of the social integration of refugees and immigrants in the Yunnan border area of China is deployed through multivariable linear regression modeling based on these data, in order to propose more achievable resolutions. 4. Modelos de regressão não linear aplicados a grupos de acessos de alho [Non-linear regression models applied to groups of garlic accessions] OpenAIRE Reis, Renata M; Cecon, Paulo R; Puiatti, Mário; Finger, Fernando L; Nascimento, Moysés; Silva, Fabyano F; Carneiro, Antônio PS; Silva, Anderson R 2014-01-01 The main objective of this study was to compare non-linear regression models able to describe the accumulation of dry matter in different parts of the garlic plant over time (60, 90, 120 and 150 days after planting). A further objective was to identify similar accessions with respect to the evaluated traits by means of cluster analysis. Twenty garlic accessions belonging to the Vegetable Germplasm Bank of the Universidade Federal de Viçosa (BGH/UFV) were used. The dry matter content... 5. Linear-regression convolutional neural network for fully automated coronary lumen segmentation in intravascular optical coherence tomography Science.gov (United States) Yong, Yan Ling; Tan, Li Kuo; McLaughlin, Robert A.; Chee, Kok Han; Liew, Yih Miin 2017-12-01 Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery diseases during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves an average locational accuracy of the vessel wall of 22 microns, and 0.985 and 0.970 in Dice coefficient and Jaccard similarity index, respectively. The average absolute error of luminal area estimation is 1.38%. The processing rate is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of vessel lumen in an intraoperative time frame. 6. QSAR study on the histamine (H3) receptor antagonists using the genetic algorithm: multi-parameter linear regression Directory of Open Access Journals (Sweden) 2012-01-01 Full Text Available A quantitative structure activity relationship (QSAR) model has been produced for predicting the antagonist potency of biphenyl derivatives at human histamine (H3) receptors. The molecular structures of the compounds are numerically represented by various kinds of molecular descriptors. The whole data set was divided into training and test sets. Genetic algorithm based multiple linear regression is used to select the most statistically effective descriptors. The final QSAR model (N = 24, R2 = 0.916, F = 51.771, Q2 LOO = 0.872, Q2 LGO = 0.847, Q2 BOOT = 0.857) was fully validated employing the leave-one-out (LOO) cross-validation approach, Fischer statistics (F), the Y-randomisation test, and predictions based on the test data set. The test set presented an external prediction power of R2 test = 0.855. In conclusion, the QSAR model generated can be used as a valuable tool for designing similar groups of new antagonists of histamine (H3) receptors.
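The leave-one-out Q2 reported in QSAR records like the one above can be computed directly; the descriptor data here are simulated and the model is a plain MLR, so the printed value is purely illustrative.

```python
# Illustrative leave-one-out cross-validated Q2 for a multiple linear regression.
import numpy as np

def loo_q2(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    press = 0.0
    for i in range(len(y)):                     # refit with observation i held out
        mask = np.arange(len(y)) != i
        beta, *_ = np.linalg.lstsq(X1[mask], y[mask], rcond=None)
        press += (y[i] - X1[i] @ beta) ** 2
    return 1.0 - press / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(4)
X = rng.normal(size=(24, 3))                    # N = 24, three descriptors (invented)
y = X @ np.array([0.7, -0.4, 0.2]) + rng.normal(scale=0.2, size=24)
print("Q2_LOO =", round(loo_q2(X, y), 3))
```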
8. Multiple linear regression to develop strength scaled equations for knee and elbow joints based on age, gender and segment mass DEFF Research Database (Denmark) D'Souza, Sonia; Rasmussen, John; Schwirtz, Ansgar 2012-01-01 ... and valuable ergonomic tool. Objective: To investigate age and gender effects on the torque-producing ability in the knee and elbow in older adults. To create strength scaled equations based on age, gender, upper/lower limb lengths and masses using multiple linear regression. To reduce the number of dependent ... flexors. Results: Males were significantly stronger than females across all age groups. Elbow peak torque (EPT) was better preserved from the 60s to the 70s, whereas knee peak torque (KPT) reduced significantly (P ...). Gender, thigh mass and age best ... predicted KPT (R2=0.60). Gender, forearm mass and age best predicted EPT (R2=0.75). Good cross-validation was established for both elbow and knee models. Conclusion: This cross-sectional study of muscle strength created and validated strength scaled equations of EPT and KPT using only gender, segment mass... 9. Estimating severity of sideways fall using a generic multi linear regression model based on kinematic input variables. Science.gov (United States) van der Zijden, A M; Groen, B E; Tanck, E; Nienhuis, B; Verdonschot, N; Weerdesteyn, V 2017-03-21 Many research groups have studied fall impact mechanics to understand how fall severity can be reduced to prevent hip fractures. Yet, direct impact force measurements with force plates are restricted to a very limited repertoire of experimental falls. The purpose of this study was to develop a generic model for estimating hip impact forces (i.e. fall severity) in in vivo sideways falls without the use of force plates. Twelve experienced judokas performed sideways Martial Arts (MA) and Block ('natural') falls on a force plate, both with and without a mat on top. Data were analyzed to determine the hip impact force and to derive 11 selected (subject-specific and kinematic) variables. Falls from kneeling height were used in a stepwise regression procedure to assess the effects of these input variables and build the model. The final model includes four input variables: one subject-specific measure and three kinematic variables, namely maximum upper body deceleration, body mass, shoulder angle at the instant of 'maximum impact' and maximum hip deceleration. The results showed that estimated and measured hip impact forces were linearly related (explained variances ranging from 46 to 63%). Hip impact forces of MA falls onto the mat from a standing position (3650±916 N) estimated by the final model were comparable with measured values (3698±689 N), even though these data were not used for training the model. In conclusion, a generic linear regression model was developed that enables the assessment of fall severity through kinematic measures of sideways falls, without using force plates. Copyright © 2017 Elsevier Ltd. All rights reserved.
10. Predictive modelling of chromium removal using multiple linear and nonlinear regression with special emphasis on operating parameters of bioelectrochemical reactor. Science.gov (United States) More, Anand Govind; Gupta, Sunil Kumar 2018-03-24 A bioelectrochemical system (BES) is a novel, self-sustaining metal removal technology functioning on the utilization of the chemical energy of organic matter with the help of microorganisms. Experimental trials in a two-chambered BES reactor were conducted with varying substrate concentrations using sodium acetate (500 mg/L to 2000 mg/L COD) and different initial chromium concentrations (Cri) (10-100 mg/L) at different cathode pH (pH 1-7). In the current study, mathematical models based on multiple linear regression (MLR) and non-linear regression (NLR) approaches were developed using laboratory experimental data for determining the chromium removal efficiency (CRE) in the cathode chamber of the BES. Substrate concentration, rate of substrate consumption, Cri, pH, temperature and hydraulic retention time (HRT) were the operating process parameters of the reactor considered for development of the proposed models. MLR showed a better correlation coefficient (0.972) than NLR (0.952). Validation of the models using t-test analysis revealed the unbiasedness of both models, with the t-critical value (2.04) greater than the t-calculated values for MLR (-0.708) and NLR (-0.86). The root-mean-square errors (RMSE) for MLR and NLR were 5.06% and 7.45%, respectively. Comparison between the two models suggested MLR to be the best-suited model for predicting chromium removal behavior using the BES technology and for specifying a set of operating conditions for the BES. Modelling the behavior of CRE will be helpful for scale-up of the BES technology at an industrial level. Copyright © 2018 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved. 11. Comparison of Multiple Linear Regressions and Neural Networks based QSAR models for the design of new antitubercular compounds. Science.gov (United States) Ventura, Cristina; Latino, Diogo A R S; Martins, Filomena 2013-01-01 The performance of two QSAR methodologies, namely Multiple Linear Regressions (MLR) and Neural Networks (NN), towards the modeling and prediction of antitubercular activity was evaluated and compared. A data set of 173 potentially active compounds belonging to the hydrazide family and represented by 96 descriptors was analyzed. Models were built with Multiple Linear Regressions (MLR), single Feed-Forward Neural Networks (FFNNs), ensembles of FFNNs and Associative Neural Networks (AsNNs), using four different data sets and different types of descriptors. The predictive ability of the different techniques was assessed and discussed on the basis of different validation criteria, and the results show in general a better performance of AsNNs in terms of learning ability and prediction of antitubercular behavior when compared with all other methods. MLR has, however, the advantage of pinpointing the most relevant molecular characteristics responsible for the behavior of these compounds against Mycobacterium tuberculosis. The best results for the larger data set (94 compounds in the training set and 18 in the test set) were obtained with AsNNs using seven descriptors (R2 of 0.874 and RMSE of 0.437, against R2 of 0.845 and RMSE of 0.472 for MLRs, on the test set). Counter-Propagation Neural Networks (CPNNs) were trained with the same data sets and descriptors. From the scrutiny of the weight levels in each CPNN and the information retrieved from the MLRs, a rational design of potentially active compounds was attempted. Two new compounds were synthesized and tested against M. tuberculosis, showing an activity close to that predicted by the majority of the models. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
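A minimal sketch of the MLR-versus-neural-network comparison made in the antitubercular study above, using scikit-learn; the data set is synthetic and the network size is an arbitrary choice, so the numbers carry no scientific meaning.

```python
# Illustrative comparison of MLR and a small feed-forward network on one data set.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
X = rng.normal(size=(173, 7))                   # 173 compounds, 7 descriptors (invented)
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=173)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=18, random_state=0)
models = [("MLR", LinearRegression()),
          ("FFNN", MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))]
for name, model in models:
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(name, "test RMSE:", round(rmse, 3))
```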
24 independent variables were used to define their relationship with SOC content. Subsequently, using multiple linear regression analysis, the effects of these variables on the SOC correlation were considered. Finally, the best parameters determined with the regression analysis were used in a climate change scenario. The model indicated that SOC in a future scenario of climate change depends on the average temperature of the coldest quarter (41.9%), the average temperature of the warmest quarter (34.5%), annual precipitation (22.2%) and annual average temperature (1.3%). When the current and future situations were compared, the SOC content in the study area was reduced by 35.4%, and a trend towards migration to higher latitudes and altitudes was observed. Copyright © 2017 Elsevier B.V. All rights reserved. 13. QSAR study of HCV NS5B polymerase inhibitors using the genetic algorithm-multiple linear regression (GA-MLR). Science.gov (United States) 2016-01-01 A quantitative structure-activity relationship (QSAR) study was employed for predicting the inhibitory activities of Hepatitis C virus (HCV) NS5B polymerase inhibitors. A data set consisting of 72 compounds was selected, and different types of molecular descriptors were calculated. The whole data set was split into a training set (80% of the dataset) and a test set (20% of the dataset) using principal component analysis. The stepwise (SW) and genetic algorithm (GA) techniques were used as variable selection tools. The multiple linear regression method was then used to linearly correlate the selected descriptors with the inhibitory activities. Several validation techniques, including leave-one-out and leave-group-out cross-validation and the Y-randomization method, were used to evaluate the internal capability of the derived models. The external prediction ability of the derived models was further analyzed using modified r2, concordance correlation coefficient values and the Golbraikh and Tropsha acceptable model criteria. Based on the derived results (GA-MLR), some new insights toward the molecular structural requirements for obtaining better inhibitory activity were obtained. 14. A non-linear regression analysis program for describing electrophysiological data with multiple functions using Microsoft Excel. Science.gov (United States) Brown, Angus M 2006-04-01 The objective of this present study was to demonstrate a method for fitting complex electrophysiological data with multiple functions using the SOLVER add-in of the ubiquitous spreadsheet Microsoft Excel. SOLVER minimizes the sum of the squared differences between the data to be fit and the function(s) describing the data, using an iterative generalized reduced gradient method. While it is a straightforward procedure to fit data with linear functions, and we have previously demonstrated a method of non-linear regression analysis of experimental data based upon a single function, it is more complex to fit data with multiple functions, usually requiring specialized, expensive computer software. In this paper we describe an easily understood program for fitting experimentally acquired data, in this case the stimulus-evoked compound action potential from the mouse optic nerve, with multiple Gaussian functions. The program is flexible and can be applied to describe data with a wide variety of user-input functions.
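The multi-function fitting task that the Excel program above addresses, describing a waveform as a sum of Gaussians, looks like this in Python; the waveform, the number of components and the initial guesses are invented for illustration.

```python
# Illustrative fit of a signal with a sum of Gaussian components.
import numpy as np
from scipy.optimize import curve_fit

def gaussians(x, *p):
    # p holds (amplitude, centre, width) triplets, one per component.
    y = np.zeros_like(x)
    for a, mu, sig in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-((x - mu) / sig) ** 2)
    return y

rng = np.random.default_rng(6)
x = np.linspace(0, 10, 200)
y = gaussians(x, 1.0, 3.0, 0.6, 0.7, 6.0, 0.9) + rng.normal(scale=0.02, size=x.size)

p0 = [1, 3, 1, 1, 6, 1]                         # rough starting triplets, as SOLVER needs
popt, _ = curve_fit(gaussians, x, y, p0=p0)
print("fitted (a, mu, sigma) per component:", np.round(popt, 2))
```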
15. Least median of squares and iteratively re-weighted least squares as robust linear regression methods for fluorimetric determination of α-lipoic acid in capsules in ideal and non-ideal cases of linearity. Science.gov (United States) Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F 2018-03-26 This study outlines two robust regression approaches, namely least median of squares (LMS) and iteratively re-weighted least squares (IRLS), and investigates their application to the instrumental analysis of nutraceuticals (that is, fluorescence quenching of the merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (ΔF and F-ratio) under ideal and non-ideal linearity conditions. For each condition, data were treated using three regression fittings: ordinary least squares (OLS), LMS and IRLS. Assessment of linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully studied for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and all regression parameters for both methods and both conditions. In the ideal linearity condition, the intercept and slope changed insignificantly, but a dramatic change was observed in the intercept for the non-ideal linearity condition. Under both linearity conditions, the LOD and LOQ values obtained after robust regression line fitting were lower than those obtained before data treatment. The results obtained after statistical treatment indicated that the linearity ranges for drug determination could be extended to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, using both fluorimetric methods, treated by parametric OLS and after treatment by robust LMS and IRLS, were compared for both linearity conditions. Copyright © 2018 John Wiley & Sons, Ltd.
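Of the two robust fits used above, IRLS is the easier to sketch; the following numpy version uses Huber weights with the conventional tuning constant 1.345 and a MAD scale estimate, choices that are standard but not necessarily those of the paper.

```python
# Illustrative iteratively re-weighted least squares (IRLS) with Huber weights.
import numpy as np

def irls(x, y, k=1.345, n_iter=20):
    X = np.column_stack([np.ones_like(x), x])
    w = np.ones_like(y)
    for _ in range(n_iter):
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # weighted least squares
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12         # robust scale via MAD
        u = np.abs(r / s)
        w = np.where(u <= k, 1.0, k / u)                  # Huber weight function
    return beta

rng = np.random.default_rng(7)
x = np.linspace(0, 10, 30)
y = 0.5 * x + 1.0 + rng.normal(scale=0.1, size=30)
y[[3, 17]] += 5.0                                         # two gross outliers
print("robust intercept and slope:", np.round(irls(x, y), 3))
```

Downweighting large residuals this way is what lets the calibration line shrug off the outlying points that would drag an OLS fit.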
16. Multi-stratified multiple regression tests of the linear/no-threshold theory of radon-induced lung cancer International Nuclear Information System (INIS) Cohen, B.L. 1992-01-01 A plot of lung-cancer rates versus radon exposures in 965 US counties, or in all US states, has a strong negative slope, b, in sharp contrast to the strong positive slope predicted by linear/no-threshold theory. The discrepancy between these slopes exceeds 20 standard deviations (SD). Including smoking frequency in the analysis substantially improves fits to a linear relationship but has little effect on the discrepancy in b, because correlations between smoking frequency and radon levels are quite weak. Including 17 socioeconomic variables (SEV) in multiple regression analysis reduces the discrepancy to 15 SD. Data were divided into segments by stratifying on each SEV in turn, on geography, and on both simultaneously, giving over 300 data sets to be analyzed individually, but negative slopes predominated. The slope is negative whether one considers only the most urban counties or only the most rural; only the richest or only the poorest; only the richest in the South Atlantic region or only the poorest in that region, etc.; and for all the strata in between. Since this is an ecological study, the well-known problems with ecological studies were investigated and found not to be applicable here. The "ecological fallacy" was shown not to apply in testing a linear/no-threshold theory, and the vulnerability to confounding is greatly reduced when confounding factors are only weakly correlated with radon levels, as is generally the case here. All confounding factors known to correlate with radon and with lung cancer were investigated quantitatively and found to have little effect on the discrepancy 17. An Introduction to the Hybrid Approach of Neural Networks and the Linear Regression Model: An Illustration in the Hedonic Pricing Model of Building Costs OpenAIRE 浅野, 美代子; マーコ, ユー K.W. 2007-01-01 This paper introduces the hybrid approach of neural networks and the linear regression model proposed by Asano and Tsubaki (2003). Neural networks are often credited with superiority in data consistency, whereas the linear regression model provides simple interpretation of the data, enabling researchers to verify their hypotheses. The hybrid approach aims at combining the strengths of these two well-established statistical methods. A step-by-step procedure for performing the hybrid approach is pr... 18. Evapotranspiration Modeling by Linear, Nonlinear Regression and Artificial Neural Network in Greenhouse (Case Study: Reference Crop, Cucumber and Tomato) Directory of Open Access Journals (Sweden) 2017-01-01 ... important models to estimate ETc in greenhouses. The inputs of these models are net radiation, temperature, days after planting and air vapour pressure deficit (or relative humidity). Materials and Methods: In this study, the daily ETc of a reference crop and of greenhouse tomato and cucumber crops was measured using the lysimeter method in the Urmia region. Several linear and nonlinear regressions and artificial neural networks were considered for ETc modelling in the greenhouse. For this purpose, the meteorological parameters affecting the ETc process were considered and measured: air temperature (T), air humidity (RH), air pressure (P), air vapour pressure deficit (VPD), days after planting (N) and greenhouse net radiation (SR). According to the goodness of fit, the different artificial neural network and regression models were compared and evaluated. Furthermore, a sensitivity analysis based on the partial derivatives of the regression models was conducted. The accuracy and performance of the employed models were judged by ten statistical indices, namely root mean square error (RMSE), normalized root mean square error (NRMSE) and coefficient of determination (R2). Results and Discussion: Based on the results, the most accurate regression model for reference ETc prediction was a three-variable exponential function of VPD, RH and SR with RMSE=0.378 mm day-1. The RMSE of the optimal artificial neural network for reference ET prediction was 0.089 and 0.365 mm day-1 for the training and test data sets, respectively. The performance of logarithmic and exponential functions for predicting cucumber ETc was good, especially with highly dependent variables, and the most accurate regression model for cucumber ET prediction was an exponential function of five variables: VPD, N, T, RH and SR with RMSE=0.353 mm day-1. In addition, for tomato ET prediction, the most accurate regression model was an exponential function of four variables: VPD, N, RH and SR with RMSE=0.329 mm day-1. The best 19. Assessment of triglyceride and cholesterol in overweight people based on multiple linear regression and artificial intelligence model.
Science.gov (United States) Ma, Jing; Yu, Jiong; Hao, Guangshu; Wang, Dan; Sun, Yanni; Lu, Jianxin; Cao, Hongcui; Lin, Feiyan 2017-02-20 The prevalence of hyperlipemia is increasing around the world. Our aims were to analyze the relationship of triglyceride (TG) and cholesterol (TC) with indexes of liver and kidney function, and to develop a prediction model for TG and TC in overweight people. A total of 302 healthy adult subjects and 273 overweight subjects were enrolled in this study. The levels of fasting TG (fs-TG), TC (fs-TC) and blood glucose, and indexes of liver and kidney function, were measured and analyzed by correlation analysis and multiple linear regression (MLR). A back-propagation artificial neural network (BP-ANN) was applied to develop prediction models of fs-TG and fs-TC. The results showed significant differences in biochemical indexes between healthy and overweight people. The correlation analysis showed that fs-TG was related to weight, height, blood glucose, and indexes of liver and kidney function, while fs-TC was correlated with age and indexes of liver function (P < 0.01). The MLR analysis indicated that the regression equations of fs-TG and fs-TC were both statistically significant (P < 0.01) when the independent indexes were included. The BP-ANN model of fs-TG reached its training goal at 59 epochs, while the fs-TC model achieved high prediction accuracy after training for 1000 epochs. In conclusion, fs-TG and fs-TC were highly related to weight, height, age, blood glucose, and indexes of liver and kidney function. Based on these related variables, fs-TG and fs-TC can be predicted by BP-ANN models in overweight people. 20. The review of the achieved degree of sustainable development in South Eastern Europe - The use of linear regression method Energy Technology Data Exchange (ETDEWEB) Golusin, Mirjana [Educons University, Vojvode Putnika st. bb, 21013 Sremska Kamenica (RS)]; Ivanovic, Olja Munitlak [Faculty of Business in Services, Vojvode Putnika st. bb, 21013 Sremska Kamenica (RS)]; Teodorovic, Natasa [Faculty of Entrepreneurial Management, Modene st. 5, 21000 Novi Sad (RS)] 2011-01-15 The need for preservation and adequate management of the quality of the environment requires the development of new methods and techniques by which the achieved degree of sustainable development can be defined, as well as the laws regarding the relationships among its subsystems. The main objective of the research is to point to a strong contradiction between the development of the ecological and economic subsystems. In order to improve on previous research, this study suggests the use of linear evaluation, by which it is possible to determine the exact degree of contradiction between these two subsystems and to define the regularities as well as the deviations. The authors present the essential steps that were used. Conducted by the method of linear regression, this research shows a significant negative correlation between ecological and economic subsystem indicators, whereas its value of R2 = 0.58 proves the expected contradiction between the two previously mentioned subsystems.
By observing sustainable development as a two-dimensional system that includes ecological and economic indicators, the authors suggest a methodology for modelling the relationship between economic and ecological development as an orthogonal distance between the degree of the current state, measured by the relation between economic and ecological indicators of sustainable development, and the degree obtained in a traditional way. The method used in this research proved to be extremely suitable for modelling the relationship between the ecological and economic subsystems of sustainable development. This research was conducted on a repeated sample of countries of South East Europe, including data for France and Germany as two countries at the highest level of development in the European Union. (author) 1. Price promotions on healthier compared with less healthy foods: a hierarchical regression analysis of the impact on sales and social patterning of responses to promotions in Great Britain. Science.gov (United States) Nakamura, Ryota; Suhrcke, Marc; Jebb, Susan A; Pechey, Rachel; Almiron-Roig, Eva; Marteau, Theresa M 2015-04-01 There is growing concern, but limited evidence, that price promotions contribute to a poor diet and the social patterning of diet-related disease. We examined the following questions: 1) Are less-healthy foods more likely to be promoted than healthier foods? 2) Are consumers more responsive to promotions on less-healthy products? 3) Are there socioeconomic differences in food purchases in response to price promotions? With the use of hierarchical regression, we analyzed data on purchases of 11,323 products within 135 food and beverage categories from 26,986 households in Great Britain during 2010. Major supermarkets operated the same price promotions in all branches. The number of stores that offered price promotions on each product in each week was used to measure the frequency of price promotions. We assessed the healthiness of each product by using a nutrient profiling (NP) model. A total of 6788 products (60%) were in healthier categories and 4535 products (40%) were in less-healthy categories. There was no significant gap in the frequency of promotion by the healthiness of products, neither within nor between categories. However, after we controlled for the reference price, price discount rate, and brand-specific effects, the sales uplift arising from price promotions was larger in less-healthy than in healthier categories; a 1-SD increase in the category mean NP score, implying the category becomes less healthy, was associated with an additional 7.7-percentage-point increase in sales (from 27.3% to 35.0%; P ...). The sales uplift from promotions was larger for higher-socioeconomic-status (SES) groups than for lower ones (34.6% for the high-SES group, 28.1% for the middle-SES group, and 23.1% for the low-SES group). Finally, there was no significant SES gap in the absolute volume of purchases of less-healthy foods made on promotion. Attempts to limit promotions on less-healthy foods could improve the population diet but would be unlikely to reduce health ...
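A hierarchical regression of the kind used in the promotions study can be sketched with statsmodels' mixed-effects API; the panel below is simulated and the variable names are invented, so only the model structure, products nested in categories via a random intercept, loosely mirrors the study.

```python
# Illustrative two-level hierarchical regression: random intercept per category.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n_cat, n_prod = 30, 20
cat = np.repeat(np.arange(n_cat), n_prod)
cat_effect = rng.normal(scale=0.5, size=n_cat)[cat]     # category-level heterogeneity
promo = rng.uniform(0, 1, size=cat.size)                # promotion frequency (invented)
uplift = 0.3 * promo + cat_effect + rng.normal(scale=0.2, size=cat.size)

df = pd.DataFrame({"uplift": uplift, "promo": promo, "category": cat})
model = smf.mixedlm("uplift ~ promo", df, groups=df["category"])
print(model.fit().summary())
```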
2. Ranking contributing areas of salt and selenium in the Lower Gunnison River Basin, Colorado, using multiple linear regression models Science.gov (United States) Linard, Joshua I. 2013-01-01 Mitigating the effects of salt and selenium on water quality in the Grand Valley and lower Gunnison River Basin in western Colorado is a major concern for land managers. Previous modeling indicated means to improve the models by including more detailed geospatial data and a more rigorous method for developing the models. After evaluating all possible combinations of geospatial variables, four multiple linear regression models resulted that could estimate irrigation-season salt yield, nonirrigation-season salt yield, irrigation-season selenium yield, and nonirrigation-season selenium yield. The adjusted R-squared and residual standard error (in units of log-transformed yield) of the models were, respectively, 0.87 and 2.03 for the irrigation-season salt model, 0.90 and 1.25 for the nonirrigation-season salt model, 0.85 and 2.94 for the irrigation-season selenium model, and 0.93 and 1.75 for the nonirrigation-season selenium model. The four models were used to estimate yields and loads from contributing areas corresponding to 12-digit hydrologic unit codes in the lower Gunnison River Basin study area. Each of the 175 contributing areas was ranked according to its estimated mean seasonal yield of salt and selenium. 3. Hourly predictive Levenberg-Marquardt ANN and multi linear regression models for predicting of dew point temperature Science.gov (United States) 2012-08-01 In this study, the ability of two models, multiple linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, to estimate the hourly dew point temperature was examined. The dew point temperature is the temperature at which water vapor in the air condenses into liquid. This temperature can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity and pressure) which could be used to predict dew point temperature initiated the practice of modeling. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual input of weather condition were employed as other input variables. Three quantitative standard statistical performance evaluation measures, i.e. the root mean squared error, mean absolute error, and absolute logarithmic Nash-Sutcliffe efficiency coefficient (|Log(NS)|), were employed to evaluate the performance of the developed models. The results showed that applying the wind vector and weather condition as input vectors along with the meteorological variables could slightly increase the predictive accuracy of the ANN and MLR models. The results also revealed that the LM-NN was superior to the MLR model, and the best performance was obtained by considering all potential input variables in terms of the different evaluation criteria. 4. Relationships between each part of the spinal curves and upright posture using multiple stepwise linear regression analysis. Science.gov (United States) Boulet, Sebastien; Boudot, Elsa; Houel, Nicolas 2016-05-03 Back pain is a common reason for consultation in primary healthcare clinical practice, and has effects on daily activities and posture. Relationships between the whole spine and upright posture, however, remain unknown. The aim of this study was to identify the relationship between each spinal curve and the centre of pressure position as well as velocity for healthy subjects. Twenty-one male subjects performed quiet stance in natural position. Each upright posture was then recorded using an optoelectronic system (Vicon Nexus) synchronized with two force plates.
At each moment, polynomial interpolations of markers attached to the spine segment were used to compute the cervical lordosis, thoracic kyphosis and lumbar lordosis angle curves. The mean of the centre of pressure position and velocity was then computed. Multiple stepwise linear regression analysis showed that the position and velocity of the centre of pressure associated with each part of the spinal curves were the best predictors of the lumbar lordosis angle (R2 = 0.45; p = 1.65 × 10-10) and the thoracic kyphosis angle (R2 = 0.54; p = 4.89 × 10-13) of healthy subjects in quiet stance. This study showed the relationships between each of the cervical, thoracic and lumbar curvatures and the centre of pressure's fluctuation during free quiet standing, using non-invasive full spinal curve exploration. Copyright © 2016 Elsevier Ltd. All rights reserved. 5. Multiple linear regression and artificial neural networks for delta-endotoxin and protease yields modelling of Bacillus thuringiensis. Science.gov (United States) Ennouri, Karim; Ben Ayed, Rayda; Triki, Mohamed Ali; Ottaviani, Ennio; Mazzarello, Maura; Hertelli, Fathi; Zouari, Nabil 2017-07-01 The aim of the present work was to develop a model that supplies accurate predictions of the yields of delta-endotoxins and proteases produced by B. thuringiensis var. kurstaki HD-1. Using available medium ingredients as variables, a mathematical method based on the Plackett-Burman design (PB) was employed to analyze and compare data generated by the bootstrap method and processed by multiple linear regression (MLR) and artificial neural networks (ANN), including multilayer perceptron (MLP) and radial basis function (RBF) models. The predictive ability of these models was evaluated by comparison of output data through the determination coefficient (R2) and mean square error (MSE) values. The results demonstrate that the prediction of the delta-endotoxin and protease yields was more accurate with the ANN technique (87 and 89% for the delta-endotoxin and protease determination coefficients, respectively) than with the MLR method (73.1 and 77.2%, respectively), suggesting that the proposed ANNs, especially MLP, are a suitable new approach for determining yields of bacterial products, allowing more appropriate predictions in a shorter time and with less engineering effort. 6. Crude Oil Price Forecasting Based on Hybridizing Wavelet Multiple Linear Regression Model, Particle Swarm Optimization Techniques, and Principal Component Analysis Science.gov (United States) Shabri, Ani; Samsudin, Ruhaidah 2014-01-01 Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regressions (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first selected to decompose an original time series into several subseries with different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to adopt the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily crude oil market, West Texas Intermediate (WTI), has been used as the case study. The time series prediction capability of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666
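The wavelet-plus-MLR idea in the crude-oil record can be illustrated with PyWavelets: decompose the series, rebuild one subseries per scale, and regress the next observation on them. The PCA and PSO stages of the published WMLR model are omitted here, and the series below is synthetic.

```python
# Illustrative wavelet decomposition followed by a linear regression on subseries.
import numpy as np
import pywt

rng = np.random.default_rng(9)
t = np.arange(512)
price = np.cumsum(rng.normal(size=t.size)) + 5 * np.sin(t / 30)  # synthetic series

coeffs = pywt.wavedec(price, "db4", level=3)
components = []
for i in range(len(coeffs)):
    # Keep one coefficient array, zero the rest: one reconstructed subseries per scale.
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(kept, "db4")[: t.size])

X = np.column_stack(components)[:-1]          # subseries values at time t
y = price[1:]                                 # next-step price as the target
X1 = np.column_stack([np.ones(len(y)), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
print("regression coefficients on the wavelet subseries:", np.round(beta, 3))
```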
7. Modeling ionospheric foF2 response during geomagnetic storms using neural network and linear regression techniques Science.gov (United States) Tshisaphungo, Mpho; Habarulema, John Bosco; McKinnell, Lee-Anne 2018-06-01 In this paper, the modeling of ionospheric foF2 changes during geomagnetic storms by means of neural network (NN) and linear regression (LR) techniques is presented. The results will lead to a valuable tool for modeling the complex ionospheric changes during disturbed days in an operational space weather monitoring and forecasting environment. Storm-time foF2 data during 1996-2014 from the Grahamstown (33.3°S, 26.5°E), South Africa, ionosonde station were used in the modeling. Six storms were reserved to validate the models and hence were not used in the modeling process. We found that the performance of both the NN and LR models is comparable during selected storms which fell within the data period (1996-2014) used in modeling. However, when validated on storm periods beyond 1996-2014, the NN model gives a better performance (R = 0.62) than the LR model (R = 0.56) for a storm that reached a minimum Dst index of -155 nT during 19-23 December 2015. We also found that both the NN and LR models are capable of capturing the ionospheric foF2 responses during two great geomagnetic storms (28 October-1 November 2003 and 6-12 November 2004), which have been demonstrated to be difficult storms to model in previous studies. 8. Spatial measurement error and correction by spatial SIMEX in linear regression models when using predicted air pollution exposures. Science.gov (United States) Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent 2016-04-01 Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
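The SIMEX logic of the preceding record fits in a short script: inflate the known measurement-error variance by factors lambda, watch the slope attenuate, and extrapolate back to lambda = -1. Everything below (error variance, sample size, true slope) is simulated for illustration; the paper's spatial version is considerably more involved.

```python
# Illustrative SIMEX correction for a slope attenuated by measurement error.
import numpy as np

rng = np.random.default_rng(10)
n, sigma_u = 500, 0.8
x_true = rng.normal(size=n)
y = 1.0 * x_true + rng.normal(scale=0.5, size=n)        # true slope = 1
w = x_true + rng.normal(scale=sigma_u, size=n)          # error-prone exposure

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    sims = [np.polyfit(w + rng.normal(scale=np.sqrt(lam) * sigma_u, size=n), y, 1)[0]
            for _ in range(200)]                        # average over added-noise draws
    slopes.append(np.mean(sims))

quad = np.polyfit(lambdas, slopes, 2)                   # quadratic extrapolant
print("naive slope:", round(slopes[0], 3),
      "| SIMEX estimate at lambda = -1:", round(np.polyval(quad, -1.0), 3))
```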
9. Association between resting-state brain network topological organization and creative ability: Evidence from a multiple linear regression model. Science.gov (United States) Jiao, Bingqing; Zhang, Delong; Liang, Aiying; Liang, Bishan; Wang, Zengjian; Li, Junchao; Cai, Yuxuan; Gao, Mengxia; Gao, Zhenni; Chang, Song; Huang, Ruiwang; Liu, Ming 2017-10-01 Previous studies have indicated a tight linkage between resting-state functional connectivity of the human brain and creative ability. This study aimed to further investigate the association between the topological organization of resting-state brain networks and creativity. Therefore, we acquired resting-state fMRI data from 22 high-creativity participants and 22 low-creativity participants (as determined by their Torrance Tests of Creative Thinking scores). We then constructed functional brain networks for each participant and assessed group differences in network topological properties before exploring the relationships between the respective network topological properties and creative ability. We identified an optimized organization of intrinsic brain networks in both groups. However, compared with low-creativity participants, high-creativity participants exhibited increased global efficiency and substantially decreased path length, suggesting increased efficiency of information transmission across brain networks in creative individuals. Using a multiple linear regression model, we further demonstrated that regional functional integration properties (i.e., betweenness centrality and global efficiency) of brain networks, particularly the default mode network (DMN) and sensorimotor network (SMN), significantly predicted individual differences in creative ability. Furthermore, the associations between network regional properties and creative performance were creativity-level dependent, where the difference in the resource control component may be important in explaining individual differences in creative performance. These findings provide novel insights into the neural substrate of creativity and may facilitate the objective identification of creative ability. Copyright © 2017 Elsevier B.V. All rights reserved. 10. 2D Quantitative Structure-Property Relationship Study of Mycotoxins by Multiple Linear Regression and Support Vector Machine Directory of Open Access Journals (Sweden) Fereshteh Shiri 2010-08-01 Full Text Available In the present work, support vector machines (SVMs) and multiple linear regression (MLR) techniques were used for quantitative structure-property relationship (QSPR) studies of the retention time (tR) in standardized liquid chromatography-UV-mass spectrometry of 67 mycotoxins (aflatoxins, trichothecenes, roquefortines and ochratoxins) based on molecular descriptors calculated from the optimized 3D structures. By applying missing-value, zero and multicollinearity tests with a cutoff value of 0.95, and the genetic algorithm method of variable selection, the most relevant descriptors were selected to build the QSPR models. MLR and SVM methods were employed to build the QSPR models. The robustness of the QSPR models was characterized by statistical validation and the applicability domain (AD). The prediction results from the MLR and SVM models are in good agreement with the experimental values. The correlation and predictability, measured by r2 and q2, are 0.931 and 0.932, respectively, for SVM and 0.923 and 0.915, respectively, for MLR. The applicability domain of the model was investigated using the Williams plot. The effects of the different descriptors on the retention times are described.
13. Multiple linear regression models for predicting chronic aluminum toxicity to freshwater aquatic organisms and developing water quality guidelines. Science.gov (United States) DeForest, David K; Brix, Kevin V; Tear, Lucinda M; Adams, William J 2018-01-01 The bioavailability of aluminum (Al) to freshwater aquatic organisms varies as a function of several water chemistry parameters, including pH, dissolved organic carbon (DOC), and water hardness. We evaluated the ability of multiple linear regression (MLR) models to predict chronic Al toxicity to a green alga (Pseudokirchneriella subcapitata), a cladoceran (Ceriodaphnia dubia), and a fish (Pimephales promelas) as a function of varying DOC, pH, and hardness conditions. The MLR models predicted toxicity values that were within a factor of 2 of observed values
in 100% of the cases for P. subcapitata (10 and 20% effective concentrations [EC10s and EC20s]), 91% of the cases for C. dubia (EC10s and EC20s), and 95% (EC10s) and 91% (EC20s) of the cases for P. promelas. The MLR models were then applied to all species with Al toxicity data to derive species and genus sensitivity distributions that could be adjusted as a function of varying DOC, pH, and hardness conditions (the P. subcapitata model was applied to algae and macrophytes, the C. dubia model was applied to invertebrates, and the P. promelas model was applied to fish). Hazardous concentrations to 5% of the species or genera were then derived in 2 ways: 1) fitting a log-normal distribution to species-mean EC10s for all species (following the European Union methodology), and 2) fitting a triangular distribution to genus-mean EC20s for animals only (following the US Environmental Protection Agency methodology). Overall, MLR-based models provide a viable approach for deriving Al water quality guidelines that vary as a function of DOC, pH, and hardness conditions, and are a significant improvement over bioavailability corrections based on single parameters. Environ Toxicol Chem 2018;37:80-90. © 2017 SETAC. 14. Comparison of linear and zero-inflated negative binomial regression models for appraisal of risk factors associated with dental caries. Science.gov (United States) Batra, Manu; Shah, Aasim Farooq; Rajput, Prashant; Shah, Ishrat Aasim 2016-01-01 Dental caries among children has been described as a pandemic disease with a multifactorial nature. Various sociodemographic factors and oral hygiene practices are commonly tested for their influence on dental caries. In recent years, a statistical model that allows for covariate adjustment has been developed and is commonly referred to as the zero-inflated negative binomial (ZINB) model. The aim was to compare the fit of two models, the conventional linear regression (LR) model and the ZINB model, in assessing the risk factors associated with dental caries. A cross-sectional survey was conducted on 1138 12-year-old school children in Moradabad Town, Uttar Pradesh, during February-August 2014. Selected participants were interviewed using a questionnaire. Dental caries was assessed by recording the decayed, missing, or filled teeth (DMFT) index. To assess the risk factors associated with dental caries in children, two approaches were applied - the LR model and the ZINB model. The prevalence of caries-free subjects was 24.1%, and the mean DMFT was 3.4 ± 1.8. In the LR model, all the variables were statistically significant, whereas in the ZINB model, the negative binomial part showed place of residence, father's education level, tooth brushing frequency, and dental visits to be statistically significant, implying that the likelihood of being caries-free (DMFT = 0) increases for children who live in urban areas, whose father is university educated, who brush twice a day and who have ever visited a dentist. The current study reports that the LR model is a poorly fitted model and may lead to spurious conclusions, whereas the ZINB model has shown better goodness of fit (Akaike information criterion values - LR: 3.94; ZINB: 2.39) and can be preferred if high variance and an excess of zeroes are present.
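statsmodels ships a zero-inflated negative binomial estimator matching the modelling choice in the caries study; the sketch below runs it on simulated DMFT-like counts with invented covariates, so the coefficients (and any convergence warnings) are artifacts of the toy data.

```python
# Illustrative ZINB fit on simulated, zero-inflated count data.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(11)
n = 1138
brushing = rng.integers(0, 2, size=n)                   # 1 = brushes twice a day (invented)
urban = rng.integers(0, 2, size=n)                      # 1 = urban residence (invented)
mu = np.exp(1.0 - 0.4 * brushing - 0.2 * urban)
counts = rng.negative_binomial(5, 5.0 / (5.0 + mu))     # overdispersed DMFT-like counts
counts[rng.random(n) < 0.24] = 0                        # structural zeros (caries-free)

X = sm.add_constant(np.column_stack([brushing, urban]))
model = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=X)
result = model.fit(maxiter=200, disp=False)
print(result.summary())
```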
15. Recursive and non-linear logistic regression: moving on from the original EuroSCORE and EuroSCORE II methodologies. Science.gov (United States) Poullis, Michael 2014-11-01 EuroSCORE II, despite improving on the original EuroSCORE system, has not solved all the calibration and predictability issues. Recursive, non-linear and mixed recursive and non-linear regression analyses were assessed with regard to the sensitivity, specificity and predictability of the original EuroSCORE and EuroSCORE II systems. The original logistic EuroSCORE, EuroSCORE II and recursive, non-linear and mixed recursive and non-linear regression analyses of these risk models were assessed via receiver operating characteristic (ROC) curves and Hosmer-Lemeshow statistic analysis with regard to the accuracy of predicting in-hospital mortality. Analysis was performed for isolated coronary artery bypass grafts (CABGs) (n = 2913), aortic valve replacement (AVR) (n = 814), mitral valve surgery (n = 340), combined AVR and CABG (n = 517), aortic (n = 350), miscellaneous cases (n = 642), and combinations of the above cases (n = 5576). The original EuroSCORE had an ROC below 0.7 for isolated AVR and combined AVR and CABG. None of the methods described increased the ROC above 0.7. The EuroSCORE II risk model had an ROC below 0.7 for isolated AVR only. Recursive regression, non-linear regression, and mixed recursive and non-linear regression all increased the ROC above 0.7 for isolated AVR. The original EuroSCORE had a Hosmer-Lemeshow statistic that was above 0.05 for all patients and the subgroups analysed. All of the techniques markedly increased the Hosmer-Lemeshow statistic. The EuroSCORE II risk model had a Hosmer-Lemeshow statistic that was significant for all patients (P ...); linear regression failed to improve on the original Hosmer-Lemeshow statistic. The mixed recursive and non-linear regression using the EuroSCORE II risk model was the only model that produced an ROC of 0.7 or above for all patients and procedures and had a Hosmer-Lemeshow statistic that was highly non-significant. The original EuroSCORE and the EuroSCORE II risk models do not have adequate ROC and Hosmer 16. Stochastic lumping analysis for linear kinetics and its application to the fluctuation relations between hierarchical kinetic networks Energy Technology Data Exchange (ETDEWEB) Deng, De-Ming; Chang, Cheng-Hung [Institute of Physics, National Chiao Tung University, Hsinchu 300, Taiwan (China)] 2015-05-14 Conventional studies of biomolecular behaviors rely largely on the construction of kinetic schemes.
17. Stochastic lumping analysis for linear kinetics and its application to the fluctuation relations between hierarchical kinetic networks. Science.gov (United States) Deng, De-Ming; Chang, Cheng-Hung 2015-05-14 Conventional studies of biomolecular behaviors rely largely on the construction of kinetic schemes. Since the selection of these networks is not unique, a concern is raised whether and under which conditions hierarchical schemes can reveal the same experimentally measured fluctuating behaviors and unique fluctuation related physical properties. To clarify these questions, we introduce stochasticity into the traditional lumping analysis, generalize it from rate equations to chemical master equations and stochastic differential equations, and extract the fluctuation relations between kinetically and thermodynamically equivalent networks under intrinsic and extrinsic noises. The results provide a theoretical basis for the legitimate use of low-dimensional models in the studies of macromolecular fluctuations and, more generally, for exploring stochastic features in different levels of contracted networks in chemical and biological kinetic systems.

18. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing. Science.gov (United States) Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L 2018-02-01 A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
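A minimal sketch of the Bland-Altman quantities discussed in entry 18, on synthetic assay data (invented names and error magnitudes; assay B is given deliberate constant and proportional error):

```python
# Sketch: Bland-Altman bias and limits of agreement for two assays
# measuring the same quantity, e.g. variant allele fractions from two
# NGS pipelines. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
truth = rng.uniform(0.05, 0.95, 40)
assay_a = truth + rng.normal(0, 0.02, truth.size)
assay_b = 0.02 + 1.05 * truth + rng.normal(0, 0.02, truth.size)

mean_ab = (assay_a + assay_b) / 2
diff_ab = assay_b - assay_a

bias = diff_ab.mean()                       # constant error estimate
sd = diff_ab.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

# A non-zero slope of difference vs. mean flags proportional error.
slope, _ = np.polyfit(mean_ab, diff_ab, 1)

print(f"bias = {bias:.4f}, LoA = ({loa[0]:.4f}, {loa[1]:.4f}), "
      f"diff-vs-mean slope = {slope:.3f}")
```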
19. Comparison of two-concentration with multi-concentration linear regressions: Retrospective data analysis of multiple regulated LC-MS bioanalytical projects. Science.gov (United States) Musuku, Adrien; Tan, Aimin; Awaiye, Kayode; Trabelsi, Fethi 2013-09-01 Linear calibration is usually performed using eight to ten calibration concentration levels in regulated LC-MS bioanalysis because a minimum of six are specified in regulatory guidelines. However, we have previously reported that two-concentration linear calibration is as reliable as or even better than using multiple concentrations. The purpose of this research is to compare two-concentration with multiple-concentration linear calibration through retrospective data analysis of multiple bioanalytical projects that were conducted in an independent regulated bioanalytical laboratory. A total of 12 bioanalytical projects were randomly selected: two validations and two studies for each of the three most commonly used types of sample extraction methods (protein precipitation, liquid-liquid extraction, solid-phase extraction). When the existing data were retrospectively linearly regressed using only the lowest and the highest concentration levels, no extra batch failure/QC rejection was observed and the differences in accuracy and precision between the original multi-concentration regression and the new two-concentration linear regression are negligible. Specifically, the differences in overall mean apparent bias (square root of mean individual bias squares) are within the ranges of -0.3% to 0.7% and 0.1-0.7% for the validations and studies, respectively. The differences in mean QC concentrations are within the ranges of -0.6% to 1.8% and -0.8% to 2.5% for the validations and studies, respectively. The differences in %CV are within the ranges of -0.7% to 0.9% and -0.3% to 0.6% for the validations and studies, respectively. The average differences in study sample concentrations are within the range of -0.8% to 2.3%. With two-concentration linear regression, an average of 13% of time and cost could have been saved for each batch together with 53% of saving in the lead-in for each project (the preparation of working standard solutions, spiking, and aliquoting). Furthermore

20. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra. Science.gov (United States) Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C 2010-09-21 We present calculations of formation energies of defects in an ionic solid (Al2O3) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.
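To make the two-concentration-versus-multi-concentration comparison of entry 19 above concrete, here is a hedged sketch on synthetic calibration data (illustrative concentrations and noise; the regulated workflow's 1/x² weighting and duplicate standards are deliberately omitted):

```python
# Sketch: back-calculate a QC sample from a full 8-point calibration line
# vs. a line through only the lowest and highest standards. Synthetic
# responses with ~3% CV.
import numpy as np

rng = np.random.default_rng(2)
nominal = np.array([1, 2, 5, 10, 50, 100, 500, 1000.0])     # ng/mL
response = 0.013 * nominal * rng.normal(1.0, 0.03, nominal.size)

slope_multi, icpt_multi = np.polyfit(nominal, response, 1)  # all standards

slope_two = (response[-1] - response[0]) / (nominal[-1] - nominal[0])
icpt_two = response[0] - slope_two * nominal[0]             # two standards

qc_response = 0.013 * 250 * 1.02                            # mid-range QC
for name, m, b in [("multi-point", slope_multi, icpt_multi),
                   ("two-point  ", slope_two, icpt_two)]:
    print(f"{name} back-calculated QC = "
          f"{(qc_response - b) / m:7.1f} ng/mL (nominal 250)")
```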
1. Using hierarchical linear models to test differences in Swedish results from OECD's PISA 2003: Integrated and subject-specific science education Directory of Open Access Journals (Sweden) Maria Åström 2012-06-01 Full Text Available The possible effects of different organisations of the science curriculum in schools participating in PISA 2003 are tested with a hierarchical linear model (HLM) of two levels. The analysis is based on science results. Swedish schools are free to choose how they organise the science curriculum. They may choose to work subject-specifically (with Biology, Chemistry and Physics), integrated (with Science), or to mix these two. In this study, all three ways of organising science classes in compulsory school are present to some degree. None of the different ways of organising science education displayed significantly better student results in scientific literacy as measured in PISA 2003. The HLM model used variables of gender, country of birth, home language, preschool attendance, an economic, social and cultural index as well as the teaching organisation.

2. COVAR: Computer Program for Multifactor Relative Risks and Tests of Hypotheses Using a Variance-Covariance Matrix from Linear and Log-Linear Regression Directory of Open Access Journals (Sweden) Leif E. Peterson 1997-11-01 Full Text Available A computer program for multifactor relative risks, confidence limits, and tests of hypotheses using regression coefficients and a variance-covariance matrix obtained from a previous additive or multiplicative regression analysis is described in detail. Data used by the program can be stored and input from an external disk-file or entered via the keyboard. The output contains a list of the input data, point estimates of single or joint effects, confidence intervals and tests of hypotheses based on a minimum modified chi-square statistic. Availability of the program is also discussed.

3. Comparing lagged linear correlation, lagged regression, Granger causality, and vector autoregression for uncovering associations in EHR data. Science.gov (United States) Levine, Matthew E; Albers, David J; Hripcsak, George 2016-01-01 Time series analysis methods have been shown to reveal clinical and biological associations in data collected in the electronic health record. We wish to develop reliable high-throughput methods for identifying adverse drug effects that are easy to implement and produce readily interpretable results. To move toward this goal, we used univariate and multivariate lagged regression models to investigate associations between twenty pairs of drug orders and laboratory measurements. Multivariate lagged regression models exhibited higher sensitivity and specificity than univariate lagged regression in the 20 examples, and incorporating autoregressive terms for labs and drugs produced more robust signals in cases of known associations among the 20 example pairings. Moreover, including inpatient admission terms in the model attenuated the signals for some cases of unlikely associations, demonstrating how multivariate lagged regression models' explicit handling of context-based variables can provide a simple way to probe for health-care processes that confound analyses of EHR data.

4. Multiple linear regression analysis of bacterial deposition to polyurethane coatings after conditioning film formation in the marine environment NARCIS (Netherlands) Bakker, D.P.; Busscher, H.J.; Zanten, J. van; Vries, J. de; Klijnstra, J.W.; Mei, H.C. van der 2004-01-01 Many studies have shown relationships of substratum hydrophobicity, charge or roughness with bacterial adhesion, although bacterial adhesion is governed by an interplay of different physico-chemical properties and multiple regression analysis would be more suitable to reveal mechanisms of bacterial adhesion.
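Entry 3's univariate-versus-multivariate lagged regression comparison can be sketched as follows (synthetic drug/lab series with a true lag-1 drug effect; variable names invented; statsmodels assumed available):

```python
# Sketch: univariate vs. multivariate lagged regression for one
# drug-lab pair, in the spirit of entry 3. Synthetic daily series.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
drug = (rng.random(n) < 0.2).astype(float)
lab = np.zeros(n)
for t in range(1, n):
    lab[t] = 0.6 * lab[t - 1] + 0.8 * drug[t - 1] + rng.normal(0, 0.5)

y = lab[1:]
lag_drug, lag_lab = drug[:-1], lab[:-1]

uni = sm.OLS(y, sm.add_constant(lag_drug)).fit()
multi = sm.OLS(y, sm.add_constant(np.column_stack([lag_drug, lag_lab]))).fit()

# The autoregressive lab term sharpens the drug signal, as in the paper.
print("univariate   drug coefficient:", round(float(uni.params[1]), 3))
print("multivariate drug coefficient:", round(float(multi.params[1]), 3))
```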
5. Multiple linear regression analysis of bacterial deposition to polyurethane coating after conditioning film formation in the marine environment NARCIS (Netherlands) Bakker, Dewi P; Busscher, Henk J; van Zanten, Joyce; de Vries, Jacob; Klijnstra, Job W; van der Mei, Henny C Many studies have shown relationships of substratum hydrophobicity, charge or roughness with bacterial adhesion, although bacterial adhesion is governed by an interplay of different physico-chemical properties and multiple regression analysis would be more suitable to reveal mechanisms of bacterial adhesion.

6. Introduction into Hierarchical Matrices KAUST Repository Litvinenko, Alexander 2013-12-05 Hierarchical matrices allow us to reduce computational storage and cost from cubic to almost linear. This technique can be applied for solving PDEs, integral equations, matrix equations and approximation of large covariance and precision matrices.

7. Introduction into Hierarchical Matrices KAUST Repository Litvinenko, Alexander 2013-01-01 Hierarchical matrices allow us to reduce computational storage and cost from cubic to almost linear. This technique can be applied for solving PDEs, integral equations, matrix equations and approximation of large covariance and precision matrices.

8. Partial F-tests with multiply imputed data in the linear regression framework via coefficient of determination. Science.gov (United States) Chaurasia, Ashok; Harel, Ofer 2015-02-10 Tests for regression coefficients such as global, local, and partial F-tests are common in applied research. In the framework of multiple imputation, there are several papers addressing tests for regression coefficients. However, for simultaneous hypothesis testing, the existing methods are computationally intensive because they involve calculation with vectors and (inversion of) matrices. In this paper, we propose a simple method based on the scalar entity, coefficient of determination, to perform (global, local, and partial) F-tests with multiply imputed data. The proposed method is evaluated using simulated data and applied to suicide prevention data. Copyright © 2014 John Wiley & Sons, Ltd.

9. Multiple Linear Regression Analysis of Factors Affecting Real Property Price Index From Case Study Research In Istanbul/Turkey Science.gov (United States) Denli, H. H.; Koc, Z. 2015-12-01 Estimating real property values according to standards is difficult to apply consistently across time and location. Regression analysis constructs mathematical models which describe or explain relationships that may exist between variables. The problem of identifying price differences of properties to obtain a price index can be converted into a regression problem, and standard techniques of regression analysis can be used to estimate the index. When regression analysis is applied to real estate valuation, with properties as presented in the marketing process with their current characteristics and quantifiers, the method helps to find the factors or variables that are effective in the formation of the value. In this study, prices of housing for sale in Zeytinburnu, a district in Istanbul, are associated with their characteristics to find a price index, based on information received from a real estate web page. The variables used for the analysis are age, size in m², number of floors in the building, the floor on which the unit is located, and number of rooms. The price of the estate represents the dependent variable, whereas the rest are independent variables. Prices from 60 real estate listings have been used for the analysis.
Locations with the same price values were found and plotted on the map, and equivalence curves were drawn to identify zones of equal value as lines.

10. Improving sensitivity of linear regression-based cell type-specific differential expression deconvolution with per-gene vs. global significance threshold. Science.gov (United States) Glass, Edmund R; Dozmorov, Mikhail G 2016-10-06 The goal of many human disease-oriented studies is to detect molecular mechanisms different between healthy controls and patients. Yet, commonly used gene expression measurements from blood samples suffer from variability of cell composition. This variability hinders the detection of differentially expressed genes and is often ignored. Combined with cell counts, heterogeneous gene expression may provide deeper insights into the gene expression differences on the cell type-specific level. Published computational methods use linear regression to estimate cell type-specific differential expression, and a global cutoff to judge significance, such as False Discovery Rate (FDR). Yet, they do not consider many artifacts hidden in high-dimensional gene expression data that may negatively affect linear regression. In this paper we quantify the parameter space affecting the performance of linear regression (sensitivity of cell type-specific differential expression detection) on a per-gene basis. We evaluated the effect of sample sizes, cell type-specific proportion variability, and mean squared error on sensitivity of cell type-specific differential expression detection using linear regression. Each parameter affected variability of cell type-specific expression estimates and, subsequently, the sensitivity of differential expression detection. We provide the R package, LRCDE, which performs linear regression-based cell type-specific differential expression (deconvolution) detection on a gene-by-gene basis. Accounting for variability around cell type-specific gene expression estimates, it computes per-gene t-statistics of differential detection, p-values, t-statistic-based sensitivity, group-specific mean squared error, and several gene-specific diagnostic metrics. The sensitivity of linear regression-based cell type-specific differential expression detection differed for each gene as a function of mean squared error, per group sample sizes, and variability of the proportions.

11. Penalized linear regression for discrete ill-posed problems: A hybrid least-squares and mean-squared error approach KAUST Repository Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y. 2016-01-01 This paper proposes a new approach to find the regularization parameter for linear least-squares discrete ill-posed problems. In the proposed approach, an artificial perturbation matrix with a bounded norm is forced into the discrete ill-posed model matrix.

12. Genome-scale regression analysis reveals a linear relationship for promoters and enhancers after combinatorial drug treatment KAUST Repository Rapakoulia, Trisevgeni 2017-08-09 Motivation: Drug combination therapy for treatment of cancers and other multifactorial diseases has the potential of increasing the therapeutic effect, while reducing the likelihood of drug resistance. In order to reduce time and cost spent in comprehensive screens, methods are needed which can model additive effects of possible drug combinations.
Results: We here show that the transcriptional response to combinatorial drug treatment at promoters, as measured by single molecule CAGE technology, is accurately described by a linear combination of the responses of the individual drugs at a genome wide scale. We also find that the same linear relationship holds for transcription at enhancer elements. We conclude that the described approach is promising for eliciting the transcriptional response to multidrug treatment at promoters and enhancers in an unbiased genome wide way, which may minimize the need for exhaustive combinatorial screens.

13. Comparison of multiple linear regression, partial least squares and artificial neural networks for prediction of gas chromatographic relative retention times of trimethylsilylated anabolic androgenic steroids. Science.gov (United States) Fragkaki, A G; Farmaki, E; Thomaidis, N; Tsantili-Kakoulidou, A; Angelis, Y S; Koupparis, M; Georgakopoulos, C 2012-09-21 The comparison among different modelling techniques, such as multiple linear regression, partial least squares and artificial neural networks, has been performed in order to construct and evaluate models for prediction of gas chromatographic relative retention times of trimethylsilylated anabolic androgenic steroids. The performance of the quantitative structure-retention relationship study, using the multiple linear regression and partial least squares techniques, has been previously conducted. In the present study, artificial neural network models were constructed and used for the prediction of relative retention times of anabolic androgenic steroids, while their efficiency is compared with that of the models derived from the multiple linear regression and partial least squares techniques. For overall ranking of the models, a novel procedure [Trends Anal. Chem. 29 (2010) 101-109] based on sum of ranking differences was applied, which permits the best model to be selected. The suggested models are considered useful for the estimation of relative retention times of designer steroids for which no analytical data are available. Copyright © 2012 Elsevier B.V. All rights reserved.
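A compact sketch of the three-way comparison in entry 13, using scikit-learn on synthetic descriptor data (the real study used molecular descriptors of trimethylsilylated steroids; everything below is an invented stand-in):

```python
# Sketch: compare multiple linear regression, PLS and a small neural
# network on a QSRR-style retention prediction task. Synthetic data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.normal(size=(120, 20))                  # 20 invented descriptors
beta = rng.normal(size=20)
y = X @ beta + 0.3 * np.tanh(X[:, 0] * X[:, 1]) + rng.normal(0, 0.5, 120)

models = {
    "MLR": LinearRegression(),
    "PLS": PLSRegression(n_components=5),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(16,),
                                      max_iter=5000, random_state=0)),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.3f}")
```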
14. Evaluation of heat transfer mathematical models and multiple linear regression to predict the inside variables in semi-solar greenhouse Directory of Open Access Journals (Sweden) M Taki 2017-05-01 Full Text Available Introduction Controlling greenhouse microclimate not only influences the growth of plants, but is also critical in the spread of diseases inside the greenhouse. The microclimate parameters are inside air, greenhouse roof and soil temperature, relative humidity and solar radiation intensity. Predicting the microclimate conditions inside a greenhouse and enabling the use of automatic control systems are the two main objectives of a greenhouse climate model. The microclimate inside a greenhouse can be predicted by conducting experiments or by using simulation. Static and dynamic models are used for this purpose as a function of the meteorological conditions and the parameters of the greenhouse components. Several studies up to 2015 attempted to simulate and predict the inside variables in different greenhouse structures. Simulation, however, often has difficulty predicting the inside climate of a greenhouse, and the simulation errors reported in the literature are high. The main objective of this paper is to compare heat transfer and regression models for predicting inside air and roof temperature in a semi-solar greenhouse at Tabriz University. Materials and Methods In this study, a semi-solar greenhouse was designed and constructed at the North-West of Iran in Azerbaijan Province (geographical location of 38°10′ N and 46°18′ E with elevation of 1364 m above the sea level). The shape and orientation of the greenhouse were selected from common greenhouse shapes so as to receive maximum solar radiation throughout the year. An internal thermal screen and a cement north wall were also used to store heat and prevent heat loss during the cold period of the year. We therefore call this structure a 'semi-solar' greenhouse. It was covered with glass (4 mm thickness). It occupies a surface of approximately 15.36 m² and a volume of 26.4 m³. The orientation of this greenhouse was East-West and perpendicular to the direction of the prevailing wind.

15. Analyzing Economic Attainment Patterns of Foreign Born Latin American Male Immigrants to The United States: an Example Using Hierarchical Linear Modeling Directory of Open Access Journals (Sweden) David J. Gotcher 2001-09-01 Full Text Available The paper presents the research which examines and endeavors to account for variation in the economic attainments of immigrants to the United States from Latin America, through the use of Hierarchical Linear Modeling. When analyzing this variation, researchers typically choose between two competing explanations. Human capital theory contends that variation in economic attainment is a product of different characteristics of individuals. Social capital theory contends that variation in economic attainment is a product of differences in characteristics of the societies from which the workers come. The author's central thesis is that we need not choose between human and social capital theories, that we can rely on both theoretical approaches, and that it is an empirical and not a theoretical question how much variation can be explained by one set of factors versus the other. The real problem then is to build an appropriate methodology that allows us to partition the variation in economic attainments, identifying how much is explained by individual and how much by group characteristics. Using a multi-level modeling technique, this research presents such a methodology.

16. Preoperative factors affecting cost and length of stay for isolated off-pump coronary artery bypass grafting: hierarchical linear model analysis. Science.gov (United States) Shinjo, Daisuke; Fushimi, Kiyohide 2015-11-17 To determine the effect of preoperative patient and hospital factors on resource use, cost and length of stay (LOS) among patients undergoing off-pump coronary artery bypass grafting (OPCAB). Observational retrospective study. Data from the Japanese Administrative Database. Patients who underwent isolated, elective OPCAB between April 2011 and March 2012. The primary outcomes of this study were inpatient cost and LOS associated with OPCAB. A two-level hierarchical linear model was used to examine the effects of patient and hospital characteristics on inpatient costs and LOS. The independent variables were patient and hospital factors. We identified 2491 patients who underwent OPCAB at 268 hospitals. The mean cost of OPCAB was $40 665 ± 7774, and the mean LOS was 23.4 ± 8.2 days. The study found that select patient factors and certain comorbidities were associated with a high cost and long LOS. A high hospital OPCAB volume was associated with a low cost (-6.6%; p=0.024) as well as a short LOS (-17.6%, p < 0.001). Preoperative patient and hospital factors thus affect OPCAB cost and LOS. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
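The two-level hierarchical linear model of entry 16 can be sketched with a random hospital intercept (synthetic data; invented effect sizes; statsmodels assumed available):

```python
# Sketch: two-level hierarchical model of (log) cost with a random
# intercept per hospital. Synthetic data, invented variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_hosp, per_hosp = 40, 30
hospital = np.repeat(np.arange(n_hosp), per_hosp)
volume = np.repeat(rng.uniform(10, 120, n_hosp), per_hosp)  # annual volume
age = rng.normal(68, 8, n_hosp * per_hosp)
hosp_re = np.repeat(rng.normal(0, 0.20, n_hosp), per_hosp)  # level-2 noise
log_cost = 10.5 + 0.004 * age - 0.002 * volume + hosp_re \
    + rng.normal(0, 0.15, n_hosp * per_hosp)

df = pd.DataFrame({"log_cost": log_cost, "age": age,
                   "volume": volume, "hospital": hospital})
fit = smf.mixedlm("log_cost ~ age + volume", df,
                  groups=df["hospital"]).fit()
print(fit.summary())   # fixed effects plus the hospital-level variance
```

The random intercept absorbs between-hospital clustering, so the fixed-effect estimates for patient and hospital covariates are not distorted by it.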
17. Penalized linear regression for discrete ill-posed problems: A hybrid least-squares and mean-squared error approach KAUST Repository Suliman, Mohamed Abdalla Elhag 2016-12-19 This paper proposes a new approach to find the regularization parameter for linear least-squares discrete ill-posed problems. In the proposed approach, an artificial perturbation matrix with a bounded norm is forced into the discrete ill-posed model matrix. This perturbation is introduced to enhance the singular-value (SV) structure of the matrix and hence to provide a better solution. The proposed approach is derived to select the regularization parameter in a way that minimizes the mean-squared error (MSE) of the estimator. Numerical results demonstrate that the proposed approach outperforms a set of benchmark methods in most cases when applied to different scenarios of discrete ill-posed problems. Jointly, the proposed approach enjoys the lowest run-time and offers the highest level of robustness amongst all the tested methods.

18. Robust Bayesian linear regression with application to an analysis of the CODATA values for the Planck constant Science.gov (United States) Wübbeler, Gerd; Bodnar, Olha; Elster, Clemens 2018-02-01 Weighted least-squares estimation is commonly applied in metrology to fit models to measurements that are accompanied with quoted uncertainties. The weights are chosen in dependence on the quoted uncertainties. However, when data and model are inconsistent in view of the quoted uncertainties, this procedure does not yield adequate results. When it can be assumed that all uncertainties ought to be rescaled by a common factor, weighted least-squares estimation may still be used, provided that a simple correction of the uncertainty obtained for the estimated model is applied. We show that these uncertainties and credible intervals are robust, as they do not rely on the assumption of a Gaussian distribution of the data. Hence, common software for weighted least-squares estimation may still safely be employed in such a case, followed by a simple modification of the uncertainties obtained by that software. We also provide means of checking the assumptions of such an approach. The Bayesian regression procedure is applied to analyze the CODATA values for the Planck constant published over the past decades in terms of three different models: a constant model, a straight line model and a spline model. Our results indicate that the CODATA values may not have yet stabilized.
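One common way to implement the "rescale all uncertainties by a common factor" correction described in entry 18 is via the Birge ratio. The sketch below is my own reading, not the paper's code, and the input values are illustrative:

```python
# Sketch: weighted least-squares fit of a constant model to values with
# quoted uncertainties, then a common-factor (Birge-ratio) rescaling of
# the result's uncertainty when the data are over-dispersed.
import numpy as np

x = np.array([6.62606957, 6.62607015, 6.62606979, 6.62606886])  # 1e-34 J s
u = np.array([0.00000029, 0.00000013, 0.00000030, 0.00000060])

w = 1.0 / u**2
xbar = np.sum(w * x) / np.sum(w)       # WLS estimate of the constant
u_xbar = np.sqrt(1.0 / np.sum(w))      # uncertainty if data are consistent

birge = np.sqrt(np.sum(w * (x - xbar) ** 2) / (x.size - 1))
u_adj = u_xbar * max(birge, 1.0)       # rescale only when birge > 1

print(f"estimate {xbar:.8f}, naive u {u_xbar:.8f}, "
      f"Birge ratio {birge:.2f}, adjusted u {u_adj:.8f}")
```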
19. Application of single-step genomic best linear unbiased prediction with a multiple-lactation random regression test-day model for Japanese Holsteins. Science.gov (United States) Baba, Toshimi; Gotoh, Yusaku; Yamaguchi, Satoshi; Nakagawa, Satoshi; Abe, Hayato; Masuda, Yutaka; Kawahara, Takayoshi 2017-08-01 This study aimed to evaluate a validation reliability of single-step genomic best linear unbiased prediction (ssGBLUP) with a multiple-lactation random regression test-day model and investigate an effect of adding genotyped cows on the reliability. Two data sets for test-day records from the first three lactations were used: full data from February 1975 to December 2015 (60 850 534 records from 2 853 810 cows) and reduced data cut off in 2011 (53 091 066 records from 2 502 307 cows). We used marker genotypes of 4480 bulls and 608 cows. Genomic enhanced breeding values (GEBV) of 305-day milk yield in all the lactations were estimated for at least 535 young bulls using two marker data sets: bull genotypes only, and both bull and cow genotypes. The realized reliability (R²) from linear regression analysis was used as an indicator of validation reliability. Using only genotyped bulls, R² ranged from 0.41 to 0.46 and was always higher than that of parent averages. Very similar R² values were observed when genotyped cows were added. An application of ssGBLUP to a multiple-lactation random regression model is feasible, and adding a limited number of genotyped cows has no significant effect on the reliability of GEBV for genotyped bulls. © 2016 Japanese Society of Animal Science.

20. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel. Science.gov (United States) Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M 2012-08-01 This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPaK Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
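The Monte Carlo procedure of entry 20 translates directly from Excel/SOLVER to Python: fit once, build "virtual" data sets from resampled residuals, refit, and read percentile confidence intervals. A sketch on synthetic growth-curve data (scipy assumed available; the logistic model and noise level are invented):

```python
# Sketch: Monte Carlo parameter confidence intervals for a non-linear
# regression, mirroring the spreadsheet procedure in entry 20.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a, k, t0):
    return a / (1 + np.exp(-k * (t - t0)))

rng = np.random.default_rng(6)
t = np.linspace(0, 24, 25)
y = logistic(t, 9.0, 0.5, 10.0) + rng.normal(0, 0.3, t.size)

popt, _ = curve_fit(logistic, t, y, p0=(8, 0.4, 8))
resid = y - logistic(t, *popt)

boot = []                     # refit to 'virtual' data sets
for _ in range(200):
    y_virtual = logistic(t, *popt) + rng.choice(resid, resid.size,
                                                replace=True)
    p, _ = curve_fit(logistic, t, y_virtual, p0=popt)
    boot.append(p)
boot = np.array(boot)

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
for name, est, l, h in zip(("a", "k", "t0"), popt, lo, hi):
    print(f"{name}: {est:.3f}  95% CI ({l:.3f}, {h:.3f})")
```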
1. Reconstruction of Local Sea Levels at South West Pacific Islands—A Multiple Linear Regression Approach (1988-2014) Science.gov (United States) Kumar, V.; Melet, A.; Meyssignac, B.; Ganachaud, A.; Kessler, W. S.; Singh, A.; Aucan, J. 2018-02-01 Rising sea levels are a critical concern in small island nations. The problem is especially serious in the western south Pacific, where the total sea level rise over the last 60 years has been up to 3 times the global average. In this study, we aim at reconstructing sea levels at selected sites in the region (Suva, Lautoka—Fiji, and Nouméa—New Caledonia) as a multilinear regression (MLR) of atmospheric and oceanic variables. We focus on sea level variability at interannual-to-interdecadal time scales, and trend over the 1988-2014 period. Local sea levels are first expressed as a sum of steric and mass changes. Then a dynamical approach is used based on wind stress curl as a proxy for the thermosteric component, as wind stress curl anomalies can modulate the thermocline depth and resultant sea levels via Rossby wave propagation. Statistically significant predictors among wind stress curl, halosteric sea level, zonal/meridional wind stress components, and sea surface temperature are used to construct a MLR model simulating local sea levels. Although we are focusing on the local scale, the global mean sea level needs to be adjusted for. Our reconstructions provide insights on key drivers of sea level variability at the selected sites, showing that while local dynamics and the global signal modulate sea level to a given extent, most of the variance is driven by regional factors. On average, the MLR model is able to reproduce 82% of the variance in island sea level, and could be used to derive local sea level projections via downscaling of climate models.

2. Development of a predictive model for distribution coefficient (Kd) of 137Cs and 60Co in marine sediments using multiple linear regression analysis International Nuclear Information System (INIS) Kumar, Ajay; Ravi, P.M.; Guneshwar, S.L.; Rout, Sabyasachi; Mishra, Manish K.; Pulhani, Vandana; Tripathi, R.M. 2018-01-01 Numerous common methods (batch laboratory, the column laboratory, field-batch method, field modeling and Koc method) are used frequently for determination of Kd values. Recently, multiple regression models are considered as the new best estimates for predicting the Kd of radionuclides in the environment. It is also a well-known fact that the Kd value is highly influenced by the physico-chemical properties of sediment. Due to the significant variability in influencing parameters, the measured Kd values can range over several orders of magnitude under different environmental conditions. The aim of this study is to develop a predictive model for Kd values of 137Cs and 60Co based on the sediment properties using multiple linear regression analysis

3. Comparison of height-diameter models based on geographically weighted regressions and linear mixed modelling applied to large scale forest inventory data Energy Technology Data Exchange (ETDEWEB) Quirós Segovia, M.; Condés Ruiz, S.; Drápela, K. 2016-07-01 Aim of the study: The main objective of this study was to test Geographically Weighted Regression (GWR) for developing height-diameter curves for forests on a large scale and to compare it with Linear Mixed Models (LMM). Area of study: Monospecific stands of Pinus halepensis Mill. located in the region of Murcia (Southeast Spain). Materials and Methods: The dataset consisted of 230 sample plots (2582 trees) from the Third Spanish National Forest Inventory (SNFI) randomly split into training data (152 plots) and validation data (78 plots). Two different methodologies were used for modelling local (Petterson) and generalized height-diameter relationships (Cañadas I): GWR, with different bandwidths, and linear mixed models. Finally, the quality of the estimated models was compared throughout statistical analysis. Main results: In general, both LMM and GWR provide better prediction capability when applied to a generalized height-diameter function than when applied to a local one, with R2 values increasing from around 0.6 to 0.7 in the model validation. Bias and RMSE were also lower for the generalized function. However, error analysis showed that there were no large differences between these two methodologies, evidencing that GWR provides results which are as good as the more frequently used LMM methodology, at least when no additional measurements are available for calibrating. Research highlights: GWR is a type of spatial analysis for exploring spatially heterogeneous processes. GWR can model spatial variation in tree height-diameter relationship and its regression quality is comparable to LMM.
The advantage of GWR over LMM is the possibility to determine the spatial location of every parameter without additional measurements. Abbreviations: GWR (Geographically Weighted Regression); LMM (Linear Mixed Model); SNFI (Spanish National Forest Inventory). (Author)

4. Reliability of the Load-Velocity Relationship Obtained Through Linear and Polynomial Regression Models to Predict the One-Repetition Maximum Load. Science.gov (United States) Pestaña-Melero, Francisco Luis; Haff, G Gregory; Rojas, Francisco Javier; Pérez-Castilla, Alejandro; García-Ramos, Amador 2017-12-18 This study aimed to compare the between-session reliability of the load-velocity relationship between (1) linear vs. polynomial regression models, (2) concentric-only vs. eccentric-concentric bench press variants, as well as (3) the within-participants vs. the between-participants variability of the velocity attained at each percentage of the one-repetition maximum (%1RM). The load-velocity relationship of 30 men (age: 21.2±3.8 y; height: 1.78±0.07 m, body mass: 72.3±7.3 kg; bench press 1RM: 78.8±13.2 kg) was evaluated by means of linear and polynomial regression models in the concentric-only and eccentric-concentric bench press variants in a Smith machine. Two sessions were performed with each bench press variant. The main findings were: (1) first-order polynomials (CV: 4.39%-4.70%) provided the load-velocity relationship with higher reliability than second-order polynomials (CV: 4.68%-5.04%); (2) the reliability of the load-velocity relationship did not differ between the concentric-only and eccentric-concentric bench press variants; (3) the within-participants variability of the velocity attained at each %1RM was markedly lower than the between-participants variability. Taken together, these results highlight that, regardless of the bench press variant considered, the individual determination of the load-velocity relationship by a linear regression model could be recommended to monitor and prescribe the relative load in the Smith machine bench press exercise.
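Entry 4's recommended individual linear load-velocity model is simple to sketch. The minimal-velocity threshold used below to anchor the 1RM (0.17 m/s) is an assumed illustrative value, not a figure from the paper, and the load-velocity pairs are invented:

```python
# Sketch: individual load-velocity profile by linear regression and the
# implied 1RM at an assumed minimal velocity threshold.
import numpy as np

load = np.array([20, 30, 40, 50, 60, 70.0])                 # kg
velocity = np.array([1.30, 1.12, 0.95, 0.78, 0.60, 0.42])   # m/s

slope, intercept = np.polyfit(load, velocity, 1)  # first-order model

v_1rm = 0.17                                      # assumed MVT (m/s)
est_1rm = (v_1rm - intercept) / slope
print(f"v = {slope:.4f}*load + {intercept:.3f}; "
      f"estimated 1RM ~ {est_1rm:.1f} kg")

# Relative-load prescription: solve for the load at a target velocity.
target_v = 0.80
print(f"load at {target_v} m/s ~ {(target_v - intercept) / slope:.1f} kg")
```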
5. Predicting hyperketonemia by logistic and linear regression using test-day milk and performance variables in early-lactation Holstein and Jersey cows. Science.gov (United States) Chandler, T L; Pralle, R S; Dórea, J R R; Poock, S E; Oetzel, G R; Fourdraine, R H; White, H M 2018-03-01 Although cowside testing strategies for diagnosing hyperketonemia (HYK) are available, many are labor intensive and costly, and some lack sufficient accuracy. Predicting milk ketone bodies by Fourier transform infrared spectrometry during routine milk sampling may offer a more practical monitoring strategy. The objectives of this study were to (1) develop linear and logistic regression models using all available test-day milk and performance variables for predicting HYK and (2) compare prediction methods (Fourier transform infrared milk ketone bodies, linear regression models, and logistic regression models) to determine which is the most predictive of HYK. Given the data available, a secondary objective was to evaluate differences in test-day milk and performance variables (continuous measurements) between Holsteins and Jerseys and between cows with or without HYK within breed. Blood samples were collected on the same day as milk sampling from 658 Holstein and 468 Jersey cows between 5 and 20 d in milk (DIM). Diagnosis of HYK was at a serum β-hydroxybutyrate (BHB) concentration ≥1.2 mmol/L. Concentrations of milk BHB and acetone were predicted by Fourier transform infrared spectrometry (Foss Analytical, Hillerød, Denmark). Thresholds of milk BHB and acetone were tested for diagnostic accuracy, and logistic models were built from continuous variables to predict HYK in primiparous and multiparous cows within breed. Linear models were constructed from continuous variables for primiparous and multiparous cows within breed that were 5 to 11 DIM or 12 to 20 DIM. Milk ketone body thresholds diagnosed HYK with 64.0 to 92.9% accuracy in Holsteins and 59.1 to 86.6% accuracy in Jerseys. Logistic models predicted HYK with 82.6 to 97.3% accuracy. Internally cross-validated multiple linear regression models diagnosed HYK of Holstein cows with 97.8% accuracy for primiparous and 83.3% accuracy for multiparous cows. Accuracy of Jersey models was 81.3% in primiparous and 83

6. Estimation of perceptible water vapor of atmosphere using artificial neural network, support vector machine and multiple linear regression algorithm and their comparative study Science.gov (United States) Shastri, Niket; Pathak, Kamlesh 2018-05-01 The water vapor content in the atmosphere plays a very important role in climate. In this paper the application of GPS signals in meteorology is discussed, which is a useful technique for estimating the perceptible water vapor of the atmosphere. Various algorithms, namely artificial neural network, support vector machine and multiple linear regression, are used to predict perceptible water vapor. Comparative studies in terms of root mean square error and mean absolute error are also carried out for all the algorithms.

7. Use of non-linear mixed-effects modelling and regression analysis to predict the number of somatic coliphages by plaque enumeration after 3 hours of incubation. Science.gov (United States) Mendez, Javier; Monleon-Getino, Antonio; Jofre, Juan; Lucena, Francisco 2017-10-01 The present study aimed to establish the kinetics of the appearance of coliphage plaques using the double agar layer titration technique to evaluate the feasibility of using traditional coliphage plaque forming unit (PFU) enumeration as a rapid quantification method. Repeated measurements of the appearance of plaques of coliphages titrated according to ISO 10705-2 at different times were analysed using non-linear mixed-effects regression to determine the most suitable model of their appearance kinetics. Although this model is adequate, to simplify its applicability two linear models were developed to predict the numbers of coliphages reliably, using the PFU counts as determined by the ISO after only 3 hours of incubation. One linear model, when the number of plaques detected was between 4 and 26 PFU after 3 hours, had a linear fit of (1.48 × Counts3h + 1.97); and the other, for values >26 PFU, had a fit of (1.18 × Counts3h + 2.95). If the number of plaques detected was below 4 PFU after 3 hours, we recommend incubation for (18 ± 3) hours. The study indicates that the traditional coliphage plating technique has a reasonable potential to provide results in a single working day without the need to invest in additional laboratory equipment.
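The two published linear fits in entry 7 can be wrapped in a small prediction function. Note that the below-4-PFU branch encodes my reading of the garbled sentence in the source (incubate for (18 ± 3) h rather than predict):

```python
# Sketch: the 3-hour plaque-count prediction rule from entry 7 as a
# function. The <4 PFU branch is an interpretation of damaged text.
def predict_overnight_pfu(counts_3h: int):
    """Predict the standard (overnight) PFU count from a 3 h count."""
    if counts_3h < 4:
        return None  # too few plaques: incubate (18 +/- 3) hours instead
    if counts_3h <= 26:
        return 1.48 * counts_3h + 1.97
    return 1.18 * counts_3h + 2.95

for c in (2, 10, 40):
    print(c, "->", predict_overnight_pfu(c))
```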
8. Non-Linear Relationship between Economic Growth and CO₂ Emissions in China: An Empirical Study Based on Panel Smooth Transition Regression Models. Science.gov (United States) Wang, Zheng-Xin; Hao, Peng; Yao, Pei-Yi 2017-12-13 The non-linear relationship between provincial economic growth and carbon emissions is investigated by using panel smooth transition regression (PSTR) models. The research indicates that, on the condition of separately taking Gross Domestic Product per capita (GDPpc), energy structure (Es), and urbanisation level (Ul) as transition variables, the three models all reject the null hypothesis of a linear relationship, i.e., a non-linear relationship exists. The results show that the three models all contain only one transition function but different numbers of location parameters. The model taking GDPpc as the transition variable has two location parameters, while the other two models separately considering Es and Ul as the transition variables both contain one location parameter. The three models applied in the study all favourably describe the non-linear relationship between economic growth and CO₂ emissions in China. It also can be seen that the conversion rate of the influence of Ul on per capita CO₂ emissions is significantly higher than those of GDPpc and Es on per capita CO₂ emissions.

9. Area under the curve predictions of dalbavancin, a new lipoglycopeptide agent, using the end of intravenous infusion concentration data point by regression analyses such as linear, log-linear and power models. Science.gov (United States) Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally 2018-02-01 1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. Area under the plasma concentration versus time curve (AUCinf) of dalbavancin is a key parameter and the AUCinf/MIC ratio is a critical pharmacodynamic marker. 2. Using the end of intravenous infusion concentration (i.e. Cmax), the Cmax versus AUCinf relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. The predictions of the AUCinf were performed using published Cmax data by application of the regression equations. The quotient of observed/predicted values rendered fold difference. The mean absolute error (MAE)/root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. The Cmax versus AUCinf relationship exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference) with a low RMSE; applied to external data, the models predicted AUCinf with a RMSE of 3.02-27.46%, with the fold difference largely contained within 0.64-1.48. 5. Regardless of the regression model, a single time point strategy of using Cmax (i.e. end of 30-min infusion) is amenable as a prospective tool for predicting the AUCinf of dalbavancin in patients.
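A sketch of entry 9's single-point idea: regress AUCinf on Cmax under linear and power (log-log) models and compare prediction error. The data are synthetic, and the assumed exponent and noise level are invented:

```python
# Sketch: fit linear and power (log-log) models relating Cmax to AUCinf
# and compare single-point prediction error. Synthetic PK-like data.
import numpy as np

rng = np.random.default_rng(7)
cmax = rng.uniform(100, 400, 21)                       # mg/L, 21 subjects
auc = 22 * cmax**1.05 * rng.lognormal(0, 0.08, 21)     # assumed relation

a_lin, b_lin = np.polyfit(cmax, auc, 1)                # AUC = a*Cmax + b
d_pow, logc = np.polyfit(np.log(cmax), np.log(auc), 1) # log-log (power)

pred_lin = a_lin * cmax + b_lin
pred_pow = np.exp(logc) * cmax**d_pow

for name, pred in (("linear", pred_lin), ("power", pred_pow)):
    rmse_pct = 100 * np.sqrt(np.mean(((pred - auc) / auc) ** 2))
    fold = auc / pred
    print(f"{name:6s} RMSE = {rmse_pct:5.2f}%  fold range "
          f"({fold.min():.2f}, {fold.max():.2f})")
```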
10. Estimating the input function non-invasively for FDG-PET quantification with multiple linear regression analysis: simulation and verification with in vivo data International Nuclear Information System (INIS) Fang, Yu-Hua; Kao, Tsair; Liu, Ren-Shyan; Wu, Liang-Chih 2004-01-01 A novel statistical method, namely Regression-Estimated Input Function (REIF), is proposed in this study for the purpose of non-invasive estimation of the input function for fluorine-18 2-fluoro-2-deoxy-d-glucose positron emission tomography (FDG-PET) quantitative analysis. We collected 44 patients who had undergone a blood sampling procedure during their FDG-PET scans. First, we generated tissue time-activity curves of the grey matter and the whole brain with a segmentation technique for every subject. Summations of different intervals of these two curves were used as a feature vector, which also included the net injection dose. Multiple linear regression analysis was then applied to find the correlation between the input function and the feature vector. After a simulation study with in vivo data, the data of 29 patients were applied to calculate the regression coefficients, which were then used to estimate the input functions of the other 15 subjects. Comparing the estimated input functions with the corresponding real input functions, the averaged error percentages of the area under the curve and the cerebral metabolic rate of glucose (CMRGlc) were 12.13±8.85 and 16.60±9.61, respectively. Regression analysis of the CMRGlc values derived from the real and estimated input functions revealed a high correlation (r=0.91). No significant difference was found between the real CMRGlc and that derived from our regression-estimated input function (Student's t test, P>0.05). The proposed REIF method demonstrated good abilities for input function and CMRGlc estimation, and represents a reliable replacement for the blood sampling procedures in FDG-PET quantification. (orig.)

11. A comparative study between the use of artificial neural networks and multiple linear regression for caustic concentration prediction in a stage of alumina production Directory of Open Access Journals (Sweden) Giovanni Leopoldo Rozza 2015-09-01 Full Text Available With the world becoming more of a global village each day, enterprises continuously seek to optimize their internal processes to hold or improve their competitiveness and make better use of natural resources. In this context, decision support tools are an underlying requirement. Such tools are helpful in predicting operational issues, avoiding rising costs, loss of productivity, work-related accident leave or environmental disasters. This paper focuses on predicting the spent liquor caustic concentration in the Bayer process for alumina production. Measuring caustic concentration is essential to keep it at expected levels; otherwise quality issues might arise. The organization requests caustic concentration measurements from the chemical analysis laboratory once a day; such information is not enough to trigger preventive actions against process inefficiencies, which become known only after a new measurement on the next day. This paper therefore proposes a mathematical model, based on Multiple Linear Regression and Artificial Neural Network techniques, to predict the spent liquor's caustic concentration, so that preventive actions can occur in real time. Such models were built using a numerical computation software tool (MATLAB) and a statistical analysis software package (SPSS). The models' output (predicted caustic concentration) was compared with the real lab data. We found evidence suggesting superior results with the use of Artificial Neural Networks over the Multiple Linear Regression model. The results demonstrate that replacing laboratory analysis with the forecasting model to support technical staff in decision making could be feasible.

12. Quantification of endocrine disruptors and pesticides in water by gas chromatography-tandem mass spectrometry. Method validation using weighted linear regression schemes.
Science.gov (United States) Mansilha, C; Melo, A; Rebelo, H; Ferreira, I M P L V O; Pinho, O; Domingues, V; Pinho, C; Gameiro, P 2010-10-22 A multi-residue methodology based on a solid phase extraction followed by gas chromatography-tandem mass spectrometry was developed for trace analysis of 32 compounds in water matrices, including estrogens and several pesticides from different chemical families, some of them with endocrine disrupting properties. Matrix standard calibration solutions were prepared by adding known amounts of the analytes to a residue-free sample to compensate for the matrix-induced chromatographic response enhancement observed for certain pesticides. Validation was done mainly according to the International Conference on Harmonisation recommendations, as well as some European and American validation guidelines with specifications for pesticide analysis and/or GC-MS methodology. As the assumption of homoscedasticity was not met for the analytical data, a weighted least squares linear regression procedure was applied as a simple and effective way to counteract the greater influence of the greater concentrations on the fitted regression line, improving accuracy at the lower end of the calibration curve. The method was considered validated for 31 compounds after consistent evaluation of the key analytical parameters: specificity, linearity, limit of detection and quantification, range, precision, accuracy, extraction efficiency, stability and robustness. Copyright © 2010 Elsevier B.V. All rights reserved.

13. Multiple Linear Regression Modeling To Predict the Stability of Polymer-Drug Solid Dispersions: Comparison of the Effects of Polymers and Manufacturing Methods on Solid Dispersion Stability. Science.gov (United States) Fridgeirsdottir, Gudrun A; Harris, Robert J; Dryden, Ian L; Fischer, Peter M; Roberts, Clive J 2018-03-29 Solid dispersions can be a successful way to enhance the bioavailability of poorly soluble drugs. Here 60 solid dispersion formulations were produced using ten chemically diverse, neutral, poorly soluble drugs, three commonly used polymers, and two manufacturing techniques, spray-drying and melt extrusion. Each formulation underwent a six-month stability study at accelerated conditions, 40 °C and 75% relative humidity (RH). Significant differences in times to crystallization (onset of crystallization) were observed between both the different polymers and the two processing methods. Stability from zero days to over one year was observed. The extensive experimental data set obtained from this stability study was used to build multiple linear regression models to correlate physicochemical properties of the active pharmaceutical ingredients (API) with the stability data. The purpose of these models is to indicate which combination of processing method and polymer carrier is most likely to give a stable solid dispersion. Six quantitative mathematical multiple linear regression-based models were produced based on selection of the most influential independent physical and chemical parameters from a set of 33 possible factors, one model for each combination of polymer and processing method, with good predictability of stability. Three general rules are proposed from these models for the formulation development of suitably stable solid dispersions.
Namely, increased stability is correlated with increased glass transition temperature (Tg) of solid dispersions, as well as decreased number of H-bond donors and increased molecular flexibility (such as rotatable bonds and ring count) of the drug molecule.

14. Multiple Linear Regression Analysis Indicates Association of P-Glycoprotein Substrate or Inhibitor Character with Bitterness Intensity, Measured with a Sensor. Science.gov (United States) Yano, Kentaro; Mita, Suzune; Morimoto, Kaori; Haraguchi, Tamami; Arakawa, Hiroshi; Yoshida, Miyako; Yamashita, Fumiyoshi; Uchida, Takahiro; Ogihara, Takuo 2015-09-01 P-glycoprotein (P-gp) regulates absorption of many drugs in the gastrointestinal tract and their accumulation in tumor tissues, but the basis of substrate recognition by P-gp remains unclear. Bitter-tasting phenylthiocarbamide, which stimulates taste receptor 2 member 38 (T2R38), increases P-gp activity and is a substrate of P-gp. This led us to hypothesize that bitterness intensity might be a predictor of P-gp-inhibitor/substrate status. Here, we measured the bitterness intensity of a panel of P-gp substrates and nonsubstrates with various taste sensors, and used multiple linear regression analysis to examine the relationship between P-gp-inhibitor/substrate status and various physical properties, including intensity of bitter taste measured with the taste sensor. We calculated the first principal component analysis score (PC1) as the representative value of bitterness, as all taste sensors' outputs were significantly correlated. The P-gp substrates showed remarkably greater mean bitterness intensity than non-P-gp substrates. We found that the Km values of P-gp substrates were correlated with molecular weight, log P, and PC1 value, and the coefficient of determination (R²) of the linear regression equation was 0.63. This relationship might be useful as an aid to predict P-gp substrate status at an early stage of drug discovery. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

15. Applying Least Absolute Shrinkage Selection Operator and Akaike Information Criterion Analysis to Find the Best Multiple Linear Regression Models between Climate Indices and Components of Cow's Milk. Science.gov (United States) Marami Milani, Mohammad Reza; Hense, Andreas; Rahmani, Elham; Ploeger, Angelika 2016-07-23 This study focuses on multiple linear regression models relating six climate indices (temperature humidity THI, environmental stress ESI, equivalent temperature index ETI, heat load HLI, modified HLI (HLInew), and respiratory rate predictor RRP) with three main components of cow's milk (yield, fat, and protein) for cows in Iran. The least absolute shrinkage selection operator (LASSO) and the Akaike information criterion (AIC) techniques are applied to select the best model for milk predictands with the smallest number of climate predictors. Uncertainty estimation is employed by applying bootstrapping through resampling. Cross validation is used to avoid over-fitting. Climatic parameters are calculated from the NASA-MERRA global atmospheric reanalysis. Milk data for the months from April to September, 2002 to 2010 are used. The best linear regression models are found in spring between milk yield as the predictand and THI, ESI, ETI, HLI, and RRP as predictors, with p-value < 0.001 and R² values of 0.50 and 0.49, respectively. In summer, milk yield with the independent variables THI, ETI, and ESI shows the highest relation (p-value < 0.001) with R² = 0.69.
For fat and protein, the results are only marginal. This method is suggested for impact studies of climate variability/change on agriculture and food science when short time series or data with large uncertainty are available.

16. Prediction of retention indices for frequently reported compounds of plant essential oils using multiple linear regression, partial least squares, and support vector machine. Science.gov (United States) Yan, Jun; Huang, Jian-Hua; He, Min; Lu, Hong-Bing; Yang, Rui; Kong, Bo; Xu, Qing-Song; Liang, Yi-Zeng 2013-08-01 Retention indices for frequently reported compounds of plant essential oils on three different stationary phases were investigated. Multivariate linear regression, partial least squares, and support vector machine, combined with a new variable selection approach called random-frog recently proposed by our group, were employed to model quantitative structure-retention relationships. Internal and external validations were performed to ensure the stability and predictive ability. All three methods could obtain an acceptable model, with the optimal results given by the support vector machine based on a small number of informative descriptors, with squared correlation coefficients for cross-validation of 0.9726, 0.9759, and 0.9331 on the dimethylsilicone stationary phase, the dimethylsilicone phase with 5% phenyl groups, and the PEG stationary phase, respectively. The performances of two variable selection approaches, random-frog and genetic algorithm, are compared. The importance of the variables was found to be consistent when estimated from correlation coefficients in multivariate linear regression equations and selection probability in model spaces. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

17. A note on the relationships between multiple imputation, maximum likelihood and fully Bayesian methods for missing responses in linear regression models. Science.gov (United States) Chen, Qingxia; Ibrahim, Joseph G 2014-07-01 Multiple Imputation, Maximum Likelihood and Fully Bayesian methods are the three most commonly used model-based approaches in missing data problems. Although it is easy to show that when the responses are missing at random (MAR), the complete case analysis is unbiased and efficient, the aforementioned methods are still commonly used in practice for this setting. To examine the performance of and relationships between these three methods in this setting, we derive and investigate small sample and asymptotic expressions of the estimates and standard errors, and fully examine how these estimates are related for the three approaches in the linear regression model when the responses are MAR. We show that when the responses are MAR in the linear model, the estimates of the regression coefficients using these three methods are asymptotically equivalent to the complete case estimates under general conditions. One simulation and a real data set from a liver cancer clinical trial are given to compare the properties of these methods when the responses are MAR.
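Looping back to entry 15 above (LASSO plus AIC for climate-milk models): scikit-learn's LassoLarsIC can play that role. The sketch below uses synthetic climate indices; only the index names are taken from the entry, and the coefficients are invented:

```python
# Sketch: LASSO with AIC-based selection of climate predictors of milk
# yield, in the spirit of entry 15. Synthetic data.
import numpy as np
from sklearn.linear_model import LassoLarsIC
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
names = ["THI", "ESI", "ETI", "HLI", "HLInew", "RRP"]
X = rng.normal(size=(180, len(names)))        # monthly climate indices
y = 25 - 1.2 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(0, 1.0, 180)  # yield

Xs = StandardScaler().fit_transform(X)
model = LassoLarsIC(criterion="aic").fit(Xs, y)

for name, coef in zip(names, model.coef_):
    flag = "selected" if abs(coef) > 1e-8 else "dropped"
    print(f"{name:7s} coef = {coef:6.3f} ({flag})")
```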
This expression was used in a non-linear regression computer routine to obtain, from measured multiple-foil integral reaction data, the neutron spectrum inside the Coupled Fast Reactivity Measurement Facility. In this application six parameters in the analytical expression for the neutron spectrum were adjusted in the non-linear fitting process to maximize consistency between calculated and measured integral reaction rates for a set of 15 dosimetry detector foils. In two-thirds of the observations the calculated integral agreed with its respective measured value to within the experimental standard deviation, and in all but one case agreement within two standard deviations was obtained. Based on this quality of fit, the estimated 70 to 75 percent confidence intervals for the derived spectrum are 10 to 20 percent for the energy range 100 eV to 1 MeV, 10 to 50 percent for 1 MeV to 10 MeV, and 50 to 90 percent for 10 MeV to 18 MeV. The analytical model has demonstrated the flexibility to describe salient features of neutron spectra of the fast reactor type. The use of regression analysis with this model has produced a stable method to derive neutron spectra from a limited amount of integral data. 19. Downscaling of surface moisture flux and precipitation in the Ebro Valley (Spain) using analogues and analogues followed by random forests and multiple linear regression. Directory of Open Access Journals (Sweden) G. Ibarra-Berastegi 2011-06-01 Full Text Available In this paper, reanalysis fields from the ECMWF have been statistically downscaled to predict, from large-scale atmospheric fields, surface moisture flux and daily precipitation at two observatories (Zaragoza and Tortosa, Ebro Valley, Spain) during the 1961–2001 period. Three types of downscaling models have been built: (i) analogues, (ii) analogues followed by random forests, and (iii) analogues followed by multiple linear regression. The inputs consist of data (predictor fields) taken from the ERA-40 reanalysis. The predicted fields are precipitation and surface moisture flux as measured at the two observatories. With the aim of reducing the dimensionality of the problem, the ERA-40 fields have been decomposed using empirical orthogonal functions. Available daily data have been divided into two parts: a training period used to find a group of about 300 analogues to build the downscaling model (1961–1996) and a test period (1997–2001), where the models' performance has been assessed using independent data. In the case of surface moisture flux, the models based on analogues followed by random forests do not clearly outperform those built on analogues plus multiple linear regression, while simple averages calculated from the nearest analogues found in the training period yielded only slightly worse results. In the case of precipitation, the three types of model performed equally. These results suggest that most of the models' downscaling capabilities can be attributed to the analogues-calculation stage. 20. Enzyme replacement therapy for Anderson-Fabry disease: A complementary overview of a Cochrane publication through a linear regression and a pooled analysis of proportions from cohort studies. Science.gov (United States) El Dib, Regina; Gomaa, Huda; Ortiz, Alberto; Politei, Juan; Kapoor, Anil; Barreto, Fellype 2017-01-01 Anderson-Fabry disease (AFD) is an X-linked recessive inborn error of glycosphingolipid metabolism caused by a deficiency of alpha-galactosidase A. Renal failure, heart and cerebrovascular involvement reduce survival.
A Cochrane review provided little evidence on the use of enzyme replacement therapy (ERT). We now complement this review through a linear regression and a pooled analysis of proportions from cohort studies. To evaluate the efficacy and safety of ERT for AFD. For the systematic review, a literature search was performed, from inception to March 2016, using Medline, EMBASE and LILACS. Inclusion criteria were cohort studies, patients with AFD on ERT or natural history, and at least one patient-important outcome (all-cause mortality, renal, cardiovascular or cerebrovascular events, and adverse events) reported. The pooled proportion and the confidence interval (CI) are shown for each outcome. Simple linear regressions for composite endpoints were performed. 77 cohort studies involving 15,305 participants proved eligible. The pooled proportions were as follows: a) for renal complications, agalsidase alfa 15.3% [95% CI 0.048, 0.303; I² = 77.2%, p = 0.0005]; agalsidase beta 6% [95% CI 0.04, 0.07; I² = not applicable]; and untreated patients 21.4% [95% CI 0.1522, 0.2835; I² = 89.6%]. Linear regression showed that Fabry patients receiving agalsidase alfa are more likely to have higher rates of composite endpoints compared to those receiving agalsidase beta. Agalsidase beta is associated with a significantly lower incidence of renal, cardiovascular and cerebrovascular events than no ERT, and with a significantly lower incidence of cerebrovascular events than agalsidase alfa. In view of these results, the use of agalsidase beta for preventing major organ complications related to AFD can be recommended.
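None of the underlying datasets are reproduced in these abstracts, so the following is only a minimal sketch of the workflow shared by items 15 and 19 (a multiple linear regression with bootstrapped uncertainty, as in the milk-component study), run on synthetic data; the predictor names and every number below are invented for illustration.

```python
# Sketch: multiple linear regression + bootstrap confidence intervals
# on synthetic data (all values fabricated for illustration).
import numpy as np

rng = np.random.default_rng(0)
n = 200
thi = rng.normal(70, 5, n)              # fake temperature-humidity index
esi = rng.normal(25, 3, n)              # fake environmental stress index
milk_yield = 30 - 0.15 * thi - 0.10 * esi + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), thi, esi])
beta, *_ = np.linalg.lstsq(X, milk_yield, rcond=None)      # OLS coefficients

# Bootstrap: refit on resampled rows, take percentile intervals.
boot = np.array([
    np.linalg.lstsq(X[idx], milk_yield[idx], rcond=None)[0]
    for idx in [rng.integers(0, n, n) for _ in range(1000)]
])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
for name, b, l, h in zip(['intercept', 'THI', 'ESI'], beta, lo, hi):
    print(f'{name}: {b:+.3f}   95% CI [{l:+.3f}, {h:+.3f}]')
```

Replacing the plain least-squares fit with, say, scikit-learn's LassoCV would add the shrinkage-based variable selection step of item 15; the bootstrap loop would stay the same.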
126,674
629,394
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.53125
3
CC-MAIN-2018-30
latest
en
0.849542
http://www.convertit.com/Go/SalvageSale/Measurement/Converter.ASP?From=stadium&To=width
1,618,274,916,000,000,000
text/html
crawl-data/CC-MAIN-2021-17/segments/1618038071212.27/warc/CC-MAIN-20210413000853-20210413030853-00636.warc.gz
122,086,760
3,403
Measurement Converter

Conversion Result:

```Roman stadium = 184.7088 meter (length)
```

Related Measurements: Try converting from "stadium" to arpentcan, Biblical cubit, cable length, cloth quarter, en (typography en), engineers chain, finger, foot, Greek cubit, Greek fathom, Greek palm, hand, ken (Japanese ken), li (Chinese li), light yr (light year), Roman cubit, shaku (Japanese shaku), skein, span (cloth span), verst (Russian verst), or any combination of units which equate to "length" and represent depth, fluid head, height, length, wavelength, or width. Sample Conversions: stadium = 1,847,088,000,000 angstrom, 808 cloth quarter, 9,961.64 digitus (Roman digitus), 1.85E+17 fermi, 8,310.86 finger, 606 foot, .91818182 furlong (surveyors furlong), 99.78 Greek fathom, 798.24 Greek span, 7,272 inch, 87.19 ken (Japanese ken), .28652482 li (Chinese li), 87,264 line, 7,272,000 mil, 2,424 palm, 5.99E-15 parsec, 43,632 pica (typography pica), 1.68 skein, 606 survey foot, .1147725 UK mile (British mile).
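Every entry in the sample list is a single multiplication away from the metre figure. A quick sketch that reproduces a few rows (the unit factors below are standard definitions, not taken from this page):

```python
# Reproduce sample conversions from the one base fact on the page:
# 1 stadium = 184.7088 m.
STADIUM_M = 184.7088

METRES_PER = {                 # standard definitions of the target units
    'angstrom': 1e-10,
    'inch': 0.0254,
    'foot': 0.3048,
    'survey foot': 1200 / 3937,
    'UK mile': 1609.344,
}

for unit, metres in METRES_PER.items():
    print(f'stadium = {STADIUM_M / metres:,.7g} {unit}')
# -> 606 foot, 7,272 inch, 0.1147725 UK mile, ... matching the list above
```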
451
1,572
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.96875
3
CC-MAIN-2021-17
latest
en
0.66956
https://aniruddhadeb.com/articles/2020/icosahedron-resistor-network.html
1,669,684,043,000,000,000
text/html
crawl-data/CC-MAIN-2022-49/segments/1669446710684.84/warc/CC-MAIN-20221128235805-20221129025805-00423.warc.gz
136,732,808
4,196
# Icosahedron Resistor Network

Posted on Tue 09 June 2020 in Electronics

Consider the icosahedral resistance network shown in the following figure. Each edge is made of a rod of resistance $r$. Select the option(s) that is(are) correct:

A) The resistance between points A and B is $\frac{r}{2}$

B) The resistance between points A and B is $\frac{r}{3}$

C) The resistance between any two adjacent vertices is $\frac{11r}{30}$

D) The resistance between any two adjacent vertices is $\frac{11r}{20}$

This question looks really tricky. It is a variation of the problem of finding the resistance of a wire cube, and it can be solved in a similar way, using symmetry and equipotential reduction. Let's start off by finding the resistance between A and B, since this is easier than finding the resistance between adjacent vertices. On connecting a battery, the points marked in the same colour will be at the same potential. You can clearly see the symmetry now. If we connect all the points with the same potential, we end up with the following resistor network (each resistance is $r$). This is simple to solve. Its resistance is $$R_{AB} = \frac r5 + \frac r{10} + \frac r5 = \boxed{\frac r2}$$ which corresponds to option (A). Solving the network between adjacent vertices is a lot trickier. The first trick is to pick a pair of vertices from which symmetry is easily identifiable. If I connect my battery as shown in the following figure, it is easy to identify which points are equipotential and how the currents are flowing. The dotted line shows the plane of symmetry of this icosahedron perpendicular to $CD$, whereas the blue dots show equipotential nodes. We can see that no current flows through $R_{AJ}$ and $R_{EB}$ since their ends are at the same potential. Also, at all equipotential points, no mixing of currents occurs. This means that $I_{AC} = I_{AD}$ and $I_{CE} = I_{ED}$, along with a few others. Performing some modifications on this circuit in line with the above observations reduces the complexity quite drastically, as shown below. Simplifying these resistors, we obtain a cubical network with a resistor across two opposite face diagonals. It is easy to see that $V_F = V_L$ and $V_G = V_K$. Connecting these two points and getting rid of the resistances $R_{GK}$ and $R_{LF}$, we get the following resistor network. It is now simple to calculate the resistance across CD by using the formulae for resistors in parallel and series. $$\begin{gather} \frac 1 {R_{FG}} = \frac 2 r + \frac 2 {3r} \\ R_{FG} = \frac{3r}{8} \\ \frac{1}{R_{CD}} = \frac{8}{11r} + \frac{2}{r} \\ \boxed{R_{CD} = \frac{11r}{30}} \end{gather}$$ which corresponds to option (C). A good problem that I came up with on my own; the next step is to do the same for a dodecahedral resistor network. Stay tuned for that :)
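The two boxed answers can also be cross-checked numerically without any symmetry argument, via the standard graph-Laplacian identity for two-point resistance, $R_{ab} = (e_a - e_b)^T L^{+} (e_a - e_b)$. A sketch (not part of the original post) using networkx and numpy, taking $r = 1$:

```python
# Two-point resistance of the icosahedron from the Laplacian pseudo-inverse.
import networkx as nx
import numpy as np

G = nx.icosahedral_graph()                    # 12 vertices, 30 unit resistors
Lplus = np.linalg.pinv(nx.laplacian_matrix(G).toarray().astype(float))

def resistance(a, b):
    e = np.zeros(G.number_of_nodes())
    e[a], e[b] = 1.0, -1.0
    return e @ Lplus @ e

a, b = next(iter(G.edges()))                  # any adjacent pair of vertices
far = max(G.nodes(), key=lambda v: nx.shortest_path_length(G, 0, v))
print(resistance(a, b), 11 / 30)              # both ~0.366667  (11r/30)
print(resistance(0, far), 1 / 2)              # both 0.5        (r/2, antipodal)
```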
721
2,821
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.0625
4
CC-MAIN-2022-49
latest
en
0.921686
https://www.knowpia.com/knowpedia/Nominal_impedance
1,708,835,318,000,000,000
text/html
crawl-data/CC-MAIN-2024-10/segments/1707947474581.68/warc/CC-MAIN-20240225035809-20240225065809-00846.warc.gz
859,667,673
24,039
Nominal impedance

## Summary

Nominal impedance in electrical engineering and audio engineering refers to the approximate designed impedance of an electrical circuit or device. The term is applied in a number of different fields, and is most often encountered in respect of the applications discussed in the sections below. The actual impedance may vary quite considerably from the nominal figure with changes in frequency. In the case of cables and other transmission lines, there is also variation along the length of the cable, if it is not properly terminated. It is usual practice to speak of nominal impedance as if it were a constant resistance,[1] that is, invariant with frequency and with a zero reactive component, despite this often being far from the case. Depending on the field of application, nominal impedance implicitly refers to a specific point on the frequency response of the circuit under consideration. This may be at low frequency, mid-band or some other point, and specific applications are discussed in the sections below.[2] In most applications, there are a number of values of nominal impedance that are recognised as standard. The nominal impedance of a component or circuit is often assigned one of these standard values, regardless of whether the measured impedance exactly corresponds to it: the item is assigned the nearest standard value.

## 600 Ω

Nominal impedance first started to be specified in the early days of telecommunications. At first amplifiers were not available, and when they did become available they were expensive. It was consequently necessary to achieve maximum power transfer from the cable at the receiving end in order to maximise the lengths of cable that could be installed. It also became apparent that reflections on the transmission line would severely limit the bandwidth that could be used, or the distance over which it was practicable to transmit. Matching equipment impedance to the characteristic impedance of the cable reduces reflections (they are eliminated altogether if the match is perfect) and maximises power transfer. To this end, all cables and equipment started to be specified to a standard nominal impedance. The earliest, and still the most widespread, standard is 600 Ω, originally used for telephony. It has to be said that the choice of this figure had more to do with the way telephones were interfaced into the local exchange than with any characteristic of the local telephone cable. Telephones (old-style analogue telephones) connect to the exchange through twisted-pair cabling. Each leg of the pair is connected to a relay coil which detects the signalling on the line (dialling, handset off-hook etc.). The other end of one coil is connected to a supply voltage and the second coil is connected to ground. A telephone exchange relay coil is around 300 Ω, so the two of them together terminate the line in 600 Ω.[3] The wiring to the subscriber in telephone networks is generally done in twisted-pair cable. Its impedance at audio frequencies, and especially at the more restricted telephone-band frequencies, is far from constant. It is possible to manufacture this kind of cable to have a 600 Ω characteristic impedance, but it will only be this value at one specific frequency. This might be quoted as a nominal 600 Ω impedance at 800 Hz or 1 kHz. Below this frequency the characteristic impedance rapidly rises and becomes more and more dominated by the ohmic resistance of the cable as the frequency falls.
At the bottom of the audio band the impedance can be several tens of kilohms. On the other hand, at high frequency in the MHz region, the characteristic impedance flattens out to something almost constant. The reason for this response is explained at primary line constants.[4] Local area networks (LANs) commonly use a similar kind of twisted-pair cable, but screened and manufactured to tighter tolerances than is necessary for telephony. Even though it has a very similar impedance to telephone cable, the nominal impedance is rated at 100 Ω. This is because the LAN data is in a higher frequency band, where the characteristic impedance is substantially flat and mostly resistive.[4] Standardisation of line nominal impedance led to two-port networks such as filters being designed to a matching nominal impedance. The nominal impedance of low-pass symmetrical T- or Pi-filter sections (or more generally, image filter sections) is defined as the limit of the filter image impedance as the frequency approaches zero, and is given by

$$Z_{\mathrm{nom}} = \sqrt{\frac{L}{C}}$$

where L and C are as defined for the constant-k filter. As can be seen from the expression, this impedance is purely resistive. This filter transformed to a band-pass filter will have an impedance equal to the nominal impedance at resonance rather than at low frequency. This nominal impedance of filters will generally be the same as the nominal impedance of the circuit or cable that the filter is working into.[5] While 600 Ω is an almost universal standard in telephony for local presentation at the customer's premises from the exchange, for long-distance transmission on trunk lines between exchanges other standard nominal impedances are used, and are usually lower, such as 150 Ω.[6]

## 50 Ω and 75 Ω

In the field of radio frequency (RF) and microwave engineering, by far the most common transmission line standard is 50 Ω coaxial cable (coax), which is an unbalanced line. 50 Ω first arose as a nominal impedance during World War II work on radar and is a compromise between two requirements. This standard was the work of the wartime US joint Army-Navy RF Cable Coordinating Committee. The first requirement is for minimum loss. The loss of coaxial cable is given by

$$\alpha \approx \frac{R}{2Z_0}\ \text{nepers/metre}$$

where R is the loop resistance per metre and Z0 is the characteristic impedance. Making the diameter of the inner conductor larger will decrease R, and decreasing R decreases the loss. On the other hand, Z0 depends on the ratio of the diameters of the outer and inner conductors (Dr) and will decrease with increasing inner conductor diameter, thus increasing the loss. There is a specific value of Dr for which the loss is a minimum, and this turns out to be 3.6. For an air-dielectric coax this corresponds to a characteristic impedance of 77 Ω. The coax produced during the war was rigid air-insulated pipe, and this remained the case for some time afterwards. The second requirement is for maximum power handling and was an important requirement for radar. This is not the same condition as minimum loss because power handling is usually limited by the breakdown voltage of the dielectric. However, there is a similar compromise in terms of the ratio of conductor diameters. Making the inner conductor too large results in a thin insulator which breaks down at a lower voltage.
On the other hand, making the inner conductor too small results in a higher electric field strength near the inner conductor (because the same field energy is concentrated around a smaller conductor surface) and again reduces the breakdown voltage. The ideal ratio, Dr, for maximum power handling turns out to be 1.65 and corresponds to a characteristic impedance of 30 Ω in air. The 50 Ω impedance is approximately the geometric mean of these two figures,

$$50 \approx \sqrt{30 \times 77}\ \Omega$$

rounded to a convenient whole number.[7][8] Wartime production of coax, and for a period afterwards, tended to use standard plumbing pipe sizes for the outer conductor and standard AWG sizes for the inner conductor. This resulted in coax that was nearly, but not quite, 50 Ω. Matching is a much more critical requirement at RF than it is at voice frequencies, so when cable started to become available that was truly 50 Ω, a need arose for matching circuits to interface between the new cables and legacy equipment, such as the rather strange 51.5 Ω to 50 Ω matching network.[8][9] While 30 Ω cable is highly desirable for its power handling capabilities, it has never been in commercial production because the large size of the inner conductor makes it difficult to manufacture. This is not the case with 77 Ω cable. Cable with 75 Ω nominal impedance has been in use from an early period in telecommunications for its low-loss characteristic. According to Stephen Lampen of Belden Wire & Cable, 75 Ω was chosen as the nominal impedance rather than 77 Ω because it corresponded to a standard AWG wire size for the inner conductor. For coax video cables and interfaces, 75 Ω is now the near-universal standard nominal impedance.[8][10]

## Radio antennae

The widespread idea that the 50 Ω and 75 Ω cable nominal impedances arose in connection with the input impedance of various antennae is a myth. It is true, however, that several common antennae are easily matched to cables with these nominal impedances.[7] A quarter-wavelength monopole in free space has an impedance of 36.5 Ω,[11] and a half-wavelength dipole in free space has an impedance of 72 Ω.[12] A half-wavelength folded dipole, commonly seen on television antennae, on the other hand, has a 288 Ω impedance, four times that of a straight-line dipole. The ½ λ dipole and the ½ λ folded dipole are commonly taken as having nominal impedances of 75 Ω and 300 Ω, respectively.[13] An installed antenna's feed-point impedance varies above and below the quoted value, depending on its installation height above the ground and the electrical properties of the surrounding earth.[14][15]

## Cable quality

One measure of cable manufacturing and installation quality is how closely the characteristic impedance adheres to the nominal impedance along its length. Impedance changes can be caused by variations in geometry along the cable length. In turn, these can be caused by a faulty manufacturing process or by faulty installation (such as not observing limits on bend radii). Unfortunately, there is no easy, non-destructive method of directly measuring impedance along a cable's length. It can, however, be indicated indirectly by measuring reflections, that is, return loss. Return loss by itself does not reveal much, since the cable design will have some intrinsic return loss anyway due to not having a purely resistive characteristic impedance.
The technique used is to carefully adjust the cable termination to obtain as close a match as possible, and then to measure the variation of return loss with frequency. The minimum return loss so measured is called the structural return loss (SRL). SRL is a measure of a cable's adherence to its nominal impedance, but it is not a direct correspondence; errors farther from the generator have less effect on SRL than those close to it. The measurement must also be carried out at all in-band frequencies to be significant. The reason for this is that equally spaced errors introduced by the manufacturing process will cancel and be invisible, or at least much reduced, at certain frequencies due to quarter-wave impedance transformer action.[16][17]

## Audio systems

For the most part, audio systems, both professional and domestic, have their components interconnected with low-impedance outputs connected to high-impedance inputs. These impedances are poorly defined, and nominal impedances are not usually assigned for this kind of connection. The exact impedances make little difference to performance as long as the latter is many times larger than the former.[18] This is a common interconnection scheme, not just for audio, but for electronic units in general which form part of a larger piece of equipment or are only connected over a short distance. Where audio needs to be transmitted over large distances, which is often the case in broadcast engineering, considerations of matching and reflections dictate that a telecommunications standard is used, which would normally mean a 600 Ω nominal impedance (although other standards are sometimes encountered, such as sending at 75 Ω and receiving at 600 Ω, which has bandwidth advantages). The nominal impedance of the transmission line and of the amplifiers and equalisers in the transmission chain will all be the same value.[6] Nominal impedance is used, however, to characterise the transducers of an audio system, such as its microphones and loudspeakers. It is important that these are connected to a circuit capable of dealing with impedances in the appropriate range, and assigning a nominal impedance is a convenient way of quickly determining likely incompatibilities. Loudspeakers and microphones are dealt with in separate sections below.

### Loudspeakers

Loudspeaker impedances are kept relatively low compared with other audio components so that the required audio power can be transmitted without using inconveniently (and dangerously) high voltages. The most common nominal impedance for loudspeakers is 8 Ω. Also used are 4 Ω and 16 Ω.[20] The once-common 16 Ω is now mostly reserved for high-frequency compression drivers, since the high-frequency end of the audio spectrum does not usually require so much power to reproduce.[21] The impedance of a loudspeaker is not constant across all frequencies. In a typical loudspeaker the impedance will rise with increasing frequency from its DC value, as shown in the diagram, until it reaches its point of mechanical resonance.
Following resonance, the impedance falls to a minimum and then begins to rise again.[22] Speakers are usually designed to operate at frequencies above their resonance, and for this reason it is the usual practice to define nominal impedance at this minimum and then round to the nearest standard value.[23][24] The ratio of the peak resonance impedance to the nominal impedance can be as much as 4:1.[25] It is, however, still perfectly possible for the low-frequency impedance to actually be lower than the nominal impedance.[19] A given audio amplifier may not be capable of driving this low-frequency impedance even though it is capable of driving the nominal impedance, a problem that can be solved either with the use of crossover filters or by underrating the amplifier supplied.[26] In the days of valves (vacuum tubes), most loudspeakers had a nominal impedance of 16 Ω. Valve outputs require an output transformer to match the very high output impedance and voltage of the output valves to this lower impedance. These transformers were commonly tapped to allow matching of the output to a multiple-loudspeaker setup. For example, two 16 Ω loudspeakers in parallel will give an impedance of 8 Ω. Since the advent of solid-state amplifiers, whose outputs require no transformer, the once-common multiple-impedance outputs have become rare, and lower-impedance loudspeakers more common. The most common nominal impedance for a single loudspeaker is now 8 Ω. Most solid-state amplifiers are designed to work with loudspeaker combinations of anything from 4 Ω to 8 Ω.[27]

### Microphones

There are a large number of different types of microphone, and there are correspondingly large differences in impedance between them. They range from the very low impedance of ribbon microphones (which can be less than one ohm) to the very large impedance of piezoelectric microphones, which is measured in megohms. The Electronic Industries Alliance (EIA) has defined[28] a number of standard microphone nominal impedances to aid categorisation of microphones.[29]

Range (Ω) | EIA nominal impedance (Ω)
--- | ---
20–80 | 38
80–300 | 150
300–1250 | 600
1250–4500 | 2400
4500–20,000 | 9600
20,000–70,000 | 40,000

The International Electrotechnical Commission defines a similar set of nominal impedances, but also has a coarser classification of low (less than 600 Ω), medium (600 Ω to 10 kΩ) and high (more than 10 kΩ) impedances.[30][failed verification]

## Oscilloscopes

Oscilloscope inputs are usually high impedance so that they only minimally affect the circuit being measured when connected. However, the input impedance is made a specific nominal value, rather than arbitrarily high, because of the common use of X10 probes. A common value for oscilloscope nominal input impedance is 1 MΩ resistance and 20 pF capacitance.[31] With a known input impedance to the oscilloscope, the probe designer can ensure that the probe input impedance is exactly ten times this figure (actually the oscilloscope plus probe cable impedance). Since the quoted impedance includes the input capacitance, and the probe forms an impedance divider circuit, the waveform being measured is not distorted by the RC circuit formed by the probe resistance and the capacitance of the input (or the cable capacitance, which is generally higher).[32][33]

## References

1. ^ Maslin, p.78 2. ^ Graf, p.506. 3. ^ Schmitt, pp.301–302. 4. ^ a b Schmitt, p.301. 5. ^ Bird, pp.564, 569. 6. ^ a b Whitaker, p.115. 7. ^ a b Golio, p.6-41. 8. ^ a b c Breed, pp.6–7. 9. ^ Harmon Banning (W. L.
Gore & Associates, Inc.), "The History of 50 Ω", RF Cafe 10. ^ Steve Lampen, "Coax History" (mailing list), Contesting.com. Lampen is Technology Development Manager at Belden Wire & Cable Co. and is the author of Wire, Cable and Fiber Optics. 11. ^ Chen, pp.574–575. 12. ^ Gulati, p.424. 13. ^ Gulati, p.426. 14. ^ Heys (1989), pp. 3–4 15. ^ Straw (2003) 16. ^ Rymaszewski et al, p.407. 17. ^ Ciciora, p.435. 18. ^ Eargle & Foreman, p.83. 19. ^ a b Davis&Jones, p.205. 20. ^ Ballou, p.523. 21. ^ Vasey, pp.34–35. 22. ^ Davis&Jones, p.206. 23. ^ Davis&Jones, p.233. 24. ^ Stark, p.200. 25. ^ Davis&Jones, p.91. 26. ^ Ballou, pp.523, 1178. 27. ^ van der Veen, p.27. 28. ^ Electronic Industries Standard SE-105, August 1949. 29. ^ Ballou, p.419. 30. ^ International standard IEC 60268-4 Sound system equipment – Part 4: Microphones. 31. ^ pp.97–98. 32. ^ Hickman, pp.33–37. 33. ^ O'Dell, pp.72–79. ## Bibliography • Glen Ballou, Handbook for Sound Engineers, Gulf Professional Publishing, 2005 ISBN 0-240-80758-8. • John Bird, Electrical Circuit Theory and Technology, Elsevier, 2007 ISBN 0-7506-8139-X. • Gary Breed, "There's nothing magic about 50 ohms", High Frequency Electronics, pp. 6–7, June 2007, Summit Technical Media LLC, archived 26 June 2015. • Wai-Kai Chen, The Electrical Engineering Handbook, Academic Press, 2005 ISBN 0-12-170960-4. • Walter S. Ciciora, Modern Cable Television Technology: Video, Voice, and Data Communications, Morgan Kaufmann, 2004 ISBN 1-55860-828-1. • Gary Davis, Ralph Jones, The Sound Reinforcement Handbook, Hal Leonard Corporation, 1989 ISBN 0-88188-900-8. • John M. Eargle, Chris Foreman, Audio engineering for Sound Reinforcement, Hal Leonard Corporation, 2002, ISBN 0-634-04355-2. • John Michael Golio, The RF and Microwave Handbook, CRC Press, 2001 ISBN 0-8493-8592-X. • Rudolf F. Graf, Modern Dictionary of Electronics, Newnes, 1999 ISBN 0-7506-9866-7. • R.R. Gulati, Modern Television Practice Principles, Technology and Servicing, New Age International, ISBN 81-224-1360-9. • John D. Heys, Practical Wire Antennas, Radio Society of Great Britain, 1989 ISBN 0-900612-87-8. • Ian Hickman, Oscilloscopes: How to Use Them, How They Work, Newnes, 2001 ISBN 0-7506-4757-4. • Stephen Lampen, Wire, Cable and Fiber Optics for Video and Audio Engineers, McGraw-Hill 1997 ISBN 0-07-038134-8. • A.K.Maini, Electronic Projects For Beginners, Pustak Mahal, 1997 ISBN 81-223-0152-5. • Nicholas M. Maslin, HF Communications: a Systems Approach, CRC Press, 1987 ISBN 0-273-02675-5. • Thomas Henry O'Dell, Circuits for Electronic Instrumentation, Cambridge University Press, 1991 ISBN 0-521-40428-2. • R. Tummala, E. J. Rymaszewski (ed), Alan G. Klopfenstein, Microelectronics Packaging Handbook, Volume 3, Springer, 1997 ISBN 0-412-08451-1. • Ron Schmitt, Electromagnetics Explained: a Handbook for Wireless/RF, EMC, and High-speed Electronics, Newnes, 2002 ISBN 0-7506-7403-2. • Scott Hunter Stark, Live Sound Reinforcement: a Comprehensive Guide to P.A. and Music Reinforcement Systems and Technology, Hal Leonard Corporation, 1996 ISBN 0-918371-07-4. • John Vasey, Concert Sound and Lighting Systems, Focal Press, 1999 ISBN 0-240-80364-7. • Menno van der Veen, Modern High-end Valve Amplifiers: Based on Toroidal Output Transformers, Elektor International Media, 1999 ISBN 0-905705-63-7. • Jerry C. Whitaker, Television Receivers, McGraw-Hill Professional, 2001 ISBN 0-07-138042-6.
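As a numerical footnote to the 50 Ω story in the article (a sketch, not part of the original text): for an air-dielectric coax, $Z_0 = \frac{\eta_0}{2\pi}\ln D_r$ with $\eta_0 \approx 376.73\ \Omega$, and the two optima quoted above, together with their geometric mean, fall out directly.

```python
# Air-dielectric coax characteristic impedance vs. diameter ratio Dr.
import math

ETA0 = 376.730                              # impedance of free space, ohms
z0 = lambda dr: ETA0 / (2 * math.pi) * math.log(dr)

z_min_loss = z0(3.6)                        # minimum-loss ratio from the text
z_max_power = z0(1.65)                      # maximum-power ratio from the text
print(round(z_min_loss, 1))                 # ~76.8 -> the "77 ohm" figure
print(round(z_max_power, 1))                # ~30.0 -> the "30 ohm" figure
print(round(math.sqrt(z_min_loss * z_max_power), 1))   # ~48 -> rounded to 50
```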
4,637
20,319
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.515625
3
CC-MAIN-2024-10
latest
en
0.959635
https://brilliant.org/practice/sat-numbers/
1,490,307,938,000,000,000
text/html
crawl-data/CC-MAIN-2017-13/segments/1490218187225.79/warc/CC-MAIN-20170322212947-00388-ip-10-233-31-227.ec2.internal.warc.gz
790,364,839
14,291
# SAT Numbers

If $$x$$ and $$y$$ are integers, for which of the following ordered pairs $$(x,y)$$ is $$x-2y$$ an odd number? (A) $$\ \ (32,1)$$ (B) $$\ \ (16, 1)$$ (C) $$\ \ (6, 17)$$ (D) $$\ \ (0, -19)$$ (E) $$\ \ (-15, -10)$$

If $$x$$ and $$y$$ are integers, and $$x^{6}y^{9}$$ is odd, which of the following must be true? I. $$xy$$ is odd. II. $$x^{6}y$$ is even. III. $$y^{9}$$ is odd. (A) I only (B) II only (C) I and II only (D) I and III only (E) I, II, and III

If $$x$$ represents a positive even integer, which of the following represents the even integer that immediately precedes $$x$$? (A) $$\ \ x-1$$ (B) $$\ \ x-2$$ (C) $$\ \ x+2$$ (D) $$\ \ 2x$$ (E) $$\ \ 2x+2$$

How many integers between $$100$$ and $$1000$$ contain exactly one zero? (A) $$\ \ 9$$ (B) $$\ \ 81$$ (C) $$\ \ 162$$ (D) $$\ \ 170$$ (E) $$\ \ 200$$

If $$n$$ is an odd integer, all of the following must be odd integers EXCEPT: (A) $$\ \ n^{11}$$ (B) $$\ \ n^{11}+10$$ (C) $$\ \ n^{11} + 11n +11$$ (D) $$\ \ n^{10}$$ (E) $$\ \ n^{10}+ 10n + 11$$
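The fourth question is the only one that takes more than parity reasoning; a throwaway brute-force check (not part of the original page) confirms answer (C):

```python
# Count integers between 100 and 1000 whose decimal form has exactly one zero.
print(sum(str(n).count('0') == 1 for n in range(100, 1001)))   # 162 -> (C)
```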
439
1,061
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.125
3
CC-MAIN-2017-13
longest
en
0.764963
https://www.coursehero.com/file/5997788/Section-53/
1,519,194,687,000,000,000
text/html
crawl-data/CC-MAIN-2018-09/segments/1518891813431.5/warc/CC-MAIN-20180221044156-20180221064156-00252.warc.gz
852,627,710
86,887
Section 5.3 (Review) (Homework)

About this Assignment. Current Score: 5 out of 5. Description: Evaluating Definite Integrals.

1. [SCalcCC2 5.3.12.] 1/1 points. Evaluate the integral. Answer: 63.33333 (63.3)
2. [SCalcCC2 5.3.18.] 1/1 points. Evaluate the integral. Answer: 2 (2)
3. [SCalcCC2 5.3.24.] 1/1 points. Evaluate the integral. Answer: 16.425 (16.4)
4. [SCalcCC2 5.3.30.] 1/1 points. Evaluate the integral. Answer: .1547 (0.155)
5. [SCalcCC2 5.3.36.] 1/1 points. Use a graph to give a rough estimate of the area of the region that lies beneath the given curve. Then find the exact area. y = sec²x, 0 ≤ x ≤ π/3. Answer: 1.732 (1.73)
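Only the last exercise's statement survives in this preview (the integrands of exercises 1 to 4 were blurred out), but its answer is easy to confirm numerically; a quick sketch using SciPy's quad purely as a check:

```python
# Area under y = sec^2(x) on [0, pi/3] should equal tan(pi/3) = sqrt(3).
import math
from scipy.integrate import quad

area, _err = quad(lambda x: 1.0 / math.cos(x) ** 2, 0.0, math.pi / 3)
print(area, math.sqrt(3))    # 1.7320508... for both, matching the 1.732 above
```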
465
1,533
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.6875
3
CC-MAIN-2018-09
latest
en
0.611433
https://pastebin.com/5yezbv3s
1,670,603,129,000,000,000
text/html
crawl-data/CC-MAIN-2022-49/segments/1669446711417.46/warc/CC-MAIN-20221209144722-20221209174722-00197.warc.gz
487,159,514
8,049
# Untitled

Oct 2nd, 2022

import csv
import re
import math
from collections import Counter

csv_file = 'vacancies_big.csv'


def get_suffix_by_rubles(count):
    # Russian plural form of "ruble" for the given amount.
    if count % 10 == 0 or 5 <= count % 10 <= 9 or 10 <= count % 100 <= 19:
        return 'рублей'
    elif 2 <= count % 10 <= 4:
        return 'рубля'
    else:
        return 'рубль'


def get_suffix_by_count(count):
    # Russian plural form of "times" (раз/раза) for the given count.
    if count % 10 == 0 or 5 <= count % 10 <= 9 or 10 <= count % 100 <= 19:
        return 'раз'
    elif 2 <= count % 10 <= 4:
        return 'раза'
    else:
        return 'раз'


def get_list_by_salaries(current_list: list, is_high_salary: bool):
    # Print the ten RUR vacancies with the highest (or lowest) average salary.
    items_counter = 0
    list_sorted_by_average_salary = sorted(
        current_list,
        key=lambda vacancy: math.floor((float(vacancy[6]) + float(vacancy[7])) / 2),
        reverse=is_high_salary)

    for vacancy in list_sorted_by_average_salary:
        if vacancy[9] != 'RUR':
            continue
        if items_counter == 10:
            break

        items_counter += 1
        average_salary = math.floor((float(vacancy[6]) + float(vacancy[7])) / 2)

        print(f'\t{items_counter}) {vacancy[0]} в компании "{vacancy[5]}" - '
              f'{average_salary} {get_suffix_by_rubles(average_salary)} (г. {vacancy[10]})')


def get_top_mentioned_skills(current_list: list):
    # Count every skill mention and print the ten most frequent ones.
    all_skills = list()
    for vacancy in current_list:
        for skill in vacancy[2].split(', '):
            all_skills.append(skill)

    for items_counter, (skill, count) in enumerate(Counter(all_skills).most_common(10), start=1):
        print(f'\t{items_counter}) {skill} - '
              f'упоминается {count} {get_suffix_by_count(count)}')


def get_cities_by_salaries(current_list: list):
    # Print the ten distinct cities with the highest average RUR salary.
    items_counter = 0
    all_cities = set()
    list_sorted_by_average_salary = sorted(
        current_list,
        key=lambda vacancy: math.floor((float(vacancy[6]) + float(vacancy[7])) / 2),
        reverse=True)

    for vacancy in list_sorted_by_average_salary:
        if vacancy[10] in all_cities:
            continue
        all_cities.add(vacancy[10])  # restored: this line was dropped from the paste
        if vacancy[9] != 'RUR':
            continue
        if items_counter == 10:
            break

        items_counter += 1
        average_salary = math.floor((float(vacancy[6]) + float(vacancy[7])) / 2)

        print(f'\t{items_counter}) {vacancy[10]} - средняя зарплата '
              f'{average_salary} {get_suffix_by_rubles(average_salary)} '
              f'(1 вакансия)')


def get_cities_count(current_list: list) -> int:
    all_cities = set()
    for vacancy in current_list:
        all_cities.add(vacancy[10])  # restored: this line was dropped from the paste
    return len(all_cities)


def get_skills_count(current_list: list) -> int:
    all_skills = set()
    for vacancy in current_list:
        for skill in vacancy[2].split(', '):
            all_skills.add(skill)  # restored: this line was dropped from the paste
    return len(all_skills)


vacancies_list = list()


with open(csv_file, 'r', encoding='utf-8') as f:
    # The reader setup and loop header were dropped from the paste;
    # reconstructed here (an assumption) so the cleaning logic below runs.
    reader = csv.reader(f)
    title = next(reader)
    data, title[0] = [], 'name'
    html_tags = re.compile(r'<[^>]+>')
    for line in reader:
        if len(line) < len(title):
            continue
        is_correct_string = True
        for index, value in enumerate(line):
            normal_string = re.sub(html_tags, '', value)\
                .replace('\n', ', ')\
                .replace('\r\n', ', ')
            normal_string = ' '.join(normal_string.split())
            line[index] = normal_string
            if len(normal_string) == 0:
                is_correct_string = False
                break
        if is_correct_string:
            vacancies_list.append(line)


print('Самые высокие зарплаты:')  # "Highest salaries:"
get_list_by_salaries(vacancies_list, True)
print()
print('Самые низкие зарплаты:')  # "Lowest salaries:"
get_list_by_salaries(vacancies_list, False)
print()
print(f'Из {get_skills_count(vacancies_list)} скиллов, самыми популярными являются:')  # "Of N skills, the most popular are:"
get_top_mentioned_skills(vacancies_list)
print()
print(f'Из {get_cities_count(vacancies_list)} городов, самые высокие средние ЗП:')  # "Of N cities, the highest average salaries:"
get_cities_by_salaries(vacancies_list)
1,365
4,713
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.8125
3
CC-MAIN-2022-49
latest
en
0.174336
https://www.efunda.com/glossary/units/units--surface_tension--ounce_force_per_inch.cfm
1,718,584,612,000,000,000
text/html
crawl-data/CC-MAIN-2024-26/segments/1718198861674.39/warc/CC-MAIN-20240616233956-20240617023956-00221.warc.gz
667,824,485
6,671
Glossary » Units » Surface Tension » Ounce Force Per Inch

Ounce Force Per Inch (ozf/in) is a unit in the category of surface tension. It is also known as ounce force/inch. This unit is commonly used in the UK and US unit systems. Ounce Force Per Inch (ozf/in) has a dimension of MT-2, where M is mass and T is time. It can be converted to the corresponding standard SI unit N/m by multiplying its value by a factor of 10.945427205. Note that the seven base dimensions are M (Mass), L (Length), T (Time), Q (Temperature), N (Amount of Substance), I (Electric Current), and J (Luminous Intensity). Other units in the category of surface tension include Dyne Per Centimeter (dyn/cm), Erg Per Square Centimeter (erg/cm2), Erg Per Square Millimeter (erg/mm2), Kilogram Force Per Meter (kgf/m), Newton Per Meter (N/m), Ounce Force Per Foot (ozf/ft), Pound Force Per Foot (lbf/ft), Pound Force Per Inch (lbf/in), Poundal Per Foot (pdl/ft), and Poundal Per Inch (pdl/in). Additional Information: N/A
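The stated factor of 10.945427205 can be reproduced from the definitions of the ounce-force and the inch (a quick sketch using standard values, not taken from this page):

```python
# ozf/in -> N/m: one ounce-force in newtons divided by one inch in metres.
LBF_TO_N = 4.4482216152605    # standard pound-force in newtons
OZF_TO_N = LBF_TO_N / 16      # one ounce-force
INCH_TO_M = 0.0254            # one inch in metres

print(OZF_TO_N / INCH_TO_M)   # 10.945427205..., matching the factor above
```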
493
2,115
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.828125
3
CC-MAIN-2024-26
latest
en
0.839579
https://medium.com/caf%C3%A9-com-rodolfo-pinotti/the-universal-law-of-business-gravity-and-how-it-affects-every-aspect-of-your-company-6248607f043d
1,575,839,698,000,000,000
text/html
crawl-data/CC-MAIN-2019-51/segments/1575540514893.41/warc/CC-MAIN-20191208202454-20191208230454-00534.warc.gz
453,323,717
27,910
The universal law of “business gravity” and how it affects every aspect of your company.

Brainstorming. Dec 2, 2016 · 5 min read

I have been thinking a lot about growth strategies and how they affect business success, which ultimately means a scalable product created by happy employees, loved by customers, meaningful to society and profitable to our shareholders. Such a dream! Yes, I know it is incredibly difficult to achieve that, but we will all agree that it's possible to some extent. I recently celebrated one year at Trackmob, and along the way I've participated in almost every business-related activity. Billing, finance, accounts, sales, marketing, support... you name it. This holistic view of the business structure helped me to better understand how everything is interconnected and bound by a force that, although not visible, is as real as gravity: “Business Gravity”. What I call business gravity is a subtle force that is invisible but capable of driving change. For better or worse, this force is acting right now on your hiring process, sales, fundraising and even your business valuation. Everything. How? The force of gravity (F), as physics shows, is a relation between the masses of two objects (m1 and m2) and their distance (d): F = G · m1 · m2 / d². The greater the mass, the stronger the force. The opposite is true for the distance: as it grows, the force is sharply reduced, with the square of the distance. “G” is a constant value that multiplies the relation between masses and distance. Alright, let's play a little with that. I invite you to think about your company as an astronomic object. Let's say you are M1. How great do you think it would be? Small? Big? Now let's imagine another object that your company interacts with... What about an employee? Good, M2 is an employee. A very good one, the kind you don't want to lose. Question 1: what are the odds that this employee will keep “orbiting” your company in the near future? Let's try to answer this in terms of Business Gravity: Mass: how “attractable” is your company? Mass, in this case, might not be synonymous with revenue. We are talking about intrinsic attributes that make a company “attractive” to employees. People, culture, brand, coworkers. Consider yourself. What makes a company attractive/massive to you? Distance: how “far” away is the person? Distance concerns the state of the object. Maybe the person just got on board and has not yet had time to find affinity with the company's values. Whatever the reason, the employee has a position relative to your company. Let's remember that the distance is the denominator of the formula and this value must be squared. So let's imagine that your employee moves 20% away, or 100 “positions” away from the original position (500). What is the impact on the force? Observe that moving 20% away divides the force by 1.44 (that is, 1.2 squared): a reduction of roughly 31%. This inverse-square behaviour is clear to me in situations where small actions, whether positive or negative, can result in dramatic consequences. We can apply this framework to multiple situations. Imagine an early-stage startup seeking investment. Not only does its attractiveness matter (team, market, growth, and other intrinsic attributes) but also how well it is positioned and connected. Mass and distance will drive how investors orbit the startup. Ok, you've got the idea. Alright. Shall we move forward? Let's talk about moving objects. Business gravity in practice: motion and energy. Overall, most people are interested not only in analyzing objects, but in moving them.
This process can happen in two ways. Naturally, as your business “attracts” the object with its own mass, or artificially, which means that you will apply some sort of energy to bring it close to you. The concept of business gravity helped me a lot to think about causality and business strategy. I have watched several managers, investors and entrepreneurs apply a lot of energy to “artificially” change a scenario, which usually translates into burning fuel (money). Although it might be a strategy to leverage the company, quite often natural movements are far more effective and offer better returns over time. Why? Let's go back to the employee example. For some reason your talent wants to leave. Your first reaction is to offer him more money to stay. That's the easy way: moving objects by force and burning fuel. Another option would be to truly understand what is going on and invest in people development, connecting work, purpose and happiness. Difficult, but possible. That would require you to think carefully about onboarding, culture, learning, satisfaction, health, happiness, family, incentives and other complex issues. Once done, it will make your company more attractable not only to that unhappy employee, but to all employees. “(…) financial incentives play an important role in retention – but money alone won't do the trick. Praise from one's manager, attention from leaders, frequent promotions, opportunities to lead projects, and chances to join fast-track management programs are often more effective than cash”. Retaining key employees in times of change, McKinsey. You see, creating a massive business is difficult. It requires you to think. Burning money is easy and makes you lazy. That's the beauty of this analogy. People usually think they need a CFO to “count the rockets”, but what they actually need is someone to ask WHY: tell me why you need more rockets and I will probably convince you otherwise. If you do have a good destination, I will tell you how to use your rockets more efficiently to get there. If you really want to create a successful business, if you want to bring customers, employees and people close to you, you will have to learn how to burn rockets efficiently as you create mass, awareness and scale. You will have to figure out how to captivate not only one but all employees. You will have to learn how to bring on board not one but hundreds or thousands of customers. And you can't do all that just by burning fuel. That's paramount. Quoting Einstein: “Nothing happens until something moves.”
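The inverse-square arithmetic behind the essay's 20% example is a one-liner to verify (a sketch; the “positions” scale is the author's own invention):

```python
# What a 20% increase in distance does to an inverse-square force.
force = lambda d: 1.0 / d ** 2        # per unit of G * m1 * m2

drop = 1 - force(600) / force(500)    # 500 -> 600 positions, i.e. +20%
print(f'{drop:.1%}')                  # 30.6%: roughly a 31% loss of force
```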
1,266
6,136
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.53125
3
CC-MAIN-2019-51
latest
en
0.962564
https://web2.0calc.com/questions/which-postulate-or-theorem-can-be-used-to-prove-thatbcd
1,548,263,399,000,000,000
text/html
crawl-data/CC-MAIN-2019-04/segments/1547584334618.80/warc/CC-MAIN-20190123151455-20190123173455-00380.warc.gz
661,075,076
5,897
# Which postulate or theorem can be used to prove that △BCD is similar to △EFG?

Which postulate or theorem can be used to prove that △BCD is similar to △EFG?

ASA Similarity Theorem

AA Similarity Postulate

SAS Similarity Theorem

SSS Similarity Theorem

Feb 28, 2018

#1 Each side of △BCD is 2 times each side of △EFG. So by the SSS Similarity Theorem, △BCD ~ △EFG.

Feb 28, 2018
158
414
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.90625
3
CC-MAIN-2019-04
longest
en
0.59481
https://lbs-to-kg.appspot.com/5/af/2610-pond-in-kilogram.html
1,579,543,428,000,000,000
text/html
crawl-data/CC-MAIN-2020-05/segments/1579250599718.13/warc/CC-MAIN-20200120165335-20200120194335-00206.warc.gz
520,037,724
6,168
Pounds to Kilograms

# 2610 lbs to kg (2610 Pounds to Kilograms)

lbs = kg

## How do you convert 2610 pounds to kilograms?

2610 lbs × 0.45359237 kg/lbs = 1183.8760857 kg

## Convert 2610 lbs to common weights

Unit of measurement | Weight
--- | ---
Microgram | 1.1838760857e+12 µg
Milligram | 1183876085.7 mg
Gram | 1183876.0857 g
Ounce | 41760.0 oz
Pound | 2610.0 lbs
Kilogram | 1183.8760857 kg
Stone | 186.428571429 st
US ton | 1.305 ton
Tonne | 1.1838760857 t
Imperial ton | 1.1651785714 Long tons

## Alternative spellings

2610 lb te Kilogram, 2610 lb na Kilogram, 2610 lbs te Kilogram, 2610 lbs na Kilogram, 2610 Pond te Kilogram, 2610 Pond na Kilogram, 2610 Pond te kg, 2610 Pond na kg, 2610 lbs te kg, 2610 lbs na kg
273
675
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.53125
3
CC-MAIN-2020-05
longest
en
0.074006
http://www.ask.com/web?q=When+Adding+and+Subtracting+Rational+Expressions+Why+Do+You+Need+a+LCD&o=2603&l=dir&qsrc=3139&gc=1
1,501,041,690,000,000,000
text/html
crawl-data/CC-MAIN-2017-30/segments/1500549425751.38/warc/CC-MAIN-20170726022311-20170726042311-00179.warc.gz
365,148,603
16,670
Web Results Demonstrates the steps involved in adding rational expressions, comparing ... you'll be doing with rational expressions because, just like with regular fractions, you'll have to ... (For old folks like me, whenever you see "LCM", think "LCD", or " lowest ... To convert the "2/x" to the common denominator, I will need to multiply by ... Here are the steps required for Adding and Subtracting Rational Expressions: ... Step 7: Simplify or reduce the rational expression if you can. ... Step 4: Combine the fraction by adding or subtracting the numerators and keeping the LCD. Step 4 . You can always find a common denominator by multiplying the ... at 3:00 why did Sal make it into a difference of square, couldn't he have just left it as .... Edit: The exercise Adding and subtracting rational expressions 5 works now! 1 Vote. Do It Faster, Learn It Better. ... To add or subtract rational expressions with unlike denominators, first find the LCM of the ... The LCM of the denominators of fraction or rational expressions is also called least common denominator , or LCD. ... Since 3a and 4b have no common factors, the LCM is simply their product: 3a⋅4 b . www.montereyinstitute.org/courses/DevelopmentalMath/COURSE_TEXT2_RESOURCE/U15_L1_T3_text_final.html Adding rational expressions with the same denominator is the simplest place to start, so let's ... You know how to do this with numeric fractions. .... You need to multiply 15m2 by 7 to get the LCD, so multiply the entire rational expression by . Jul 17, 2011 ... In this tutorial we will be looking at adding and subtracting them. If you need a review on simplifying, multiplying and dividing rational expressions, .... Since the first rational expression already has the LCD, we do not need to ... Dec 15, 2009 ... In this tutorial we will be looking at adding and subtracting them. If you need a review on simplifying rational expressions, feel free to go back to Tutorial 8: ... Step 2: Write equivalent fractions using the LCD if needed. www.sparknotes.com/math/algebra2/rationalexpressions/section2.rhtml A summary of Adding and Subtracting Rational Expressions in 's Rational Expressions. ... In Pre-Algebra, we learned that fractions can be added or subtracted if and only if they have ... least common denominator--i.e. all the factors in the LCD that do not appear in each denominator ... QUIZ: Were you born in the wrong era?
552
2,417
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.3125
4
CC-MAIN-2017-30
longest
en
0.914339
http://www.triviala.com/quizzes/playquick/id/562
1,566,126,587,000,000,000
text/html
crawl-data/CC-MAIN-2019-35/segments/1566027313803.9/warc/CC-MAIN-20190818104019-20190818130019-00427.warc.gz
320,382,361
4,524
# Maths Skills Quiz

### Let's push that brain to the limit!

8 Questions. Created by: VirtuousFever. Played: 276 times. Comments: 1 comment. Favs: 0 users like this quiz. Rating: 3.8 out of 5, based on 23 votes.

1
• 1,000
• 10,000
• 5000

2
• 3.14
• 3.16
• 3.41

3
### What does the term Mode mean?
• Frequently occurring number
• The average number
• Half the sample

4
### What is the common way to multiply by ten?
• Add a zero to the end, move the decimal along
• Use a calculator and the M button

5
### What does the term cubed mean?
• To times by itself three times ( n x n x n )
• To times by itself twice ( n x n )
• To add 6

6
• 190
• 160
• 180

7
• 79.4
• 794
• 7.9

8
• 3 out of 6
• 2 out of 6
• 1 out of 6
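The answerable definitions above are easy to check mechanically. A tiny Python sketch with example values of my own choosing (not from the quiz):

```
from statistics import mode

print(mode([2, 3, 3, 5, 7]))  # mode = the most frequently occurring value -> 3
print(4 ** 3)                 # "cubed" = n x n x n -> 64
print(7.94 * 10)              # multiplying by ten shifts the decimal point -> 79.4
```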
277
794
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.75
3
CC-MAIN-2019-35
latest
en
0.831053
http://physicshelpforum.com/advanced-electricity-magnetism/12119-current-carrying-capacity.html
1,558,908,538,000,000,000
text/html
crawl-data/CC-MAIN-2019-22/segments/1558232259757.86/warc/CC-MAIN-20190526205447-20190526231447-00045.warc.gz
149,727,464
9,781
Physics Help Forum | Current Carrying Capacity

Aug 26th 2016, 05:17 AM  #1
Junior Member | Join Date: Aug 2016 | Posts: 1

Current Carrying Capacity

I am trying to solve for the current-carrying capacity of a wire in a glass-to-metal seal. I have been trying to use the Neher-McGrath formula to do so but have been unsuccessful. This may or may not be the appropriate formula for what I am trying to do. My end goal is to have a working understanding of a formula that will enable me to make a chart similar to the one titled "Current Capacities of Standard Kemlon Glass Seals (52 alloy) in Amperes with 50 Degree [Fahrenheit] Temperature Rise" on this page: Kemtite High Pressure Connectors Technical Information. The main difference is I want to be able to make a chart in Excel that will let me fill in the required properties and have Excel do the math to give me the current-carrying capacity for that setup.

What I have done so far: I have found the Neher-McGrath formula for current-carrying capacity (ampacity). The formula reads:

    I = sqrt( (T_c - (T_a - delta_T)) / (R_dc * (1 + Y_c) * R_ca) )

Sorry if that is hard to understand; here is a link to where I found it: Understanding the Neher-McGrath Calculation and the Ampacity of Conductors

where I = ampacity, T_c = conductor temperature, T_a = ambient temperature, delta_T = conductor temperature rise due to dielectric loss, R_dc = conductor DC resistance, Y_c = loss increment due to conductor skin effect, and R_ca = thermal resistance between conductor and ambient.

As I mentioned at the beginning of this post, the wire would be within a glass-to-metal seal. There is no conductor skin (unless air counts?), and I read that Y_c is negligible for wire sizes smaller than AWG 2. I also read that if the voltage will be less than 2000 V, delta_T is negligible. That said, I used the shortened version labeled "Equation 1", which omits those values.

For conductor temperature I was trying to match the chart given as an example, to verify I was performing the calculations correctly, so I used 35 degrees Celsius. I used 25 degrees Celsius as my ambient temperature. The conductor (52 alloy) DC resistivity is 0.00000429 ohm-mm, which I multiplied by the length of my pin and then divided by the cross-sectional area to get 0.001517277 ohms. The thermal resistance I was using (which is most likely in error) was that of the 52 alloy, 0.0132 watt/mm-Celsius, which I divided by the length to get 481.060606 watts-Celsius.

When I run the calculation for a 0.030-diameter pin as shown in the chart, I get an ampacity of 3.7, which is close to their 3.8. At this point I was feeling pretty good, but then as I tried the other pin diameters and temperatures listed in the chart, the numbers I got were way off the mark. Any help in understanding where I went wrong would be greatly appreciated. I will attach the Excel file I was working with as well.

Attached Files: Ampacity.xls (23.0 KB, 7 views)
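For reference, the simplified "Equation 1" calculation described in the post is a one-liner. A minimal Python sketch using the poster's own numbers (variable names are mine; the R_ca figure is the poster's and, as they note, dimensionally questionable):

```
import math

# Simplified Neher-McGrath ampacity with the dielectric-loss (delta_T)
# and skin-effect (Y_c) terms dropped: I = sqrt((Tc - Ta) / (Rdc * Rca))
Tc  = 35.0         # conductor temperature, deg C
Ta  = 25.0         # ambient temperature, deg C
Rdc = 0.001517277  # conductor DC resistance, ohms
Rca = 481.060606   # conductor-to-ambient thermal resistance (poster's value)

I = math.sqrt((Tc - Ta) / (Rdc * Rca))
print(round(I, 1))  # 3.7, matching the post's result for the 0.030 pin
```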
834
3,450
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.609375
4
CC-MAIN-2019-22
latest
en
0.939078
https://www.perlmonks.org/index.pl/?node_id=779302
1,590,500,991,000,000,000
text/html
crawl-data/CC-MAIN-2020-24/segments/1590347390758.21/warc/CC-MAIN-20200526112939-20200526142939-00052.warc.gz
848,235,789
10,170
PerlMonks

### Re: Hangman Assistant
on Jul 12, 2009 at 03:47 UTC

No comments on your code, but wouldn't the optimal strategy be to pick not the most common letter, but the letter that is closest to appearing in exactly half of the possible remaining words? That way you eliminate ~1/2 of the candidates in each turn. At least in your example, a letter never appears in more than half of the candidates, so the most frequent letter and the closest-to-half letter coincide. But imagine if you found out that the actual word contained Q. Then for sure the next most common letter will be a U; but guessing U and getting it right will not give you much new information. Aiming for half-and-half lets your correct and incorrect guesses both contribute information.

Replies are listed 'Best First'.

Re^2: Hangman Assistant
by Lawliet (Curate) on Jul 12, 2009 at 05:21 UTC

You make a good point, and I made the following change to my program:

```
# Find how close each letter is to half of the total word possibilities
# to ensure maximum gain every guess after being sorted
foreach my $occur (keys %alphabet) {
    $alphabet{$occur} = abs($#narrowed/2 - abs($alphabet{$occur} - $#narrowed + 1));
}
say $_ foreach @narrowed;   # Word list
say $#narrowed + 1;
# Sort ascendingly; whichever letter is closest to 0 will eliminate the most words.
say sort { $alphabet{$a} <=> $alphabet{$b} } keys %alphabet;
```

However, as I play, I notice that although it does eliminate a lot of words very quickly, when it gets down to a small number of words it becomes useless, telling me to guess letters that are not in any of the words, and telling me to guess letters that are in all of the words last. Surely, when it comes to this point, the user can easily guess on his own, but that is not really the point. I want the program to be able to find the individual word in a small number of guesses. Perhaps I should use your method when there are more than, say, 10 possibilities, and mine from there on out.

Example illustrating my point: I am kind of speaking to myself here, so this node is just publishing my own mental thoughts. Feel free to comment on or object to them.

I don't mind occasionally having to reinvent a wheel; I don't even mind using someone's reinvented wheel occasionally. But it helps a lot if it is symmetric, contains no fewer than ten sides, and has the axle centered. I do tire of trapezoidal wheels with offset axles. --Joseph Newcomer

"it becomes useless, telling me to guess letters that are not in any of the words"

That would indicate a bug in the implementation, not a problem in the approach as you claim.

Re^2: Hangman Assistant
by Limbic~Region (Chancellor) on Jul 13, 2009 at 00:41 UTC

I disagree with your strategy based on my limited knowledge of the game hangman. It is my understanding that the game continues until you have either revealed the word or made too many incorrect guesses. I think the best strategy then would be to guess the letter that appears in the most of the candidate words. I too haven't looked at the OP's code, but my strategy doesn't necessarily pick the most popular letter but the one that appears in the most words. If you are correct you get new information (the position of that letter) and are no closer to losing the game. If you are wrong, you eliminate the most possible wrong answers. I will code this strategy up to see how it does in a follow-on post.
Cheers - L~R

Those were my thoughts too. However, if the user picked an extremely common word, with common letters (which makes it a common word), then shaving off 50% each time is more helpful than eliminating the 10% that don't have the recommended, most common letter. Of course this is all theoretical and should be tested before changes are made (which I failed to do initially).

I don't mind occasionally having to reinvent a wheel; I don't even mind using someone's reinvented wheel occasionally. But it helps a lot if it is symmetric, contains no fewer than ten sides, and has the axle centered. I do tire of trapezoidal wheels with offset axles. --Joseph Newcomer

Lawliet,

"then shaving off 50% each time is more helpful than eliminating the 10% that don't have the recommended, most common letter"

That's not the way it works. At least not how I understand the game. So let's say you have a really common word with common letters and you pick the most common letter amongst the possible candidates. If you pick a letter that is correct, you gain new information (the position of that letter) and you don't get penalized for a wrong guess. As long as you keep guessing correct letters you can go on forever. Additionally, you still prune your candidate list because even though lots of words have the same letter - they don't all have them in the same position. If the letter is wrong, well then you purge the majority of the words in your candidate list.

You haven't convinced me my approach isn't superior to the binary search of blokhead's. It can easily be tested - just modify the code for each algorithm to output "win <total_guesses> <wrong_guesses>" or "lose" rather than the verbose output. Then write a wrapper script that tests both algorithms against 10_000 randomly selected words. You can then gather statistics on which algorithm produced the most wins and, for the wins, the average number of guesses required and the average number of wrong guesses used. Oh, and they should both "lose" after the same number of wrong guesses - my code defaults to 7 but is configurable.

Cheers - L~R

Re^2: Hangman Assistant
by Limbic~Region (Chancellor) on Jul 13, 2009 at 18:05 UTC

Some more thoughts: Your strategy is effectively a binary search. Assuming a perfect distribution (it is always possible to find a letter that is in exactly half of the remaining words), you should be able to guess (on average) up to twice as many times as you are allowed wrong guesses before losing the game. Of course, the last guess still has a 50% chance of being wrong, so I believe your strategy is guaranteed to work when the total number of initial candidates is 2^(2G - 1) or less, where G represents the number of allowed wrong guesses. For instance, when G = 7 you are guaranteed (100%) to win when the initial number of candidates is 8192 or less, and have a 50% chance when it is up to 16,384. For "eclectic", there are 9638 initial words in my word list, so only a 50% chance of success.

My strategy is optimized not to guess wrong. After only 5 guesses (2 right and 3 wrong) it had narrowed the search space down to exactly 1 word. This is because I prune by position, not just by presence of letter, so even successful guesses on a popular letter can still effectively decrease the search space. Your approach would be improved with this strategy as well.

I don't think the opportunities stop there. In a private /msg with Lawliet the idea of finding the solution with the least number of guesses was proposed.
I indicated that my strategy would change - but to what? A binary search is already optimal. There needs to be some balance, then, between improving your odds from 50/50 of guessing wrong while still effectively reducing your search space each time. This way you can survive long enough to win but not guess ad nauseam. I am kicking around some ideas where you still look for a very popular letter (say in 70% of the remaining words) but one that, by position, would still split that 70% in half if you guessed right. The result then would be that you would guess wrong 30% of the time and remove 70% of your search space, or guess right 70% of the time but reduce your search space by 65%. Does this sound viable to you?

Cheers - L~R

That makes sense. I hadn't considered also taking into account the positions of the letters when you guess a correct letter. So in my example, when the word has a Q, and you guess U, you can still potentially get some useful information if many of the candidate words have U appearing in different places (i.e., you could distinguish QUEUE from QUEST). In this case, it would be "best" (if minimizing the total # of guesses) to try to choose a letter whose absence/presence at all positions will partition the candidates into a large number of sets, each with size as small as possible.

Update: expanding with an example: Suppose the word to be guessed matches S T _ _ _. Then suppose we are considering E for our next guess. All of the candidate words will then fall into one of these 8 classifications:

```
S T _ _ _   (no E in the word)
S T _ _ E   (E in last position ONLY)
S T _ E _   (etc..)
S T _ E E
S T E _ _
S T E _ E
S T E E _
S T E E E
```

So we have 8 buckets, and we put all of the candidate words into the appropriate bucket. Suppose the bucket with the most words has n words in it. Then in the worst case, after guessing E, we will have n remaining candidates. So you can take n to be the worst-case score of guessing E. Now compute this score for every letter, and take the letter with the lowest score.

Note that there might be other ways to score each possible next-letter guess. Number of non-empty buckets comes to mind as an "average case" measure (to be maximized). Again, this is all assuming we're minimizing the total number of guesses. That way, all of the possible outcomes (i.e., the guessed letter appears or doesn't appear in the word) are treated the same. To minimize the number of wrong guesses, you have to treat the "doesn't appear in the word" outcome differently and weight things in some better way.

Re^2: Hangman Assistant
by JavaFan (Canon) on Jul 13, 2009 at 14:42 UTC

The goal of hangman isn't to minimize the number of guesses (if it was, your approach would make sense), but to minimize the number of wrong guesses. Or, to be more specific, to have no more than a set number of incorrect guesses. That actually means that the hardest hangman games are where you have to guess short words. Given that /usr/share/dict/words has 23 three-letter words ending in "at" (only "aat", "iat" and "uat" are missing), only luck determines whether you "win" guessing a word like "mat", "cat" or "hat".
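The bucket-scoring idea described above is easy to prototype. A minimal Python sketch (mine, not the thread's Perl; function names are illustrative) that scores each candidate letter by its worst-case bucket size and picks the best:

```
from collections import defaultdict

def worst_case_score(words, letter):
    """Bucket the candidate words by the set of positions at which
    `letter` occurs; the largest bucket is the worst-case number of
    candidates that survive guessing that letter."""
    buckets = defaultdict(int)
    for w in words:
        pattern = tuple(i for i, ch in enumerate(w) if ch == letter)
        buckets[pattern] += 1
    return max(buckets.values())

def best_guess(words, tried=frozenset()):
    """Untried letter whose worst-case bucket is smallest."""
    letters = set("".join(words)) - set(tried)
    return min(sorted(letters), key=lambda c: worst_case_score(words, c))

words = ["steed", "steel", "steep", "stole", "stone", "stove"]
print(best_guess(words))  # 'e': its largest bucket holds 3 of the 6 words
```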
2,486
10,611
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.71875
3
CC-MAIN-2020-24
latest
en
0.934293
http://lessonplanspage.com/mathgraphunit2usingscatterplots7-htm/
1,519,283,294,000,000,000
text/html
crawl-data/CC-MAIN-2018-09/segments/1518891814036.49/warc/CC-MAIN-20180222061730-20180222081730-00296.warc.gz
199,727,241
29,259
# Putting Scatter Plots to Use

Subject: Math 7

Title – Putting Scatter Plots to Use
By – Kristin Reeves
Primary Subject – Math
Multimedia Graphing Unit Contents:

Content: Putting our new knowledge of scatter plots to use.

Benchmarks:

D.AN.07.04 Create and interpret scatter plots and find the line of best fit; use an estimated line to answer questions about the data.

4.b.1. Students create a project (e.g., presentation, web page, newsletter, information brochure) using a variety of media and formats (e.g., graphs, charts, audio, graphics, video) to present content information to an audience.

Learning Resources and Materials:

• Worksheet with a blank chart and a blank scatter plot.

Development of Lesson:

Introduction: The previous lesson "Making Scatter Plots" is the introduction to this lesson.

Methods/Procedures:

1. The class will be divided into groups.
2. If available, we will go into the gym or to an outside basketball court. Each group will be assigned to a basketball hoop. The number of groups will correspond to the number of hoops available.
3. Each group member will be given five turns to make a basket, each turn taking a step away from the basket. (These increments should be marked on the floor to make everyone equal.)
4. The groups will record in their chart how many people make baskets at each distance.
5. Then, back in the classroom, the groups will input their data into Microsoft Excel and turn it into a scatter plot (a scripted equivalent is sketched after the worksheet below).
6. Looking at their data, they will make observations about the trend(s).
7. Finally, we will compare our results as a class.

For my two students with Down Syndrome, I will work with them personally to record their results. They will get a smaller net and a chance for one-on-one learning.

Assessment/Evaluation: The group presentations will be graded.

Closure: In our final test on graphing, there will be a section on scatter plots. Performances on this test will prove how well the students have absorbed this material and whether or not these lessons have been effective.

Worksheet:

Name _________________________________ Hour ___________ Date __________________

Group Number _______ Group Members ____________________________________________

Directions:
1. Record each group member's name in the spaces provided below.
3. Complete the scatter plot below with the appropriate labels.

Name | Trial One | Trial Two | Trial Three | Trial Four | Trial Five

__________________________________ (Title)
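The scatter plot from step 5 can also be produced outside Excel in a few lines. A matplotlib sketch with made-up sample numbers (illustrative only, not student results):

```
import matplotlib.pyplot as plt

steps = [1, 2, 3, 4, 5]          # distance: one step further back per turn
baskets_made = [5, 4, 4, 2, 1]   # illustrative group totals at each distance

plt.scatter(steps, baskets_made)
plt.xlabel("Distance from basket (steps)")
plt.ylabel("Baskets made by group")
plt.title("Baskets Made vs. Distance")
plt.show()
```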
510
2,496
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.890625
4
CC-MAIN-2018-09
longest
en
0.894171
https://core-cms.prod.aop.cambridge.org/core/books/game-theory/mixed-strategies/07B713347A3F1F8AA948366A962FBC1B
1,620,723,317,000,000,000
text/html
crawl-data/CC-MAIN-2021-21/segments/1620243991904.6/warc/CC-MAIN-20210511060441-20210511090441-00112.warc.gz
204,812,338
18,517
Print publication year: 2013 • Online publication date: March 2013

# 5 - Mixed strategies

## Summary

Given a game in strategic form we extend the strategy set of a player to the set of all probability distributions over his strategies. The elements of the new set are called mixed strategies, while the elements of the original strategy set are called pure strategies. Thus, a mixed strategy is a probability distribution over pure strategies. For a strategic-form game with finitely many pure strategies for each player we define the mixed extension of the game, which is a game in strategic form in which the set of strategies of each player is his set of mixed strategies, and his payoff function is the multilinear extension of his payoff function in the original game. The main result of the chapter is the Nash Theorem, which is one of the milestones of game theory. It states that the mixed extension always has a Nash equilibrium; that is, a Nash equilibrium in mixed strategies exists in every strategic-form game in which all players have finitely many pure strategies. We prove the theorem and provide ways to compute equilibria in special classes of games, although the problem of computing Nash equilibrium in general games is computationally hard. We generalize the Nash Theorem to mixed extensions in which the set of strategies of each player is not the whole set of mixed strategies, but rather a polytope subset of this set.
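The multilinear extension mentioned in the summary has a compact closed form; the notation below is standard but mine, not quoted from the book. In LaTeX:

```
% Player i's payoff in the mixed extension: the expected value of the
% pure payoff u_i over independent draws from each player's mixture.
U_i(\sigma_1, \dots, \sigma_n)
  = \sum_{s \in S_1 \times \cdots \times S_n}
    \Bigl( \prod_{j=1}^{n} \sigma_j(s_j) \Bigr)\, u_i(s)
```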
318
1,594
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.515625
3
CC-MAIN-2021-21
latest
en
0.938265
http://www.solutioninn.com/comfort-company-produces-leather-office-chairs-the-standard-cost-per
1,508,497,828,000,000,000
text/html
crawl-data/CC-MAIN-2017-43/segments/1508187824068.35/warc/CC-MAIN-20171020101632-20171020121632-00802.warc.gz
561,570,901
8,375
Comfort Company produces leather office chairs. The standard cost per chair is as follows:

Direct materials: $35
Direct labour: $15
Variable overhead (2 machine-hours at $4.00)*: $8
Fixed overhead (2 machine-hours at $8.00)*: $16
Total standard cost per chair: $74

*Overhead rates are based on a denominator activity level of 30,000 machine-hours.

During 2015, Comfort Company produced and sold 12,000 office chairs. Management believes that the denominator level of activity represents 75% of theoretical capacity and 80% of practical capacity.

Required:
1. Calculate the total overhead costs at the following levels of activity: theoretical, practical, denominator, and actual (2015); a sketch of this calculation follows below.
2. Assuming Comfort Company can sell all of the chairs it can produce for $100 per unit, calculate the opportunity loss of producing 12,000 chairs in 2015 compared to the following capacity utilization alternatives: theoretical, practical, and denominator.
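Requirement 1 is pure arithmetic once the machine-hours are backed out of the stated percentages. A minimal Python sketch (this reading of the problem is mine, not the textbook's solution):

```
var_rate, fixed_rate, denom_mh = 4.00, 8.00, 30_000
fixed_total = fixed_rate * denom_mh   # 240,000; fixed overhead does not vary

hours = {
    "theoretical": denom_mh / 0.75,   # 30,000 is 75% of theoretical -> 40,000 MH
    "practical":   denom_mh / 0.80,   # 30,000 is 80% of practical  -> 37,500 MH
    "denominator": denom_mh,          # 30,000 MH
    "actual 2015": 12_000 * 2,        # 12,000 chairs x 2 MH -> 24,000 MH
}
for level, mh in hours.items():
    print(f"{level}: {fixed_total + var_rate * mh:,.0f}")
```

Under this reading the totals come to $400,000, $390,000, $360,000 and $336,000 respectively.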
374
1,340
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.734375
3
CC-MAIN-2017-43
latest
en
0.641528
http://www.cham.co.uk/phoenics/d_earth/d_core/gxblin.for
1,716,879,669,000,000,000
text/html
crawl-data/CC-MAIN-2024-22/segments/1715971059078.15/warc/CC-MAIN-20240528061449-20240528091449-00426.warc.gz
31,376,310
27,656
c
C file-name GXBLIN.HTM 200623
C!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
C-------------------------------------------------------------------------
      SUBROUTINE GXBLIN
C-------------------------------------------------------------------------
C
C   PHOENICS V2020
C   Author: M.R.Malin / J C Ludwig
C   Date: 24/09/04 - 16/04/20
C
C Subroutine GXBLIN is called from Group 1 Section 1 and Group 13 Section
C 19 of GREX3 {using the logical condition IF(NPATCH(1:4=='BLIN')}.
C
C The purpose of this subroutine is to set mass-flow boundary
C conditions for the momentum and k-e (or k-f) equations so as to
C define a boundary-layer profile at domain boundaries. At present an
C option is provided only for a neutral atmospheric boundary layer
C (wind profile) using either a logarithmic or power-law velocity
C profile, as follows:
C
C   Q = {QTAU/Kappa}*ln((z-d)/zo)   or   Q = Qref*(z/zref)^a
C
C   k = QTAU^2/sqrt(cmucd)
C
C   e = QTAU^3/(K*[z-d])   or   f = QTAU/(Kappa*sqrt(cmucd)*(z-d))
C
C where the friction velocity QTAU is given by:
C
C   Q* = Qref*K/log([zref-d]/zo)
C
C and
C
C Kappa = 0.41, zo is the effective roughness height of the ground terrain,
C d is the displacement height, Qref is the reference velocity at the
C reference height zref, a is the power-law exponent, z is the vertical
C coordinate, and Q is the velocity at the height z from the ground.
C
C The displacement height d is a positive quantity, although the user
C can set d=-z0 to create the inlet wind profiles proposed by Richards
C & Hoxey (1993), which are often used in wind-engineering simulations.
C ( see Richards,P.J. & Hoxey,R.P., "Appropriate boundary conditions for
C computational wind engineering models using the k-e turbulence model."
C J.Wind Engng & Industrial Aerodynamics, 46 & 47, 145-153, (1993) ).
C
C The vertical coordinate is taken to start at the lower edge of the first
C cut or fully-open cell in each column of an inlet plane.
C
C The facility requires that the user defines a PATCH whose name starts
C with the 4 characters BLIN at the inlet boundary, such as for example:
C
C   PATCH(BLIN1,LOW,1,NX,1,NY,1,1,1,1)
C   COVAL(BLIN1,P1,FIXFLU, GRND7)
C   COVAL(BLIN1,U1,0.0 , GRND7)
C   COVAL(BLIN1,V1,0.0 , GRND7)
C   COVAL(BLIN1,W1,0.0 , GRND7)
C   COVAL(BLIN1,KE,0.0 , GRND7)
C   COVAL(BLIN1,EP,0.0 , GRND7)   or   COVAL(BLIN1,OMEG,0.0 , GRND7)
C   COVAL(BLIN1,TEM1,0.0 , GRND7)
C
C In addition, the following SPEDAT statements must be set in the Q1
C input file:
C
C   SPEDAT(SET,BLIN1,VELX,R,Wx)    : x-component of Qref
C   SPEDAT(SET,BLIN1,VELY,R,Wy)    : y-component of Qref
C   SPEDAT(SET,BLIN1,VELZ,R,Wz)    : z-component of Qref
C   SPEDAT(SET,BLIN1,TAIR,R,Tair)  : Air temperature
C
C   SPEDAT(SET,BLIN1,VDIR,C,Y)     : vertical coordinate direction
C   SPEDAT(SET,BLIN1,RHOIN,R,1.189): inlet density presumed uniform
C   SPEDAT(SET,BLIN1,BLTY,C,LOGL)  : profile type, i.e. LOGL, POWL or TABLE
C   SPEDAT(SET,BLIN1,ZO,R,2.0E-02) : effective roughness height
C   SPEDAT(SET,BLIN1,REFH,R,10.0)  : ref. height for ref. velocity
C   SPEDAT(SET,BLIN1,ALPHA,R,0.21) : power-law index for power-law profile
C   SPEDAT(SET,BLIN,U-TABLE,C,file_name) : name of velocity profile table for
C                                          tabular profile
C   SPEDAT(SET,BLIN,K-TABLE,C,file_name) : name of turbulent kinetic energy
C                                          profile table for tabular profile
C
C NOTE: for tabular profiles, it is the user's responsibility to ensure that
C the tables cover the entire height of the domain.
C If any cells are found below the lowest point in the table or above
C the highest point in the table, the first or last values will be used.
C In between, linear interpolation will be used to obtain values at the
C local height above the terrain. It is the user's responsibility to
C ensure that the roughness height used in the fully-rough wall function
C is consistent with the velocity and turbulent kinetic energy profiles.
C
C The table should contain a header line (giving the titles of the columns)
C followed by pairs of values in free format with blank or comma as
C separators. The first value is the height, the second is the velocity or
C turbulent kinetic energy. The two tables do not have to contain the same
C number of data points.
C
C The PIL variable WALLB is used for the displacement height.
C
C To calculate external values at fixed pressure boundaries, the COVAL for P1
C can be replaced with a normal pressure boundary condition:
C   COVAL(BLIN1,P1,COef, 0.0)
C Note that the external pressure is always kept at zero (relative to PRESS0),
C and PRESS0 is updated to the current atmospheric pressure.
C
C In order for the BLIN patch to automatically detect inflow/outflow based
C on wind direction and switch between fixed mass flow and fixed pressure,
C the COVAL for P1 should be:
C   COVAL(BLIN1,P1,GRND8, GRND7)
C
C At the SKY boundary, in addition to the fixed pressure condition, the diffusive
C vertical link can be introduced by using GRND7 as the COefficient for velocities
C and turbulence variables. The SKY patch is recognised because it is normal to
C the vertical coordinate direction.
C   COVAL(BLIN1,P1, COef, Pext)
C   COVAL(BLIN1,U1, GRND7, GRND7)
C   COVAL(BLIN1,V1, GRND7, GRND7)
C   COVAL(BLIN1,W1, 0.0, 0.0)  [the vertical component does not need a COVAL]
C   COVAL(BLIN1,KE, GRND7, GRND7)
C   COVAL(BLIN1,EP, GRND7, GRND7)   or   COVAL(BLIN1,OMEG, GRND7, GRND7)
C   COVAL(BLIN1,TEM1,GRND7, GRND7)
C
C The transient variation of wind speed, direction and air temperature may be taken
C from a weather data file in EPW format. In this case, the name of the data file
C is passed as:
C   SPEDAT(SET,'BLIN','WEATHERFILE',C,filename)
C
C The data required for the current run will have been extracted from the weather file
C by the pre-processor. The number of lines of data sent is set as:
C   SPEDAT(SET,'BLIN','NLINES',I,nlines)
C
C The number of entries per hour (defaulted to 1) is set by:
C   SPEDAT(SET,'BLIN','NPHOUR',I,nphour)
C
C Each line of data contains the wind speed, wind direction and temperature:
C   SPEDAT(SET,'BLIN','LINEn',C, wspeed,wdir,wtemp)
C
C The direction of the domain relative to North must also be specified as
C   SPEDAT(SET,'BLIN','AXDIR',R,axdir)
C where axdir is the angle between Y and North (for up Z).
C
C As an alternative to using a weather file, multiple WIND objects can be specified
C in a transient. The start and end times must be set to prevent overlap.
C
C If STORE(WAMP) appears in the Q1, the wind amplification factor, defined
C as absolute velocity / wind speed at 1.5m, will be calculated and placed in WAMP.
C If VABS has been stored, that will be used for the absolute velocity. If it
C is not stored, the absolute velocity will be calculated locally. The reference
C height for WAMP can be changed using the following spedat:
C   SPEDAT(SET,'BLIN','WAMPH',R,wamph)
C If the line is absent, a height of 1.5m will be assumed.
C C Pasquill Stability-Class Inlet Profiles C C Class-F profiles C Q = (QTAU/Kappa)*(ln((z-d)/zo) - PSIU) C T = To - (g/Cp)*(Z-ZTo) + (T*/kappa)*(ln((z-d)/zo) - PSIT) C k = {QTAU^2/sqrt(cmucd)} * SQRT(1. - ZETA/PHI) C e = {QTAU^3/(K*[z-d])} *PHI*(1.-ZETA/PHI) or C f = {QTAU/(Kappa*sqrt(cmucd)*(z-d))} * PHI C C where PSIU = -5*z/LMO Monin-Obukhov (MO) Similarity parameter C PSIT = -5*z/LMO MO Similarity parameter for heat C PHI = 1. + 5*z/LMO MO Similarity parameter for turbulence C LMO = MO length scale C T* = -Qw/(rho*Cp*QTAU) Friction temperature C C This facility requires the following SPEDAT statements to be set in the Q1 C input file: C SPEDAT(SET,BLIN,MOLEN,I,MOLEN) : MOLEN = 0 User-specified Monin-Obukhov length C = 1 TNO formulae C = 2 PHAST formulae C SPEDAT(SET,BLIN,ITPRO,I,ITPRO) : ITPRO = 0 No Pasquill profile, so neutral C = 1 Class A: Very unstable (LMO<0) C = 2 Class B: Moderately unstable (LMO<0) C = 3 Class C: Slightly unstable (LMO<0) C = 4 Class D: Neutral (LMO=0) C = 5 Class E: Slightly stable (LMO>0) C = 6 Class F: Moderately stable (LMO>0) C C Nb: ITPRO=4 is neutral with a uniform temperature profile, although this may be used C in the future to code a neutral logarithmic temperature profile. C C SPEDAT(SET,BLIN1,GT0 ,R,To) : T0 C SPEDAT(SET,BLIN1,GTZ0 ,R,ZTo) : Z height for T0 C SPEDAT(SET,BLIN1,QWALL,R,Qw) : heat flux (computed in pre-processor for terrain heat flux) C C Limitations: C No provision has been made as yet for scalar profiles, and the logarithmic C profile is restricted to the fully-rough wall law, as encountered in the C atmospheric boundary layer. At present, the GXBLIN facility cannot be used C for BFC=T, GCV=T or CCM=T. C C------------------------------------------------------------------------ USE weather_data INCLUDE 'farray' INCLUDE 'patnos' INCLUDE 'parallel' INCLUDE 'objnam' INCLUDE 'd_earth/parvar' INCLUDE 'patcmn' INCLUDE 'satear' INCLUDE 'grdloc' INCLUDE 'satgrd' INCLUDE 'grdear' INCLUDE 'grdbfc' INCLUDE 'd_earth/pbcstr' common/topinf/topid,idx,idy,idz common/comlnk/llnk,hlnk,slnk,nlnk,elnk,wlnk integer topid,llnk,hlnk,slnk,nlnk,elnk,wlnk INCLUDE 'parear' common/wdomin/nxsd,nysd,nzsd COMMON /INTDMN/IDMN,NUMDMN,NXMAX,NYMAX,NZMAX,NXYMAX,NXYZMX,NCLTOT, 1 NDOMMX COMMON/GENI/NXNY,IGFIL1(8),NFM,IGF(21),IPRL,IBTAU,ILTLS,IGFIL(15), COMMON/DRHODP/ITEMP,IDEN/DVMOD/IDVCGR COMMON/NAMFN/NAMFUN,NAMSUB CHARACTER*6 NAMFUN,NAMSUB COMMON /FACES/L0FACE,L0FACZ COMMON /WORDI1/ NWDS,NCHARS(20),NSEMI,H,NLINES INTEGER H COMMON /WORDL1/ ERROR,MORWDS COMMON /WORDC1/ WD(20),INLINE LOGICAL ERROR,MORWDS CHARACTER WD*20, INLINE*120,CH1*13,CH2*13,CHGR_13*13,DELIM*1 COMMON/TSKEMI/ LBKP,LBKT,LBET,LBVOSQ,LBOMEG COMMON/LRNTM4/RSCMCD,CDDAK2,TAUDKE,RTTDKE,AKC,EWC COMMON/KWMOD3/CWWALL,SIGK1,SIGK2,SIGW1,SIGW2,CWA1,CWA2,CWB1,CWB2, 1 BETAST,CDSIG COMMON/KWMOD1/LBKWF1(7),LBBF1,LBBF2,LBBF3,LBSIGK,LBSIGW,LBKWF2(5) COMMON/RHILO/HI3D,RLO3D COMMON/IHILO/IXHI3D,IYHI3D,IZHI3D,IXLO3D,IYLO3D,IZLO3D,IMAXC(150) COMMON/PASQLF/ITPRO,PASQBUOY,BUOSSG,MOLEN LOGICAL PASQBUOY,BUOSSG 1 LWP,SWP,WWP,LTZ,SHOWCOMF,WEICOEF,NEN,LPAR,LAWS,NEZ, 1 LGWC CHARACTER*14 CTEMP*8,BLTY*5,VDIR*1,LINE*256,CVAL*68,SITENAME*68 CHARACTER*8 TYPECOMF,WINDFILE*256,TERRNAM*12,COBNM2*12, 1 VELFILE*256,KEFILE*256,DLIM*4 CHARACTER PASQUILL*16 PARAMETER (MAXDMN = 10) INTEGER L0IPAT(MAXDMN),L0VELX(MAXDMN),L0VELY(MAXDMN), 1 L0VELZ(MAXDMN),L0QREF(MAXDMN),L0VDIR(MAXDMN), 1 L0ZREF(MAXDMN),L0HREF(MAXDMN),L0BLTY(MAXDMN), 1 L0DENS(MAXDMN),L0POWR(MAXDMN),L0H, 1 NBLIN(MAXDMN), L0SKY(MAXDMN),L0TAIR(MAXDMN), 1 
L0PCOE(MAXDMN),L0PVAL(MAXDMN),L0RELH(MAXDMN), 1 L0WMP(MAXDMN),L0HIWAF(MAXDMN),L0VAB(MAXDMN) REAL, ALLOCATABLE :: BUFF(:) INTEGER, ALLOCATABLE :: L0HI(:,:) REAL, ALLOCATABLE :: WNDA(:,:),VEL_TAB(:,:), KE_TAB(:,:) REAL NORML(3) REAL, ALLOCATABLE :: LTHRESH(:), LUTHR(:),LPRO(:,:) REAL THRESHD(7),UTHRD(7) LOGICAL*4 EXISTS COMMON/IPRB/LBPRB1,LBPRB2,LBPRB3,LBPRB4,LBPRB5,LBPRB6, 1 L0PRB1,L0PRB2,L0PRB3,L0PRB4,L0PRB5,L0PRB6 INTEGER LBPRB(6),L0PRB(6) EQUIVALENCE(LBPRB(1),LBPRB1),(L0PRB(1),L0PRB1) SAVE L0IPAT,L0VELX,L0VELY,L0VELZ,L0QREF,L0VDIR,L0ZREF,L0HREF, 1 L0BLTY,L0DENS,L0POWR,L0HI,L0H,NBLIN,L0SKY,NL,L0TAIR, 1 L0PCOE,L0PVAL,L0RELH,IFLO,IHUM,IHUNIT,LBWAMP,LBVABS, 1 IBLIN,INIBUOY,LBCP,LBPRO,LBDAN,LBNEN,ISECT,NBINS,L0WMP, 1 L0WAMP,LBWAF,L0HIWAF,IERR1,IERR2,LBVAV,L0VAB, 1 NLINEV,NLINEK,LBLAWS,NCRIT,LBWAT,LBRHIN,LBTREF,L0TREF0, 1 L0PIN0,L0RHIN0,LBPIN SAVE AXDIR, RHOIN,WAMPVR, WMAST,WNDA,UTHR,DTHR,SECTP,VSECT, 1 AW,AKW, WDIRN, ANG, GT0,GZT0,GALR,GQWALL,GLMO SAVE LTHRESH,LUTHR,LPRO SAVE SHOWCOMF,WEICOEF,NEN,LAWS SAVE TYPECOMF SAVE VEL_TAB, KE_TAB, DLIM, VDIR DATA THRESHD /0.06, 0.02, 0.04, 0.06, 0.01,0.0, 0.0 / DATA UTHRD /10.95, 10.95, 8.25, 5.6, 5.6, 0.0, 0.0 / c*********************************************************************** c IF(USP) RETURN IXL=IABS(IXL) NAMSUB='GXBLIN'; NAMFUN=' ' C***************************************************************** C--- GROUP 1. Preliminaries IF(IGR==1) THEN DLIM(1:1)=' '; DLIM(2:2)=','; DLIM(3:3)=';'; DLIM(4:4)=CHAR(9) C C... Group 1 Section 1 C ================= IF(ISC==1) THEN IF(.NOT.NULLPR) THEN CALL WRYT40('GXBLIN of: 200623') CALL WRIT40('GXBLIN of: 200623') ENDIF CALL GXMAKE(L0H,NXNY,'HDIS') ! storage for grid node heights C C... Group 1 Section 2 C ================= ELSEIF(ISC==2) THEN 100 CONTINUE IF(IDMN>1) CALL INDXDM ! get indices for current FGV domain IF(NUMDMN>MAXDMN) THEN CALL WRITBL CALL WRITST CALL WRIT40('Please increase MAXDMN in GXBLIN') CALL WRIT2I('Current ',MAXDMN,'; Needed',NUMDMN) CALL WRITST CALL SET_ERR(557,'MAXDMN too small in GXBLIN',1) RETURN ENDIF C... Count how many BLINs NBLIN(IDMN)=0 DO IP=1,NUMPAT IF((NAMPAT(IP)(1:4)=='BLIN'.OR.NAMPAT(IP)(1:4)=='GXBL') 1 .AND.LPTDMN(IP)) NBLIN(IDMN)=NBLIN(IDMN)+1 ENDDO C IF(NBLIN(IDMN)==0) RETURN ! no BLINs so nothing to do IF(.NOT.ALLOCATED(L0HI)) THEN ALLOCATE(L0HI(MAXDMN,NBLIN(IDMN)),STAT=IERR) IF(IERR==0) THEN L0HI=0 ELSE CALL WRITBL CALL WRITST CALL WRIT40('Memory allocation error in GXBLIN') CALL WRITST CALL SET_ERR(557,'Memory allocation error in GXBLIN',1) RETURN ENDIF ENDIF C... check for humidity and VABS IHUM=0; IHUNIT=2; LBVABS=0 DO IPH=NPHI,16,-1 IF(NAME(IPH)=='MH2O') THEN IHUM=IPH CALL GETSDI('BLIN','HUNIT',IHUNIT) ELSEIF(NAME(IPH)=='VABS') THEN LBVABS=IPH ENDIF ENDDO C... 
Allocate storage for BLIN parameters CALL GXMAKE0(L0IPAT(IDMN),NUMPAT,'L0IPAT') CALL GXMAKE(L0VELX(IDMN),NBLIN(IDMN),'L0VELX') CALL GXMAKE(L0VELY(IDMN),NBLIN(IDMN),'L0VELY') CALL GXMAKE(L0VELZ(IDMN),NBLIN(IDMN),'L0VELZ') CALL GXMAKE(L0QREF(IDMN),NBLIN(IDMN),'L0QREF') CALL GXMAKE(L0TAIR(IDMN),NBLIN(IDMN),'L0TAIR') CALL GXMAKE(L0PCOE(IDMN),NBLIN(IDMN),'L0PCOE') CALL GXMAKE(L0PVAL(IDMN),NBLIN(IDMN),'L0PVAL') CALL GXMAKE(L0RELH(IDMN),NBLIN(IDMN),'L0RELH') CALL GXMAKE(L0VDIR(IDMN),NBLIN(IDMN),'L0VDIR') CALL GXMAKE(L0ZREF(IDMN),NBLIN(IDMN),'L0ZREF') CALL GXMAKE(L0HREF(IDMN),NBLIN(IDMN),'L0HREF') CALL GXMAKE(L0BLTY(IDMN),NBLIN(IDMN),'L0BLTY') CALL GXMAKE(L0DENS(IDMN),NBLIN(IDMN),'L0DENS') CALL GXMAKE(L0POWR(IDMN),NBLIN(IDMN),'L0POWR') CALL GXMAKE0(L0SKY(IDMN), NBLIN(IDMN),'L0SKY') IF(LBVABS<=0) CALL GXMAKE(L0VAB(IDMN),NX*NY,'VABS') C... Get Weather data EPWNAM=' ';CALL GETSDC('BLIN','WEATHERFILE',EPWNAM) NL=0 LOCATION=' '; CALL GETSDC('BLIN','LOCATION',LOCATION) ! Location CALL GETSDI('BLIN','NLINES',NL) NPHOUR=1; CALL GETSDI('BLIN','NPHOUR',NPHOUR) IF(NL>0) THEN ALLOCATE(WSPD(NL),WDIR(NL),AIRTEMP(NL),ATMPRES(NL), 1 RELHUM(NL),STAT=IERR) WSDP=0.; WDIR=0.; AIRTEMP=0.; ATMPRES=0.; RELHUM=0. DO I=1,NL WRITE(LINE,'(''LINE'',I4)') I LL=8; CALL REMSPC(LINE,LL) CALL GETSDC('BLIN',LINE(1:LL),CVAL) LL=LENGZZ(CVAL) CALL SPLTZZZ(CVAL,WD,NWDS,NCHARS,LL,DLIM,20) WDIR(I)=RRDZZZ(1);WSPD(I)=RRDZZZ(2);AIRTEMP(I)=RRDZZZ(3) CALL GETSDC('BLINA',LINE(1:LL),CVAL) LL=LENGZZ(CVAL) CALL SPLTZZZ(CVAL,WD,NWDS,NCHARS,LL,DLIM,20) ATMPRES(I)=RRDZZZ(1); RELHUM(I)=RRDZZZ(2) ENDDO ENDIF ENDIF WDIRN=-999.0; CALL GETSDR('BLIN','WDIR',WDIRN) AXDIR=0.0; CALL GETSDR('BLIN','AXDIR',AXDIR) INIBUOY=1; CALL GETSDI('BLIN','INIBUOY',INIBUOY) C... Now loop over BLINs, extract parameters and save NBLIN(IDMN)=0 DO IP=1,NUMPAT IF(.NOT.LPTDMN(IP)) GO TO 103 C... allow GENTRA particles to pass through WIND/WIND_PROFILE IF(NAMPAT(IP)(1:4)=='BLIN'.OR.NAMPAT(IP)(1:4)=='GXBL') THEN NBLIN(IDMN)=NBLIN(IDMN)+1 F(L0IPAT(IDMN)+IP)=NBLIN(IDMN) CTEMP=NAMPAT(IP) CALL GETCV(IP,1,GCO,GVAL) IF(QEQ(GCO,-999.0)) CYCLE ! no settings for ground plane C.. inlet velocity vector at reference height (will be overwritten for tabular input) VELX=0.0; VELY=0.0; VELZ=0.0 CALL GETSDR(CTEMP,'VELX',VELX) F(L0VELX(IDMN)+NBLIN(IDMN))=VELX CALL GETSDR(CTEMP,'VELY',VELY) F(L0VELY(IDMN)+NBLIN(IDMN))=VELY CALL GETSDR(CTEMP,'VELZ',VELZ) F(L0VELZ(IDMN)+NBLIN(IDMN))=VELZ C.. compute inlet velocity magnitude at reference height QREF = SQRT ( VELX*VELX + VELY*VELY + VELZ*VELZ ) F(L0QREF(IDMN)+NBLIN(IDMN))=QREF C... external temperature TAIR=20.0; CALL GETSDR(CTEMP,'TAIR',TAIR) F(L0TAIR(IDMN)+NBLIN(IDMN))=TAIR C... pressure coefficient & value PCOEF=1000.0; CALL GETSDR(CTEMP,'PCOEF',PCOEF) F(L0PCOE(IDMN)+NBLIN(IDMN))=PCOEF PEXT=0.0; CALL GETSDR(CTEMP,'PEXT',PEXT) C... External pressure always 0 relative to PRESS0, which is set to atmospheric F(L0PVAL(IDMN)+NBLIN(IDMN))=PEXT C... humidity IF(IHUM>0) THEN HUMIN=0.0; CALL GETSDR(CTEMP,'HUMIN',HUMIN) IF(IHUNIT>0) THEN IF(IHUNIT==1) THEN ! convert from humidity ratio HUMIN=1.E-3*HUMIN ELSEIF(IHUNIT==2) THEN ! convert from relative humidity TIN=TAIR+TEMP0-273. ! convert to deg C CALL GETSDR(CTEMP,'PEXT',PEXT) PIN=PEXT PVAP = 1.E-2*HUMIN*PVH2O(TIN) C... save Relative humidity for printout GWRAT = 18.015/28.96; RELH=HUMIN HUMIN = GWRAT*PVAP/(PIN-PVAP) ENDIF HUMIN=HUMIN/(1+HUMIN) ! convert to mass fraction ENDIF F(L0RELH(IDMN)+NBLIN(IDMN))=HUMIN ENDIF C.. 
vertical coordinate direction VDIR='Z'; CALL GETSDC(CTEMP,'VDIR',VDIR) IF(VDIR=='X') THEN F(L0VDIR(IDMN)+NBLIN(IDMN))=1. ELSEIF(VDIR=='Y') THEN F(L0VDIR(IDMN)+NBLIN(IDMN))=2. ELSEIF(VDIR=='Z') THEN F(L0VDIR(IDMN)+NBLIN(IDMN))=3. ENDIF C.. inlet density RHOIN=1.189; CALL GETSDR(CTEMP,'RHOIN',RHOIN) F(L0DENS(IDMN)+NBLIN(IDMN))=RHOIN C.. profile type = TABLE (3) for tabular, C POWL (2) for power law & C LOGL (1) for log law BLTY='LOGL'; CALL GETSDC(CTEMP,'BLTY',BLTY) F(L0BLTY(IDMN)+NBLIN(IDMN))=1 IF(BLTY=='POWL') F(L0BLTY(IDMN)+NBLIN(IDMN))=2 IF(BLTY=='TABLE') F(L0BLTY(IDMN)+NBLIN(IDMN))=3 C.. effective roughness length ZO=0.1; CALL GETSDR(CTEMP,'ZO',ZO) F(L0ZREF(IDMN)+NBLIN(IDMN))=ZO C.. reference height for wind reference velocity REFH=10.0; CALL GETSDR(CTEMP,'REFH',REFH) F(L0HREF(IDMN)+NBLIN(IDMN))=REFH C IF(BLTY=='POWL') THEN ALPHA=1./7.; CALL GETSDR(CTEMP,'ALPHA',ALPHA) F(L0POWR(IDMN)+NBLIN(IDMN))=ALPHA ENDIF CALL GETPAT(IP,IREG,TYP,JXF,JXL,JYF,JYL,JZF,JZL,JTF,JTL) IPBC=0 C... Get lower limit of open area for each BLIN IF(VDIR=='Z') THEN ! UP direction is Z. Deal with X-Z and Y-Z IF(QEQ(TYP,1.0)) THEN ! CELL, initialisation I1=JXF; I2=JXL; J1=JYF; J2=JYL; K1=1; K2=NZ NDIM=NX*NY ! need NXNY edge locations F(L0SKY(IDMN)+NBLIN(IDMN))=-1 ELSEIF(TYP<4.) THEN ! West/East (TYP= 3 or 2) I1=ITWO(JXF,JXL,TYP>2.) ! JXF if TYP=3, JXL if TYP=2 I2=ITWO(JXF,JXL,TYP>2.) J1=JYF; J2=JYL; K1=JZF; K2=JZL NDIM=NY ! need NY edge locations ELSEIF(TYP<6.) THEN ! South/North (TYP= 5 or 4) I1=JXF; I2=JXL J1=ITWO(JYF,JYL,TYP>4.) ! JYF if TYP=5, JYL if TYP=4 J2=ITWO(JYF,JYL,TYP>4.) K1=JZF; K2=JZL NDIM=NX ! need NX edge locations ELSE ! High/Low (TYP = 6 or 7) for SKY I1=JXF; I2=JXL; J1=JYF; J2=JYL; K1=1; K2=NZ NDIM=NX*NY ! need NXNY edge locations F(L0SKY(IDMN)+NBLIN(IDMN))=1 ENDIF CALL GXMAKE0(L0HI(IDMN,NBLIN(IDMN)),NDIM,'L0HI') L0DIST=L0F(ZWNZ) ISKY=F(L0SKY(IDMN)+NBLIN(IDMN)) C... loop over area of BLIN to find first open cell DO IX=I1,I2 DO IY=J1,J2 DIST=0.0 DO IZZ=K1,K2 IZZM1=(IZZ-1)*NXNY L0FACZ= L0FACE+IZZM1 I=(IX-1)*NY+IY IJKDM=I+IZZM1 IF(ISKY==0) THEN IB=ITWO(IX,IY,TYP>3.) ELSE IB=I ENDIF IF(PARSOL) IPBC = IPBSEQ(IDMN,IJKDM) ! get cut-cell number IF(IPBC>0.AND.ISKY==0) THEN ! dealing with cut cel IT1= ISHPB(IDMN,IPBC) DIST = F(IT1+PBC_CTCEN(3)) ! cut-face centre in Z F(L0HI(IDMN,NBLIN(IDMN))+IB)=DIST GO TO 1001 ! next column ELSEIF(.NOT.SLD(I)) THEN ! open cell IF(IZZ>1.AND.(LWP(I).OR.SLD(I-NXNY))) 1 DIST=F(L0DIST+IZZ-1) F(L0HI(IDMN,NBLIN(IDMN))+IB)=DIST GO TO 1001 ! next column ENDIF ENDDO 1001 CONTINUE ENDDO ENDDO C... if domain is split in Z and UP direction is Z, must propagate C terrain height to all processors at higher (Z) level IF(MIMD.AND.NPROC>1.AND.TYP<6.) THEN ! parallel & E,W,N or S C... Must send terrain height from lowest (in Z) processors to upper ones. IF(NZSD>1) THEN ! domain is split in Z ITAG=4880; IXSD=IDX+1; IYSD=IDY+1; IZSD=IDZ+1 NCELLS=ITWO(I2-I1+1, J2-J1+1,TYP>3.) IF(IZSD0) THEN ! send to above CALL SENDI_FOR(NCELLS,1,ITAG,IDD) CALL SENDR_FOR(BUFF,NCELLS,ITAG,IDD) ENDIF ENDIF DEALLOCATE(BUFF,STAT=IERR) ENDIF ENDIF ELSEIF(VDIR=='Y') THEN ! UP direction is Y. Deal with Z-Y and X-Y IF(QEQ(TYP,1.0)) THEN ! CELL, initialisation I1=JXF; I2=JXL; K1=JZF; K2=JZL; J1=1; J2=NY NDIM=NX*NZ F(L0SKY(IDMN)+NBLIN(IDMN))=-1 ELSEIF(TYP<4.) THEN ! West/East (TYP= 3 or 2) I1=ITWO(JXF,JXL,TYP>2.) I2=ITWO(JXF,JXL,TYP>2.) J1=JYF; J2=JYL; K1=JZF; K2=JZL NDIM=NZ ELSEIF(TYP>5.) THEN ! Low/High (TYP= 7 or 6) I1=JXF; I2=JXL; J1=JYF; J2=JYL K1=ITWO(JZF,JZL,TYP>6.) ! 
JZF if TYP=7, JZL if TYP=6 K2=ITWO(JZF,JZL,TYP>6.) NDIM=NX ELSE ! North/south for SKY I1=JXF; I2=JXL; K1=JZF; K2=JZL; J1=1; J2=NY NDIM=NX*NZ F(L0SKY(IDMN)+NBLIN(IDMN))=1 ENDIF CALL GXMAKE0(L0HI(IDMN,NBLIN(IDMN)),NDIM,'L0HI') L0DIST=L0F(YV2D) ISKY=F(L0SKY(IDMN)+NBLIN(IDMN)) C... loop over area of BLIN to find first open cell DO IX=I1,I2 DO IZZ=K1,K2 IZZM1=(IZZ-1)*NXNY L0FACZ= L0FACE+IZZM1 DIST=0.0 DO IY=J1,J2 I=(IX-1)*NY+IY IJKDM=I+IZZM1 IF(ISKY==0) THEN IB=ITWO(IX,IZZ,TYP>5.) ELSE IB=(IX-1)*NZ+IZZ ENDIF IF(PARSOL) IPBC = IPBSEQ(IDMN,IJKDM) ! get cut-cell number IF(IPBC>0.AND.ISKY==0) THEN ! dealing with cut cel IT1= ISHPB(IDMN,IPBC) DIST = F(IT1+PBC_CTCEN(2)) ! cut-face centre in Y F(L0HI(IDMN,NBLIN(IDMN))+IB)=DIST GO TO 1002 ELSEIF(.NOT.SLD(I)) THEN ! open cell IF(IY>1.AND.(SWP(I).OR.SLD(I-1))) 1 DIST=F(L0DIST+I-1) F(L0HI(IDMN,NBLIN(IDMN))+IB)=DIST GO TO 1002 ENDIF ENDDO 1002 CONTINUE ENDDO ENDDO C... if domain is split in Y and UP direction is Y, must propagate C terrain height to all processors at higher (Y) level IF(MIMD.AND.NPROC>1.AND.(TYP<4..OR.TYP>5.)) THEN ! parallel & E,W,H or L C... Must send terrain height from lowest (in Z) processors to upper ones. IF(NYSD>1) THEN ! domain is split in Z ITAG=4880; IXSD=IDX+1; IYSD=IDY+1; IZSD=IDZ+1 NCELLS=ITWO(I2-I1+1, K2-K1+1,TYP>5.) IF(IYSD0) THEN ! send to above CALL SENDI_FOR(NCELLS,1,ITAG,IDD) CALL SENDR_FOR(BUFF,NCELLS,ITAG,IDD) ENDIF ENDIF DEALLOCATE(BUFF,STAT=IERR) ENDIF ENDIF ELSEIF(VDIR=='X') THEN ! UP direction is X. Deal with Z-X and Y-X IF(QEQ(TYP,1.0)) THEN ! CELL, initialisation K1=JZF; K2=JZL; J1=JYF; J2=JYL; I1=1; I2=NX NDIM=NZ*NY F(L0SKY(IDMN)+NBLIN(IDMN))=-1 ELSEIF(TYP>5.) THEN ! Low/High I1=JXF; I2=JXL; J1=JYF; J2=JYL K1=ITWO(JZF,JZL,TYP>6.) ! JZF if TYP=7, JZL if TYP=6 K2=ITWO(JZF,JZL,TYP>6.) NDIM=NY ELSEIF(TYP>3.) THEN ! South/North I1=JXF; I2=JXL J1=ITWO(JYF,JYL,TYP>4) ! JYF if TYP=5, JYL if TYP=4 J2=ITWO(JYF,JYL,TYP>4) K1=JZF; K2=JZL NDIM=NZ ELSE ! East/West for SKY K1=JZF; K2=JZL; J1=JYF; J2=JYL; I1=1; I2=NX NDIM=NZ*NY F(L0SKY(IDMN)+NBLIN(IDMN))=1 ENDIF CALL GXMAKE0(L0HI(IDMN,NBLIN(IDMN)),NDIM,'L0HI') L0DIST=L0F(XU2D) ISKY=F(L0SKY(IDMN)+NBLIN(IDMN)) C... loop over area of BLIN to find first open cell DO IZZ=K1,K2 IZZM1=(IZZ-1)*NXNY L0FACZ= L0FACE+IZZM1 DO IY=J1,J2 DIST=0.0 DO IX=I1,I2 I=(IX-1)*NY+IY IJKDM=I+IZZM1 IF(ISKY==0) THEN IB=ITWO(IY,IZZ,TYP>5.) ELSE IB=(IY-1)*NZ+IZZ ENDIF IF(PARSOL) IPBC = IPBSEQ(IDMN,IJKDM) ! get cut-cell number IF(IPBC>0.AND.ISKY==0) THEN ! dealing with cut cell IT1= ISHPB(IDMN,IPBC) DIST = F(IT1+PBC_CTCEN(1)) ! cut-face centre in X F(L0HI(IDMN,NBLIN(IDMN))+IB)=DIST GO TO 1003 ELSEIF(.NOT.SLD(I)) THEN ! open cell IF(IX>1.AND.(WWP(I).OR.SLD(I-NY))) 1 DIST=F(L0DIST+I-NY) F(L0HI(IDMN,NBLIN(IDMN))+IB)=DIST GO TO 1003 ENDIF ENDDO 1003 CONTINUE ENDDO ENDDO C... if domain is split in X and UP direction is X, must propagate C terrain height to all processors at higher (X) level IF(MIMD.AND.NPROC>1.AND.TYP>3.) THEN ! parallel & N, S, H or L C... Must send terrain height from lowest (in X) processors to upper ones. IF(NXSD>1) THEN ! domain is split in X ITAG=4880; IXSD=IDX+1; IYSD=IDY+1; IZSD=IDZ+1 NCELLS=ITWO(J2-J1+1, K2-K1+1,TYP>5.) IF(IXSD0) THEN ! send to above CALL SENDI_FOR(NCELLS,1,ITAG,IDD) CALL SENDR_FOR(BUFF,NCELLS,ITAG,IDD) ENDIF ENDIF DEALLOCATE(BUFF,STAT=IERR) ENDIF ENDIF ENDIF ENDIF 103 CONTINUE ENDDO IF(ITEM1>0) THEN ITPRO=0 ! Uniform temperature profile CALL GETSDI('BLIN','ITPRO',ITPRO) C IF(ITPRO.GT.0) THEN GZT0 = 0.0 ! 
ZT0 - surface-temperature reference height CALL GETSDR('BLIN','GZT0',GZT0) GT0 = TAIR ! Temperature at ZT0 CALL GETSDR('BLIN','GT0',GT0) GQWALL = 0.0 ! Ground heat flux CALL GETSDR('BLIN','GQWALL',GQWALL) GALR = 9.81/CP1 ! Dry adiabatic lapse rate MOLEN=2 ! User-specified Monin-Obukhov length CALL GETSDI('BLIN','MOLEN',MOLEN) C IF(ITPRO==1) THEN ! Class A: Very unstable PASQUILL='Pasquill Class A' IF(MOLEN==1) THEN GLS=33.162 ; GZS=1117.0 ! TNO ELSE IF(MOLEN==2) THEN ALMO=-11.4 ; BLMO=0.1 ! PHAST ENDIF ELSEIF(ITPRO==2) THEN ! Class B: Moderately unstable PASQUILL='Pasquill Class B' IF(MOLEN==1) THEN GLS=32.258 ; GZS=11.46 ! TNO ELSE IF(MOLEN==2) THEN ALMO=-26.0 ; BLMO=0.17 ! PHAST ENDIF ELSEIF(ITPRO==3) THEN ! Class C: Slightly unstable PASQUILL='Pasquill Class C' IF(MOLEN==1) THEN GLS=51.787 ; GZS=1.324 ! TNO ELSE IF(MOLEN==2) THEN ALMO=-123.0 ; BLMO=0.3 ! PHAST ENDIF ELSEIF(ITPRO==4) THEN ! Class D: Neutral PASQUILL='Pasquill Class D' ELSEIF(ITPRO==5) THEN ! Class E: Slightly stable PASQUILL='Pasquill Class E' IF(MOLEN==1) THEN GLS=-48.33 ; GZS=1.262 ! TNO ELSE IF(MOLEN==2) THEN ALMO=123.0 ; BLMO=0.3 ! PHAST ENDIF ELSEIF(ITPRO==6) THEN ! Class F: Moderately stable PASQUILL='Pasquill Class F' C.. Monin Obukhov length IF(MOLEN==1) THEN GLS=-31.325 ; GZS=19.36 ! TNO ELSE IF(MOLEN==2) THEN ALMO=26.0 ; BLMO=0.17 ! PHAST ENDIF ENDIF IF(MOLEN==0) THEN GLM0 = 1.0 CALL GETSDR('GLMO','GLMO',GLMO) ! User value ELSE IF(MOLEN==1) THEN ZODZS= AMAX1(ZO,0.5)/GZS GLMO = GLS/LOG10(ZODZS) ! TNO ELSE IF(MOLEN==2) THEN GLMO=ALMO*(ZO)**BLMO ! PHAST ENDIF PASQBUOY=.FALSE.; BUOSSG=.FALSE. DO IP=1,NUMPAT IF(NAMPAT(IP)(1:8)=='BUOYANCY') THEN PASQBUOY=.TRUE. DO JIND=U1,W1,2 CALL GETCV(IP,JIND,GCO,GVAL) IF(QEQ(GVAL,GRND3)) BUOSSG=.TRUE. ENDDO ENDIF ENDDO LBTREF=LBNAME('TREF') IF(LBTREF<=0) CALL GXMAKE0(L0TREF0,NX*NY*NZ,'TREF') IF(.NOT.BUOSSG) THEN LBPIN =LBNAME('PIN') IF(LBPIN<=0) CALL GXMAKE0(L0PIN0,NX*NY*NZ,'PIN') LBRHIN =LBNAME('RHIN') IF(LBRHIN<=0) CALL GXMAKE0(L0RHIN0,NX*NY*NZ,'RHIN') ENDIF CALL WRITST CALL WRIT40(' Pasquill-Class data ') CALL WRIT40(' --------------------') WRITE(14,*) ' ITPRO = ',ITPRO WRITE(14,*) ' ',PASQUILL WRITE(14,*) ' Ground surface temperature = ',GT0, ' C' WRITE(14,*) ' Monin-Obukhov length = ',GLMO,' m' WRITE(14,*) ' Ground heat flux = ',GQWALL, 1 ' W/m^2' WRITE(14,*) ' Dry adiabatic lapse rate = ',GALR*1.E3, 1 ' C/km' IF(BUOSSG) THEN WRITE(14,*) ' Boussinesq buoyancy' ELSE WRITE(14,*) ' Density-difference buoyancy' ENDIF CALL WRITBL ENDIF ENDIF C... Wind amplification factor (1) SHOWCOMF=.FALSE.; CALL GETSDL('FLAIR','SHOWCOMF',SHOWCOMF) C... Wind amplification factor WAF (2) LBWAF=LBNAME('WAF') ! WAF = Vabs/(profile speed at local height) C... Wind Attenuation Coefficient WAT LBWAT=LBNAME('WAT') ! WAT = (Vabs/(profile speed at local height))-1 IF((LBWAF>0.OR.LBWAT>0).AND.VDIR=='Z') THEN CALL GXMAKE0(L0HIWAF(IDMN),NXNY,'L0HIWAF') ! store for height of surface L0DIST=L0F(ZWNZ) TERRNAM=' '; IBLK=0 CALL GETSDC('FLAIR','TERRAIN',TERRNAM) ! get name of parent IF(TERRNAM.NE.' ') THEN ! parent name set, so find patch number LPAR=MIMD.AND.NPROC.GT.1 ILIM=ITWO(GD_NUMPAT,NUMPAT,LPAR) ! for parallel loop over global patches DO III=1,ILIM IF(LPAR) THEN ! get object name for parallel COBNM2=GD_OBJNAM(III) ELSE COBNM2= OBJNAM(III) ENDIF IF(COBNM2.EQ.TERRNAM) THEN IF(LPAR) THEN IBLK=GD_INDPAT(III,1) ELSE IBLK=III ENDIF EXIT ENDIF ENDDO ENDIF DO IX=1,NX DO IY=1,NY I=(IX-1)*NY+IY DIST=0.0 DO IZZ=1,NZ ! scan up column at IX,IY L0FACZ=L0FACE+(IZZ-1)*NX*NY ! increment index for SLD() ! 
IF(IBLK>0) THEN ! have named terrain object L0PAT=L0PATNO(IDMN)+(IZZ-1)*NXNY ! index for patch-number store CALL GET_SURFACE(IX,IY,IZZ,IBLK,L0PAT,CAREA, 1 NORML,IPLUS) IF(CAREA>0.0) THEN IZZM1=(IZZ-1)*NXNY; L0FACZ= L0FACE+IZZM1 IJKDM=I+IZZM1 IF(PARSOL) IPBC = IPBSEQ(IDMN,IJKDM) ! get cut-cell number IF(IPBC>0) THEN ! dealing with cut cel IT1= ISHPB(IDMN,IPBC) DIST = F(IT1+PBC_CTCEN(3)) ! cut-face centre in Z GO TO 1004 ! next column ELSEIF(.NOT.SLD(I)) THEN ! open cell IF(IZZ>1.AND.(LWP(I).OR.SLD(I-NXNY))) THEN DIST=F(L0DIST+IZZ-1) ENDIF GO TO 1004 ! next column ENDIF ENDIF ! ELSEIF(.NOT.SLD(I)) THEN ! open cell ! IF(IZZ>1.AND.(LWP(I).OR.SLD(I-NXNY))) THEN ! DIST=F(L0DIST+IZZ-1) ! ENDIF ! GO TO 1004 ! next column ! ENDIF ENDDO ! end IZ loop 1004 CONTINUE F(L0HIWAF(IDMN)+I)=DIST ! store surface height ENDDO ! end IY loop ENDDO ! end IX loop IF(MIMD.AND.NPROC>1) THEN ! parallel C... Must send terrain height from lowest (in Z) processors to upper ones. IF(NZSD>1) THEN ! domain is split in Z ITAG=4880; IXSD=IDX+1; IYSD=IDY+1; IZSD=IDZ+1 NCELLS=NXNY IF(IZSD0) THEN ! send to above CALL SENDI_FOR(NCELLS,1,ITAG,IDD) CALL SENDR_FOR(BUFF,NCELLS,ITAG,IDD) ENDIF ENDIF DEALLOCATE(BUFF,STAT=IERR) ENDIF ! end of split-in-Z block ENDIF ! end of parallel block ENDIF ! end of WAF block C... loop back for next FGV domain IF(IDMN1) THEN IDMN=1 CALL INDXDM ENDIF C... Wind comfort IF(SHOWCOMF) THEN LBPRO=LBNAME('PRO') TYPECOMF='NEN8100'; CALL GETSDC('FLAIR','TYPECOMF',TYPECOMF) IF(TYPECOMF=='NEN8100') THEN LBDAN=LBNAME('PDAN'); LBNEN=LBNAME('NEN') IF(LBDAN+LBNEN<=0) CALL SET_ERR(584, 1 'Error. STORE NEN and/or PDAN missing for NEN8100',1) UTHR=5; DTHR=15; CALL GETSDR('FLAIR','DTHR',DTHR) ELSEIF(TYPECOMF=='LAWSON') THEN NCRIT=5; CALL GETSDI('FLAIR','NCRIT',NCRIT) NCRIT=MIN(NCRIT,7) ALLOCATE(LTHRESH(NCRIT),LUTHR(NCRIT),LPRO(NCRIT,NX*NY), 1 STAT=IERR) LTHRESH=0.0; LUTHR=0.0; LPRO=0.0 ! initialise DO IC=1,NCRIT LTHRESH(IC)=THRESHD(IC)*100.; LUTHR(IC)=UTHRD(IC) ! take default from data statement WRITE(LINE,'(''THRSH'',I1)') IC CALL GETSDR('FLAIR',LINE(1:6),LTHRESH(IC)) LTHRESH(IC)=LTHRESH(IC)/100. ! convert back from % WRITE(LINE,'(''UTHR'',I1)') IC CALL GETSDR('FLAIR',LINE(1:5),LUTHR(IC)) WRITE(LINE,'(''PRO'',I1)') IC LBPRB(IC)=LBNAME(LINE(1:4)) ENDDO LBLAWS=LBNAME('LAWS') IF(LBLAWS<=0) CALL SET_ERR(584, 1 'Error. STORE LAWS missing for Lawson',1) ELSE UTHR=6; CALL GETSDR('FLAIR','UTHR',UTHR) ENDIF WEICOEF=.FALSE.; CALL GETSDL('FLAIR','WEICOEF',WEICOEF) LU=111 WINDFILE='NOTSET' CALL GETSDCLONG('FLAIR','WINDFILE',WINDFILE) ! name of file LL=LENGZZ(WINDFILE); CALL LOWCSE(WINDFILE,-LL) IF(MIMD.AND.NPROC>1) THEN ! must copy remote file to local CALL SEND_FILE(LU,WINDFILE,IERR) IF(IERR.NE.0) GOTO 110 ENDIF OPEN(LU,FILE=WINDFILE,STATUS='OLD',ERR=110) C DHEIGHT=-999.0 ! initialise to silly value LBVAV=LBNAME('VAV') IF(WEICOEF) THEN ! read Weibull parameters LGWC=.FALSE.;CALL GETSDL('FLAIR','WEIB-GWC',LGWC) NSECT=12; REWIND LU READ(LU,'(A)') LINE; SITENAME=LINE ! site name CALL GETSDR('FLAIR','WEIB-H',DHEIGHT) CALL GETSDR('FLAIR','WEIB-AW',AW) CALL GETSDR('FLAIR','WEIB-AKW',AKW) CALL GETSDR('FLAIR','WEIB-SECTP',SECTP) ELSE ! Read probability bins NBINS=0; NSECT=12; REWIND LU 104 CONTINUE ! read file to count number of bins IF(LL==0) GO TO 105 ! catch empty lines NBINS=NBINS+1; GO TO 104 105 CONTINUE ! 1st three lines are headers NBINS=NBINS-3 REWIND(LU) ! now read file properly READ(LU,'(A)') LINE; SITENAME=LINE ! site name READ(LU,'(A)') LINE; LL=LENGZZ(LINE) ! 
mast location & height CALL SPLTZZZ(LINE,WD,NWDS,NCHARS,LL,DLIM,20) DHEIGHT=RRDZZZ(3) READ(LU,'(A)') LINE; LL=LENGZZ(LINE) ! no of sectors CALL SPLTZZZ(LINE,WD,NWDS,NCHARS,LL,DLIM,20) NSECT=IRDZZZ(1) ALLOCATE(WNDA(NBINS+1,NSECT+1),STAT=IERR) WNDA=0.0 ! initialise bins CALL SPLTZZZ(LINE,WD,NWDS,NCHARS,LL,DLIM,20) DO ISEC=1,NSECT WNDA(1,ISEC+1)=RRDZZZ(ISEC) ENDDO PSUM=TINY DO ISEC=2,NSECT+1 PSUM=PSUM+WNDA(1,ISEC) ENDDO DO ISEC=2,NSECT+1 WNDA(1,ISEC)=WNDA(1,ISEC)/(PSUM+TINY) ENDDO DO IB=2,NBINS ! read the probabilities for each bin LL=LENGZZ(LINE) CALL SPLTZZZ(LINE,WD,NWDS,NCHARS,LL,DLIM,20) DO ISEC=1,NSECT+1 WNDA(IB,ISEC)=RRDZZZ(ISEC) ENDDO ENDDO DO ISEC=2,NSECT+1 ! normalise probabilities for each direction PSUM=TINY DO IB=2,NBINS ! sum probabilities over all bins PSUM=PSUM+WNDA(IB,ISEC) ENDDO DO IB=2,NBINS ! divide through by sum WNDA(IB,ISEC)=WNDA(IB,ISEC)/PSUM ENDDO ENDDO ENDIF IF(DHEIGHT<=0.0) THEN CALL WRITST CALL WRIT40('ERROR! Mast (measurement) height not found') CALL WRITST CALL SET_ERR(583, 1 'Error. Mast (measurement) height not found',1) ENDIF SECSIZE=360./NSECT ! sector size IF(WDIRN>=360.0) THEN WDIRN=WDIRN-360.0 ELSEIF(WDIRN<0.0) THEN WDIRN=360.0-WDIRN ENDIF C... Wind speed sectors are centred on direction, so for 12 sectors of 30deg each, C... sector 1 will be from -15 to +15 C 2 15 to 45 etc. ISECT=MIN(NSECT,INT((WDIRN+SECSIZE/2.)/SECSIZE)+1) ! current sector IF(LBVAV>0) THEN ! sector average velocity stored IF(WEICOEF) THEN ! Weibull coefficients VSECT=AW*GAMMA(1.0+1.0/AKW) ! Mean value of Weibull distribution !!! VSECT=AW*LOG(2.0)**(1.0/AKW) ! Median value of Weibull distribution ELSE ! Probability table SECTP=WNDA(1,ISECT+1); VSECT=0.0 DO IB=2,NBINS+1 VI=0.5*(WNDA(IB,1)-WNDA(IB-1,1))+WNDA(IB-1,1) PROB=WNDA(IB,ISECT+1) VSECT=VSECT+VI*PROB ENDDO ENDIF QREF=VSECT; REFH=DHEIGHT ANG=WDIRN-AXDIR IF(ANG<0.0) THEN ANG=360.0+ANG C... if wind angle > 360, subtract 360 ELSEIF(ANG>360) THEN ANG=ANG-360.0 ENDIF ANGR=ANG*ATAN(1.)/45. ! make radians DO II=1,NBLIN(IDMN) F(L0QREF(IDMN)+II)=QREF F(L0HREF(IDMN)+II)=REFH C... Update values for BLIN patches for use as inlet values in Group 13 IUP=F(L0VDIR(IDMN)+II) IF(IUP==3) THEN ! Up Z VELX=-VSECT*SIN(ANGR) VELY=-VSECT*COS(ANGR) VELZ=0.0 ELSEIF(IUP==2) THEN ! Up Y VELX=-VSECT*COS(ANGR) VELY=0.0 VELZ=-VSECT*SIN(ANGR) ELSE ! Up X VELX=0.0 VELY=-VSECT*SIN(ANGR) VELZ=-VSECT*COS(ANGR) ENDIF C... Set inlet velocity components F(L0VELX(IDMN)+II)=VELX F(L0VELY(IDMN)+II)=VELY F(L0VELZ(IDMN)+II)=VELZ ENDDO ENDIF ENDIF C... Inlet profile tables IF(BLTY=='TABLE') THEN CLOSE(LU,IOSTAT=IOS); LU=111; VELFILE='NOTSET' CALL GETSDCLONG('BLIN','U-TABLE',VELFILE) ! name of velocity file LL=LENGZZ(VELFILE); CALL LOWCSE(VELFILE,-LL) IF(MIMD.AND.NPROC>1) THEN ! must copy remote file to local CALL SEND_FILE(LU,VELFILE,IERR) IF(IERR.NE.0) THEN WINDFILE=VELFILE; GOTO 110 ENDIF ENDIF OPEN(LU,FILE=VELFILE,STATUS='OLD',IOSTAT=IOS) IF(IOS/=0) THEN WINDFILE=VELFILE; GO TO 110 ENDIF NLINES=0; REWIND LU 1041 CONTINUE ! read file to count number of lines IF(LL==0) GO TO 1051 ! catch empty lines NLINES=NLINES+1; GO TO 1041 1051 CONTINUE ! 1st lines is header NLINEV=NLINES-1 ALLOCATE(VEL_TAB(NLINEV,4),STAT=IERR) REWIND LU ANG=WDIRN-AXDIR IF(ANG<0.0) THEN ANG=360.0+ANG C... if wind angle > 360, subtract 360 ELSEIF(ANG>360) THEN ANG=ANG-360.0 ENDIF ANGR=ANG*ATAN(1.)/45. ! make radians DO IL=1,NLINEV READ(LU,'(A)') LINE; LL=LENGZZ(LINE) ! read a line CALL SPLTZZZ(LINE,WD,NWDS,NCHARS,LL,DLIM,20) ! 
split into words IF(NWDS/=2) THEN WRITE(LUPR1,'('' ERROR reading '',A)') VELFILE CALL WRIT40 1 (' Velocity profile file must have 2 values per line.') CALL WRIT40('Item 1 - Height, Item 2 - velocity') CALL SET_ERR(583, 1 'Error. velocity profile must have 2 values per line',1) ENDIF VEL_TAB(IL,1)=RRDZZZ(1); VEL=RRDZZZ(2) ! get height and velocity IF(VDIR=='Z') THEN ! Up Z, get velocity components VEL_TAB(IL,2)=-VEL*SIN(ANGR) ! U VEL_TAB(IL,3)=-VEL*COS(ANGR) ! V VEL_TAB(IL,4)= 0.0 ! W ELSEIF(VDIR=='Y') THEN ! Up Y, get velocity components VEL_TAB(IL,2)=-VEL*COS(ANGR) ! U VEL_TAB(IL,3)= 0.0 ! V VEL_TAB(IL,4)=-VEL*SIN(ANGR) ! W ELSE ! Up X, get velocity components VEL_TAB(IL,2)= 0.0 ! U VEL_TAB(IL,3)=-VEL*SIN(ANGR) ! V VEL_TAB(IL,4)=-VEL*COS(ANGR) ! W ENDIF ENDDO QREF=SQRT(VEL_TAB(NLINEV,2)**2+VEL_TAB(NLINEV,3)**2+ 1 VEL_TAB(NLINEV,4)**2) DO II=1,NBLIN(IDMN) F(L0VELX(IDMN)+II)=VEL_TAB(NLINEV,2) ! fill these to get flow F(L0VELY(IDMN)+II)=VEL_TAB(NLINEV,3) ! direction at boundaries F(L0VELZ(IDMN)+II)=VEL_TAB(NLINEV,4) F(L0QREF(IDMN)+II)=QREF ENDDO CLOSE(LU,IOSTAT=IOS); LU=111; KEFILE='NOTSET' CALL GETSDCLONG('BLIN','K-TABLE',KEFILE) ! name of KE file LL=LENGZZ(KEFILE); CALL LOWCSE(KEFILE,-LL) IF(MIMD.AND.NPROC>1) THEN ! must copy remote file to local CALL SEND_FILE(LU,KEFILE,IERR) IF(IERR.NE.0) THEN WINDFILE=KEFILE; GOTO 110 ENDIF ENDIF OPEN(LU,FILE=KEFILE,STATUS='OLD',IOSTAT=IOS) IF(IOS/=0) THEN WINDFILE=KEFILE; GO TO 110 ENDIF NLINES=0; REWIND LU 1042 CONTINUE ! read file to count number of lines IF(LL==0) GO TO 1052 ! catch empty lines NLINES=NLINES+1; GO TO 1042 1052 CONTINUE ! 1st lines is header NLINEK=NLINES-1 ALLOCATE(KE_TAB(NLINEK,2),STAT=IERR) REWIND LU DO IL=1,NLINEK CALL SPLTZZZ(LINE,WD,NWDS,NCHARS,LL,DLIM,20) ! split into words IF(NWDS/=2) THEN WRITE(LUPR1,'('' ERROR reading '',A)') KEFILE CALL WRIT40 1 (' K profile file must have 2 values per line.') CALL WRIT40('Item 1 - Height, Item 2 - KE') CALL SET_ERR(583, 1 'Error. K profile must have 2 values per line',1) ENDIF KE_TAB(IL,1)=RRDZZZ(1); KE_TAB(IL,2)=RRDZZZ(2) ! save height & KE ENDDO CLOSE(LU,IOSTAT=IOS) ENDIF C CALL WRITST CALL WRIT40(' Wind profile data') CALL WRIT40(' -----------------') WRITE(LUPR1,'('' Up direction '',A1)') VDIR CH1=CHGR_13(REFH); LT=LENGZZ(CH1) IF(BLTY/='TABLE') 1 WRITE(LUPR1,'('' Reference height: '',A,'' m'')') 1 CH1(1:LT) CH1=CHGR_13(ZO); LT=LENGZZ(CH1) WRITE(LUPR1,'('' Roughness height: '',A,'' m'')') 1 CH1(1:LT) IF(NEZ(WALLB)) THEN CH1=CHGR_13(WALLB); LT=LENGZZ(CH1) WRITE(LUPR1,'('' Displacement height: '',A,'' m'')') 1 CH1(1:LT) ENDIF IF(BLTY=='POWL') THEN WRITE(LUPR1,'('' Profile type - Power Law'')') CH1=CHGR_13(ALPHA) WRITE(LUPR1,'('' Power law constant: '',A)') CH1 ELSEIF(BLTY=='LOGL') THEN WRITE(LUPR1,'('' Profile type - Logarithmic Law'')') ELSE WRITE(LUPR1,'('' Profile type - from tables'')') LL=LENGZZ(VELFILE) WRITE(LUPR1,'('' Velocity file used: '',A)') VELFILE(1:LL) LL=LENGZZ(KEFILE) WRITE(LUPR1,'('' KE file used: '',A)') KEFILE(1:LL) ENDIF CH1=CHGR_13(PCOEF) WRITE(LUPR1,'('' Pressure coeff at outflow boundaries: '', 1 A)') CH1 IF(EPWNAM/=' ') THEN ! 
Weather file specified LL=LENGZZ(EPWNAM) WRITE(LUPR1,'('' Using weather data file '',A)') 1 EPWNAM(1:LL) IF(NL==0) THEN CH1=CHGR_13(WDIRN); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' Wind direction: '',A,A)') CH1(1:LT),CHAR(176) CH1=CHGR_13(QREF); LT=LENGZZ(CH1) IF(BLTY/='TABLE') WRITE(LUPR1, 1 '('' Wind speed: '',A,'' m/s'')') CH1(1:LT) IF(ITEM1>0) THEN CH1=CHGR_13(TAIR); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' Air temperature: '',A,A1,''C'')') CH1(1:LT), 1 CHAR(176) ENDIF CH1=CHGR_13(PEXT); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' External pressure: '',A,'' Pa'')') CH1(1:LT) IF(IHUM>0) THEN CH1=CHGR_13(RELH); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' Relative humidity: '',A,'' %'')') CH1(1:LT) ENDIF CH1=CHGR_13(RHOIN); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' Air density: '',A,'' kg/m^3'')') CH1(1:LT) ELSE WRITE(LUPR1, 1 '('' Time Direction Speed Temperature'', 1 '' Pressure Humidity'')') DO I=1,NL WRITE(LUPR1, 1 '(I4,'' '',1PE13.6,'' '',1PE13.6,'' '',1PE13.6,'' '',1PE13.6, 1 '' '',1PE13.6)') I-1, WDIR(I),WSPD(I),AIRTEMP(I),ATMPRES(I), 1 RELHUM(I) ENDDO ENDIF ELSEIF(STEADY) THEN ! manual wind settings IF(QNE(WDIRN,-999.0)) THEN CH1=CHGR_13(WDIRN); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' Wind direction: '',A,A)') CH1(1:LT),CHAR(176) ENDIF CH1=CHGR_13(QREF); LT=LENGZZ(CH1) IF(BLTY/='TABLE') THEN WRITE(LUPR1, 1 '('' Wind speed: '',A,'' m/s'')') CH1(1:LT) ELSE CH2=CHGR_13(VEL_TAB(NLINEV,1)); L2=LENGZZ(CH2) WRITE(LUPR1, 1 '('' Wind speed: '',A, 1 '' m/s (at top of table '',A,'' m)'')') CH1(1:LT),CH2(1:L2) ENDIF IF(ITEM1>0) THEN CH1=CHGR_13(TAIR); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' Air temperature: '',A,A1,''C'')') CH1(1:LT), 1 CHAR(176) ENDIF CH1=CHGR_13(PEXT); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' External pressure: '',A,'' Pa'')') CH1(1:LT) IF(IHUM>0) THEN IF(IHUNIT==0) THEN CH1=CHGR_13(HUMIN); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' H2O mass fraction: '',A,'' '')') CH1(1:LT) ELSEIF(IHUNIT==1) THEN CH1=CHGR_13(HUMIN*1000.); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' Humidity ratio: '',A,'' g/kg'')') CH1(1:LT) ELSE CH1=CHGR_13(RELH); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' Relative humidity: '',A,'' %'')') CH1(1:LT) ENDIF ENDIF CH1=CHGR_13(RHOIN); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' Air density: '',A,'' kg/m^3'')') CH1(1:LT) ENDIF C... Pressure coefficient LBCP=LBNAME('CP') IF(SHOWCOMF) THEN CALL WRITBL WRITE(14,'('' Wind comfort input data'')') WRITE(14,'('' -----------------------'')') LL=LENGZZ(SITENAME) WRITE(14,'('' Site name: '',A)') SITENAME(1:LL) WRITE(14,'('' Using data for sector: '',I2)') ISECT CH1=CHGR_13(DHEIGHT); L1=LENGZZ(CH1) WRITE(14,'('' Mast height:'',A)') CH1(1:L1) IF(LBVAV>0) THEN WRITE(14,'('' ## Multi-sector run ##'')') CH1=CHGR_13(SECTP); L1=LENGZZ(CH1) CH2=CHGR_13(VSECT); L2=LENGZZ(CH2) WRITE(LUPR1,'('' Probability of wind in sector'',I3, 1 '' is: '',A)') ISECT,CH1(1:L1) WRITE(LUPR1, 1 '('' Measured average wind speed at mast height is: '',A, 1 '' m/s'')') CH2(1:L2) WRITE(14,'('' ## Wind speed was set to '', 1 ''average velocity at mast height: '',A,'' m/s'')') CH2(1:L2) CH1=CHGR_13(DHEIGHT); L1=LENGZZ(CH1) WRITE(14,'('' ## Reference height was set to '' 1 ''mast height: '',A,'' m'')') CH1(1:L1) ENDIF IF(BLTY=='POWL') THEN ! power-law profile WMAST=QREF*(DHEIGHT/REFH)**ALPHA ELSEIF(BLTY=='LOGL') THEN ! log profile IF(EQZ(WALLB)) THEN GHDZO=AMAX1(DHEIGHT/ZO,2.0) ELSE IF(LTZ(WALLB)) THEN GHDZO=(DHEIGHT-WALLB)/ZO ELSE GHDZO=AMAX1((DHEIGHT-WALLB)/ZO,2.0) ENDIF ENDIF WMAST=QREF*LOG(GHDZO)/LOG(REFH/ZO) ELSE ! tabular profile FACT=-999. ! initialize GH=DHEIGHT ! height from probability table file DO IL=1,NLINEV-1 ! 
loop over table IF(GH>=VEL_TAB(IL,1).AND.GH<=VEL_TAB(IL+1,1)) THEN FACT=(GH -VEL_TAB(IL,1))/ 1 (VEL_TAB(IL+1,1)-VEL_TAB(IL,1)) ILP1=IL+1; EXIT ENDIF ENDDO IF(QEQ(FACT,-999.)) THEN ! no value was found IF(GH>=VEL_TAB(NLINEV,1)) THEN ! above top of table IL=NLINEV ! use top value ELSE ! below bottom IL=1 ! use bottom value ENDIF FACT=0.0; ILP1=IL ENDIF UCOMP=VEL_TAB(IL,2)+FACT*(VEL_TAB(ILP1,2)-VEL_TAB(IL,2)) VCOMP=VEL_TAB(IL,3)+FACT*(VEL_TAB(ILP1,3)-VEL_TAB(IL,3)) WCOMP=VEL_TAB(IL,4)+FACT*(VEL_TAB(ILP1,4)-VEL_TAB(IL,4)) WMAST=SQRT(UCOMP*UCOMP+VCOMP*VCOMP+WCOMP*WCOMP) ENDIF CH2=CHGR_13(WMAST); L2=LENGZZ(CH2) WRITE(LUPR1, 1 '('' Wind-profile speed at mast height: '',A,'' m/s'')') 1 CH2(1:L2) IF(TYPECOMF=='NEN8100') THEN LINE='Dutch standard NEN8100' ELSEIF(TYPECOMF=='LAWSON') THEN LINE='Lawson Comfort Criteria ' ELSE CH1=CHGR_13(UTHR); LL=LENGZZ(CH1) LINE='probability of exceeding '//CH1(1:LL)//' m/s' ENDIF LL=LENGZZ(LINE) WRITE(LUPR1,'('' Comfort type: '',A)') LINE(1:LL) IF(TYPECOMF=='LAWSON') THEN WRITE(LUPR1, 1 '('' Level Probability (%) Max Speed (m/s)'')') DO IC=1,NCRIT CH1=CHGR_13(LTHRESH(IC)*100.) CH2=CHGR_13(LUTHR(IC)) WRITE(LUPR1,'(5X,I1,7X,A,1X,A)') IC,CH1(1:13),CH2(1:13) ENDDO ENDIF ENDIF ! end of SHOWCOMF section C... Wind amplification factor (1) LBWAMP=LBNAME('WAMP'); !LBVABS=LBNAME('VABS') IF(LBWAMP>0) THEN ! WAMP is stored WAMPH=1.5; CALL GETSDR('BLIN','WAMPH',WAMPH) ! get ref height IF(BLTY=='POWL') THEN ! power-law profile WAMPVR=QREF*(WAMPH/REFH)**ALPHA ELSEIF(BLTY=='LOGL') THEN ! log profile IF(EQZ(WALLB)) THEN GHDZO=AMAX1(WAMPH/ZO,2.0) ELSE IF(LTZ(WALLB)) THEN GHDZO=(WAMPH-WALLB)/ZO ELSE GHDZO=AMAX1((WAMPH-WALLB)/ZO,2.0) ENDIF ENDIF WAMPVR=QREF*LOG(GHDZO)/LOG(REFH/ZO) ELSE ! tabular profile FACT=-999. ! initialize GH=WAMPH DO IL=1,NLINEV-1 ! loop over table IF(GH>=VEL_TAB(IL,1).AND.GH<=VEL_TAB(IL+1,1)) THEN FACT=(GH -VEL_TAB(IL,1))/ 1 (VEL_TAB(IL+1,1)-VEL_TAB(IL,1)) ILP1=IL+1; EXIT ENDIF ENDDO IF(QEQ(FACT,-999.)) THEN ! no value was found IF(GH>=VEL_TAB(NLINEV,1)) THEN ! above top of table IL=NLINEV ! use top value ELSE ! below bottom IL=1 ! use bottom value ENDIF FACT=0.0; ILP1=IL ENDIF UCOMP=VEL_TAB(IL,2)+FACT*(VEL_TAB(ILP1,2)-VEL_TAB(IL,2)) VCOMP=VEL_TAB(IL,3)+FACT*(VEL_TAB(ILP1,3)-VEL_TAB(IL,3)) WCOMP=VEL_TAB(IL,4)+FACT*(VEL_TAB(ILP1,4)-VEL_TAB(IL,4)) WAMPVR=SQRT(UCOMP*UCOMP+VCOMP*VCOMP+WCOMP*WCOMP) ENDIF CH1=CHGR_13(WAMPH); CH2=CHGR_13(WAMPVR) WRITE(LUPR1, 1 '('' Wind speed at '',A,''m: '',A,'' m/s (used for WAMP)'')') 1 CH1(1:LENGZZ(CH1)),CH2(1:LENGZZ(CH2)) ENDIF CALL WRITST RETURN 110 CONTINUE LL=LENGZZ(WINDFILE) CALL WRIT40('Cannot open wind data file '//WINDFILE(1:LL)) CALL SET_ERR(583,'Error. Cannot open wind data file',1) C C... Group 1 Section 3 C ================= ELSEIF(ISC==3) THEN ENDIF ENDIF C***************************************************************** C--- GROUP 13. Boundary conditions and special sources C Index for Coefficient - CO C Index for Value - VAL IF(IGR==13.AND.ISC==8) THEN C------------------- SECTION 8 ------------------- coef = GRND7 C... Set diffusive coefficient at SKY C CO = density*turbulent viscosity/distance !!!C = density*(Q*)*K*z/dz C C The followed yet to be coded. C IF(ITPRO > 0 .AND. ITPRO /= 4) THEN C GPHIM = 1.0-PSIFT(GZMDH,GLMO) C ENUT=ENUT/GPHIM C ENDIF C IBLIN=NINT(F(L0IPAT(IDMN)+IPAT)) !!!C.. inlet velocity magnitude at reference height !!! QREF=F(L0QREF(IDMN)+IBLIN) !!!C.. reference height for wind reference velocity !!! REFH=F(L0HREF(IDMN)+IBLIN) !!!C.. effective roughness length !!! ZO=F(L0ZREF(IDMN)+IBLIN) C.. 
vertical coordinate direction IVDIR=NINT(F(L0VDIR(IDMN)+IBLIN)) C... store vertical height of grid nodes in L0H IF(IVDIR==1) THEN ! Up X !!! CALL FN0(-L0H,XU2D) GDH=0.5*F(L0F(DXU2D)+NX) ELSEIF(IVDIR==2) THEN ! Up Y !!! CALL FN0(-L0H,YV2D) GDH=0.5*F(L0F(DYV2D)+NY) ELSE ! Up Z !!! GH=F(L0F(ZWNZ)+IZ) GDH=0.5*F(L0F(DZWNZ)+NZ) !!! CALL FN1(-L0H,GH) ENDIF L0CO=L0F(CO) L0DEN=L0F(DEN1); L0VIST=L0F(VIST) !!!C#### JCL/MRM 19.08.14 allow for 'd' constant in log profile !!! QTAU = AK*QREF/(LOG((REFH-WALLB)/ZO)) ! friction velocity IF((INDVAR.EQ.KE.OR.INDVAR.EQ.LBOMEG).AND. 1 (IENUTA.GE.17.AND.IENUTA.LE.20)) THEN L0VIST=L0F(VIST); L0BF1 =L0F(LBBF1) IF(INDVAR.EQ.KE) THEN GSIG1=SIGK1 ; GSIG2=SIGK2 ELSE ! w GSIG1=SIGW1 ; GSIG2=SIGW2 ENDIF DO IX=IXF,IXL DO IY=IYF,IYL GBF1 = F(L0BF1+I) GPRTRB = (GBF1*GSIG1 + (1.-GBF1)*GSIG2) F(L0CO+I)=F(L0DEN+I)*F(L0VIST+I)/(GPRTRB*GDH) ENDDO ENDDO ELSE DO IX=IXF,IXL DO IY=IYF,IYL F(L0CO+I)=F(L0DEN+I)*F(L0VIST+I)/(PRT(INDVAR)*GDH) ENDDO ENDDO ENDIF ELSEIF(IGR==13.AND.ISC==9) THEN C------------------- SECTION 8 ------------------- coef = GRND8 C... Set COefficient for mass-flow boundaries. C Set to FIXFLU if inflow, set to PCOEF if outflow. IF(INDVAR==P1) THEN IBLIN=NINT(F(L0IPAT(IDMN)+IPAT)) VELX=F(L0VELX(IDMN)+IBLIN) VELY=F(L0VELY(IDMN)+IBLIN) VELZ=F(L0VELZ(IDMN)+IBLIN) C check cell face to check inflow/outflow C e=2, w=3, n=4, s=5, h=6, l=7 IF(INTTYP==2.OR.INTTYP==3) THEN ! E or W VELIN=VELX ELSEIF(INTTYP==4.OR.INTTYP==5) THEN ! N or S VELIN=VELY ELSEIF(INTTYP==6.OR.INTTYP==7) THEN ! H or L VELIN=VELZ ENDIF IF(ABS(VELIN)<=1.0E-6) VELIN=0.0 IF(INTTYP==2.OR.INTTYP==4.OR.INTTYP==6) THEN IFLO=ITWO(0,1,VELIN<0.0) ELSE IFLO=ITWO(0,1,VELIN>0.0) ENDIF IF(IFLO==0) THEN ! inflow at this face CALL FN1(CO,FIXFLU) ELSE ! outflow, treat as fixed pressure CALL FN1(CO,F(L0PCOE(IDMN)+IBLIN)) ENDIF ENDIF ELSEIF(IGR==13.AND.ISC==19) THEN C------------------- SECTION 19 ------------------- value = GRND7 C power-law form: Uz=Uh*(z/h)**a C log-law form: Uz=Uh*log(z/zo)/log(h/zo) c C.. inlet velocity vector at reference height IBLIN=NINT(F(L0IPAT(IDMN)+IPAT)) VELX=F(L0VELX(IDMN)+IBLIN) VELY=F(L0VELY(IDMN)+IBLIN) VELZ=F(L0VELZ(IDMN)+IBLIN) C.. inlet velocity magnitude at reference height QREF=F(L0QREF(IDMN)+IBLIN) C.. inlet temperature TAIR=F(L0TAIR(IDMN)+IBLIN) C.. inlet humidity HUMIN=F(L0RELH(IDMN)+IBLIN) C.. vertical coordinate direction IVDIR=NINT(F(L0VDIR(IDMN)+IBLIN)) C.. inlet density RHOIN=F(L0DENS(IDMN)+IBLIN) C.. profile type = TABLE (3) for table, POWL (2) for power law & = LOGL (1) for log law IBLTY=NINT(F(L0BLTY(IDMN)+IBLIN)) C.. effective roughness length ZO=F(L0ZREF(IDMN)+IBLIN) C.. reference height for wind reference velocity REFH=F(L0HREF(IDMN)+IBLIN) C IF(IBLTY==2) ALPHA=F(L0POWR(IDMN)+IBLIN) C ISKY=NINT(F(L0SKY(IDMN)+IBLIN)) C... store vertical height of grid nodes in L0H IF(IVDIR==1) THEN ! up X IF(ISKY==1) THEN CALL FN0(-L0H,XU2D) ELSE CALL FN0(-L0H,XG2D) ENDIF ELSEIF(IVDIR==2) THEN ! up Y IF(ISKY==1) THEN CALL FN0(-L0H,YV2D) ELSE CALL FN0(-L0H,YG2D) ENDIF ELSEIF(IVDIR==3) THEN ! up Z IF(ISKY==1) THEN GH=F(L0F(ZWNZ)+IZ) ELSE GH=F(L0F(ZGNZ)+IZ) ENDIF CALL FN1(-L0H,GH) ENDIF C C... Inlet mass flux or pressure boundary C ==================================== IF(INDVAR==P1) THEN C check cell face to calculate flux C e=2, w=3, n=4, s=5, h=6, l=7 IF(INTTYP==2.OR.INTTYP==3) THEN ! E or W VELIN=VELX ELSEIF(INTTYP==4.OR.INTTYP==5) THEN ! N or S VELIN=VELY ELSEIF(INTTYP==6.OR.INTTYP==7) THEN ! 
H or L VELIN=VELZ ENDIF IF(ABS(VELIN)<=1.0E-6) VELIN=0.0 IF(INTTYP==2.OR.INTTYP==4.OR.INTTYP==6) THEN IFLO=ITWO(0,1,VELIN<0.0) ELSE IFLO=ITWO(0,1,VELIN>0.0) ENDIF IF(IFLO==1) THEN ! fixed pressure boundary CALL FN1(VAL,0.0) RETURN ENDIF CALL GETCV(IPAT,INDVAR,GCO,GVAL) IF(QEQ(GCO,GRND8)) THEN AMULT=1./FIXFLU ELSE AMULT=1. ENDIF C... Set sign of mass-flux IF(INTTYP==2.OR.INTTYP==4.OR.INTTYP==6) THEN ! E, N or H IF(VELIN>0) THEN ! +ve vel points out, so mass is -ve VELIN=-VELIN ELSE ! -ve vel points in, so mass is +ve VELIN=ABS(VELIN) ENDIF ENDIF C... Inlet mass flux L0VAL=L0F(VAL); IPBC=0 IF(IBLTY==3) THEN ! Table profile IF(INTTYP==2.OR.INTTYP==3) THEN ! E or W IVELIN=2 ! U component ELSEIF(INTTYP==4.OR.INTTYP==5) THEN ! N or S IVELIN=3 ! V component ELSEIF(INTTYP==6.OR.INTTYP==7) THEN ! H or L IVELIN=4 ! W component ENDIF DO IX=IXF,IXL DO IY=IYF,IYL IF(.NOT.SLD(I)) THEN IJKDM=I+(IZ-1)*NX*NY IF(PARSOL) IPBC = IPBSEQ(IDMN,IJKDM) ! get cut-cell number IF(IPBC>0.AND.ISKY==0) THEN ! dealing with cut cell IT1= ISHPB(IDMN,IPBC) GH=F(IT1+PBC_SNYDIST) ! dist from sunny sub-cell cen. ! to cut-plane ELSE IF(IVDIR==1) THEN ! up X IF(ISKY==0) THEN IB=ITWO(IZ,IY,ITYPE==4.OR.ITYPE==5) ELSE IB=(IY-1)*NZ+IZ ENDIF ELSEIF(IVDIR==2) THEN ! up Y IF(ISKY==0) THEN IB=ITWO(IZ,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=(IX-1)*NZ+IZ ENDIF ELSE ! up Z IF(ISKY==0) THEN IB=ITWO(IY,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=I ENDIF ENDIF GH=AMAX1(F(L0H+I)-F(L0HI(IDMN,IBLIN)+IB),0.0) ENDIF VELIN=-999. ! initialize DO IL=1,NLINEV-1 ! loop over table IF(GH>=VEL_TAB(IL,1).AND.GH<=VEL_TAB(IL+1,1)) THEN ! linear VELIN=VEL_TAB(IL,IVELIN)+(GH-VEL_TAB(IL,1)) ! interpolation 1 *(VEL_TAB(IL+1,IVELIN)-VEL_TAB(IL,IVELIN))/ 1 (VEL_TAB(IL+1,1)-VEL_TAB(IL,1)) EXIT ENDIF ENDDO IF(QEQ(VELIN,-999.)) THEN ! no value was found IF(GH>=VEL_TAB(NLINEV,1)) THEN ! above top of table VELIN=VEL_TAB(NLINEV,IVELIN) ! use top value ELSE ! below bottom VELIN=VEL_TAB(1,IVELIN) ! use bottom value ENDIF ENDIF C... Set sign of mass-flux IF(INTTYP==2.OR.INTTYP==4.OR.INTTYP==6) THEN ! E, N or H IF(VELIN>0) THEN ! +ve vel points out, so mass is -ve VELIN=-VELIN ELSE ! -ve vel points in, so mass is +ve VELIN=ABS(VELIN) ENDIF ENDIF F(L0VAL+I)=RHOIN*VELIN*AMULT ELSE F(L0VAL+I)=0.0 ENDIF ENDDO ENDDO ELSEIF(IBLTY==2) THEN ! Power-Law profile VELCON=RHOIN*VELIN/REFH**ALPHA DO IX=IXF,IXL DO IY=IYF,IYL IF(.NOT.SLD(I)) THEN IJKDM=I+(IZ-1)*NX*NY IF(PARSOL) IPBC = IPBSEQ(IDMN,IJKDM) ! get cut-cell number IF(IPBC>0) THEN ! dealing with cut cell IT1= ISHPB(IDMN,IPBC) GH=F(IT1+PBC_SNYDIST) ! dist from sunny sub-cell cen. ! to cut-plane ELSE IF(IVDIR==1) THEN ! up X IF(ISKY==0) THEN IB=ITWO(IZ,IY,ITYPE==4.OR.ITYPE==5) ELSE IB=(IY-1)*NZ+IZ ENDIF ELSEIF(IVDIR==2) THEN ! up Y IF(ISKY==0) THEN IB=ITWO(IZ,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=(IX-1)*NZ+IZ ENDIF ELSE ! up Z IF(ISKY==0) THEN IB=ITWO(IY,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=I ENDIF ENDIF GH=AMAX1(F(L0H+I)-F(L0HI(IDMN,IBLIN)+IB),0.0) ENDIF C... divide by FIXFLU if CO was GRND8 F(L0VAL+I)=VELCON*(GH**ALPHA)*AMULT ELSE F(L0VAL+I)=0.0 ENDIF ENDDO ENDDO ELSEIF(IBLTY==1) THEN C... log-law profile C --------------- C.. VELCON = RHOIN*(QTAU/KAPPA) IF(ITPRO > 0 .AND. ITPRO /= 4) THEN VELCON=RHOIN*VELIN/(LOG(REFH/ZO)-PSIF(REFH,GLMO)) ELSE ! Neutral VELCON=RHOIN*VELIN/LOG(REFH/ZO) ENDIF DO IX=IXF,IXL DO IY=IYF,IYL IF(.NOT.SLD(I)) THEN IJKDM=I+(IZ-1)*NX*NY IF(PARSOL) IPBC = IPBSEQ(IDMN,IJKDM) ! get cut-cell number IF(IPBC>0) THEN ! dealing with cut cell IT1= ISHPB(IDMN,IPBC) GH=F(IT1+PBC_SNYDIST) ! dist from sunny sub-cell cen. ! 
to cut-plane ELSE IF(IVDIR==1) THEN ! up X IF(ISKY==0) THEN IB=ITWO(IZ,IY,ITYPE==4.OR.ITYPE==5) ELSE IB=(IY-1)*NZ+IZ ENDIF ELSEIF(IVDIR==2) THEN ! up Y IF(ISKY==0) THEN IB=ITWO(IZ,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=(IX-1)*NZ+IZ ENDIF ELSE ! up Z IF(ISKY==0) THEN IB=ITWO(IY,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=I ENDIF ENDIF GH=AMAX1(F(L0H+I)-F(L0HI(IDMN,IBLIN)+IB),0.0) ENDIF IF(EQZ(WALLB)) THEN GHDZO=AMAX1(GH/ZO,2.0) ELSE IF(LTZ(WALLB)) THEN GHDZO=(GH-WALLB)/ZO ELSE GHDZO=AMAX1((GH-WALLB)/ZO,2.0) ENDIF ENDIF C... divide by FIXFLU if CO was GRND8 C.. VELCON = RHOIN*(QTAU/KAPPA) IF(ITPRO > 0 .AND. ITPRO /= 4) THEN F(L0VAL+I)=VELCON*(LOG(GHDZO)-PSIF(GH,GLMO))*AMULT ELSE ! Neutral F(L0VAL+I)=VELCON*LOG(GHDZO)*AMULT ENDIF ELSE F(L0VAL+I)=0. ENDIF ENDDO ENDDO ENDIF C C... Inlet velocity components C ========================= ELSEIF(INDVAR==U1.OR.INDVAR==V1.OR.INDVAR==W1) THEN L0VAL=L0F(VAL); l0velin=0 IF(INDVAR==U1) THEN VELIN=VELX lbvelin=lbname('UIN') ELSEIF(INDVAR==V1) THEN VELIN=VELY lbvelin=lbname('VIN') ELSEIF(INDVAR==W1) THEN VELIN=VELZ lbvelin=lbname('WIN') ENDIF if(lbvelin>0) l0velin=l0f(lbvelin) C IPBC=0 IF(IBLTY==3) THEN ! Table profile IVELIN=(INDVAR-1)/2+1 ! get component in table, column 2,3 or 4 DO IX=IXF,IXL DO IY=IYF,IYL IF(.NOT.SLD(I)) THEN IJKDM=I+(IZ-1)*NX*NY IF(PARSOL) IPBC = IPBSEQ(IDMN,IJKDM) ! get cut-cell number IF(IPBC>0.AND.ISKY==0) THEN ! dealing with cut cell IT1= ISHPB(IDMN,IPBC) GH=F(IT1+PBC_SNYDIST) ! dist from sunny sub-cell cen. ! to cut-plane ELSE IF(IVDIR==1) THEN ! up X IF(ISKY==0) THEN IB=ITWO(IZ,IY,ITYPE==4.OR.ITYPE==5) ELSE IB=(IY-1)*NZ+IZ ENDIF ELSEIF(IVDIR==2) THEN ! up Y IF(ISKY==0) THEN IB=ITWO(IZ,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=(IX-1)*NZ+IZ ENDIF ELSE ! up Z IF(ISKY==0) THEN IB=ITWO(IY,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=I ENDIF ENDIF GH=AMAX1(F(L0H+I)-F(L0HI(IDMN,IBLIN)+IB),0.0) ENDIF VELIN=-999. ! initalise DO IL=1,NLINEV-1 ! loop over table IF(GH>=VEL_TAB(IL,1).AND.GH<=VEL_TAB(IL+1,1)) THEN ! linear VELIN=VEL_TAB(IL,IVELIN)+(GH-VEL_TAB(IL,1)) ! interpolation 1 *(VEL_TAB(IL+1,IVELIN)-VEL_TAB(IL,IVELIN))/ 1 (VEL_TAB(IL+1,1)-VEL_TAB(IL,1)) EXIT ENDIF ENDDO IF(QEQ(VELIN,-999.)) THEN ! no value found IF(GH>=VEL_TAB(NLINEV,1)) THEN ! above top of table VELIN=VEL_TAB(NLINEV,IVELIN) ! use top value ELSE ! below bottom of table VELIN=VEL_TAB(1,IVELIN) ! use bottom value ENDIF ENDIF F(L0VAL+I)=VELIN ELSE F(L0VAL+I)=0.0 ENDIF if(l0velin>0) f(l0velin+i)=f(l0val+i) ENDDO ENDDO ELSEIF(IBLTY==2) THEN ! Power-Law profile VELCON=VELIN/REFH**ALPHA DO IX=IXF,IXL DO IY=IYF,IYL IF(.NOT.SLD(I)) THEN IJKDM=I+(IZ-1)*NX*NY IF(PARSOL) IPBC = IPBSEQ(IDMN,IJKDM) ! get cut-cell number IF(IPBC>0.AND.ISKY==0) THEN ! dealing with cut cell IT1= ISHPB(IDMN,IPBC) GH=F(IT1+PBC_SNYDIST) ELSE IF(IVDIR==1) THEN ! up X IF(ISKY==0) THEN IB=ITWO(IZ,IY,ITYPE==4.OR.ITYPE==5) ELSE IB=(IY-1)*NZ+IZ ENDIF ELSEIF(IVDIR==2) THEN ! up Y IF(ISKY==0) THEN IB=ITWO(IZ,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=(IX-1)*NZ+IZ ENDIF ELSE ! up Z IF(ISKY==0) THEN IB=ITWO(IY,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=I ENDIF ENDIF GH=AMAX1(F(L0H+I)-F(L0HI(IDMN,IBLIN)+IB),0.0) ENDIF F(L0VAL+I)=VELCON*GH**ALPHA ELSE F(L0VAL+I)=0.0 ENDIF ENDDO ENDDO C... log-law profile C --------------- ELSEIF(IBLTY==1) THEN C.. VELCON = QTAU/KAPPA IF(ITPRO > 0 .AND. ITPRO /= 4) THEN VELCON=VELIN/(LOG((REFH-WALLB)/ZO)-PSIF(REFH,GLMO)) ELSE ! Neutral VELCON=VELIN/LOG((REFH-WALLB)/ZO) ENDIF DO IX=IXF,IXL DO IY=IYF,IYL IF(.NOT.SLD(I)) THEN IJKDM=I+(IZ-1)*NX*NY IF(PARSOL) IPBC = IPBSEQ(IDMN,IJKDM) ! get cut-cell number IF(IPBC>0.AND.ISKY==0) THEN ! 
dealing with cut cell IT1= ISHPB(IDMN,IPBC) GH=F(IT1+PBC_SNYDIST) ELSE IF(IVDIR==1) THEN ! up X IF(ISKY==0) THEN IB=ITWO(IZ,IY,ITYPE==4.OR.ITYPE==5) ELSE IB=(IY-1)*NZ+IZ ENDIF ELSEIF(IVDIR==2) THEN ! up Y IF(ISKY==0) THEN IB=ITWO(IZ,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=(IX-1)*NZ+IZ ENDIF ELSE ! up Z IF(ISKY==0) THEN IB=ITWO(IY,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=I ENDIF ENDIF GH=AMAX1(F(L0H+I)-F(L0HI(IDMN,IBLIN)+IB),0.0) ENDIF IF(EQZ(WALLB)) THEN GHDZO=AMAX1(GH/ZO,2.0) ELSE IF(LTZ(WALLB)) THEN GHDZO=(GH-WALLB)/ZO ELSE GHDZO=AMAX1((GH-WALLB)/ZO,2.0) ENDIF ENDIF C.. VELCON = QTAU/KAPPA IF(ITPRO > 0 .AND. ITPRO /= 4) THEN F(L0VAL+I)=VELCON*(LOG(GHDZO)-PSIF(GH,GLMO)) ELSE ! Neutral F(L0VAL+I)=VELCON*LOG(GHDZO) ENDIF ELSE F(L0VAL+I)=0.0 ENDIF if(l0velin>0) f(l0velin+i)=f(l0val+i) ENDDO ENDDO ENDIF C C... inlet k and ep/omeg values C ========================== C ELSEIF(INDVAR==KE.OR.INDVAR==EP.OR.INDVAR.EQ.LBOMEG) THEN L0VAL=L0F(VAL) IPBC=0 IF(IBLTY==1.OR.IBLTY==2) THEN ! log and power law REFHMD = REFH-WALLB IF(ITPRO > 0 .AND. ITPRO /= 4) THEN QTAU = ABS(AK*QREF/(LOG((REFHMD)/ZO)-PSIF(REFHMD,GLMO))) ELSE QTAU = ABS(AK*QREF/(LOG((REFHMD)/ZO))) ENDIF QTAU2 = QTAU*QTAU C... Taudke=cmucd**0.5 GKEIN = QTAU2/TAUDKE ! Nb: Taudke=sqrt(CmuCd) GEPCON = QTAU2*QTAU/AK IF(INDVAR==KE) THEN CALL FN1(VAL,GKEIN) IF(ITPRO > 0 .AND. ITPRO /= 4) THEN lbkein=lbname('KEIN') ; if(lbkein>0) l0kein=l0f(lbkein) DO IX=IXF,IXL DO IY=IYF,IYL IF(.NOT.SLD(I)) THEN ! open cell IJKDM=I+(IZ-1)*NX*NY IF(PARSOL) IPBC = IPBSEQ(IDMN,IJKDM) ! get cut-cell number IF(IPBC>0.AND.ISKY==0) THEN ! dealing with cut cell IT1= ISHPB(IDMN,IPBC) GH=F(IT1+PBC_SNYDIST) ELSE IF(IVDIR==1) THEN ! up X IF(ISKY==0) THEN IB=ITWO(IZ,IY,ITYPE==4.OR.ITYPE==5) ELSE IB=(IY-1)*NZ+IZ ENDIF ELSEIF(IVDIR==2) THEN ! up Y IF(ISKY==0) THEN IB=ITWO(IZ,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=(IX-1)*NZ+IZ ENDIF ELSE ! up Z IF(ISKY==0) THEN IB=ITWO(IY,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=I ENDIF ENDIF GH=AMAX1(F(L0H+I)-F(L0HI(IDMN,IBLIN)+IB),0.0) ENDIF IF(LTZ(WALLB)) THEN GZMDH=(GH-WALLB) ELSE GZMDH=AMAX1(2.0*ZO,(GH-WALLB)) ENDIF GPHIE = PHIE(GZMDH,GLMO) GPHIM = 1.0-PSIFT(GZMDH,GLMO) F(L0VAL+I)=GKEIN*(GPHIE/GPHIM)**0.5 ELSE F(L0VAL+I)=0.0 ENDIF if(lbkein>0) f(l0kein+i)=f(l0val+i) ENDDO ENDDO ENDIF ENDIF IF(INDVAR==EP.OR.INDVAR.EQ.LBOMEG) THEN IF(INDVAR==LBOMEG) THEN lbomin=lbname('OMIN') ; if(lbomin>0) l0omin=l0f(lbomin) L0OMEG=L0F(LBOMEG) GOMCON=QTAU/(AK*TAUDKE) ELSE lbepin=lbname('EPIN') ; if(lbepin>0) l0epin=l0f(lbepin) L0EP=L0F(EP) ENDIF DO IX=IXF,IXL DO IY=IYF,IYL IF(.NOT.SLD(I)) THEN ! open cell IJKDM=I+(IZ-1)*NX*NY IF(PARSOL) IPBC = IPBSEQ(IDMN,IJKDM) ! get cut-cell number IF(IPBC>0.AND.ISKY==0) THEN ! dealing with cut cell IT1= ISHPB(IDMN,IPBC) GH=F(IT1+PBC_SNYDIST) ELSE IF(IVDIR==1) THEN ! up X IF(ISKY==0) THEN IB=ITWO(IZ,IY,ITYPE==4.OR.ITYPE==5) ELSE IB=(IY-1)*NZ+IZ ENDIF ELSEIF(IVDIR==2) THEN ! up Y IF(ISKY==0) THEN IB=ITWO(IZ,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=(IX-1)*NZ+IZ ENDIF ELSE ! up Z IF(ISKY==0) THEN IB=ITWO(IY,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=I ENDIF ENDIF GH=AMAX1(F(L0H+I)-F(L0HI(IDMN,IBLIN)+IB),0.0) ENDIF IF(LTZ(WALLB)) THEN GZMDH=(GH-WALLB) ELSE GZMDH=AMAX1(2.0*ZO,(GH-WALLB)) ENDIF IF(INDVAR==EP) THEN GEPIN=GEPCON/GZMDH ! e = ustar^3/(ak*(z-d)) IF(ITPRO > 0 .AND. ITPRO /= 4) THEN GEPIN=GEPIN*PHIE(GZMDH,GLMO) ENDIF F(L0VAL+I)=GEPIN ELSE IF(INDVAR==LBOMEG) THEN GOMEGI=GOMCON/GZMDH ! f = ustar/(sqrt(CmuCD)*ak*(z-d)) IF(ITPRO > 0 .AND. 
ITPRO /= 4) THEN GPHIE = PHIE(GZMDH,GLMO) GPHIM = 1.0-PSIFT(GZMDH,GLMO) F(L0VAL+I)=GOMEGI*(GPHIE*GPHIM)**0.5 ELSE F(L0VAL+I)=GOMEGI ENDIF ENDIF ELSE F(L0VAL+I)=0.0 ENDIF if(indvar==ep.and.lbepin>0) f(l0epin+i)=f(l0val+i) if(indvar==lbomeg.and.lbomin>0) f(l0omin+i)=f(l0val+i) ENDDO ENDDO ENDIF ELSE ! TABLE profile lbkein=lbname('KEIN') if(lbkein>0) l0kein=l0f(lbkein) lbepin=lbname('EPIN') if(lbepin>0) l0epin=l0f(lbepin) lbomin=lbname('OMIN') if(lbomin>0) l0omin=l0f(lbomin) DO IX=IXF,IXL DO IY=IYF,IYL IF(.NOT.SLD(I)) THEN ! open cell IJKDM=I+(IZ-1)*NX*NY IF(PARSOL) IPBC = IPBSEQ(IDMN,IJKDM) ! get cut-cell number IF(IPBC>0.AND.ISKY==0) THEN ! dealing with cut cell IT1= ISHPB(IDMN,IPBC) GH=F(IT1+PBC_SNYDIST) ELSE IF(IVDIR==1) THEN ! up X IF(ISKY==0) THEN IB=ITWO(IZ,IY,ITYPE==4.OR.ITYPE==5) ELSE IB=(IY-1)*NZ+IZ ENDIF ELSEIF(IVDIR==2) THEN ! up Y IF(ISKY==0) THEN IB=ITWO(IZ,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=(IX-1)*NZ+IZ ENDIF ELSE ! up Z IF(ISKY==0) THEN IB=ITWO(IY,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=I ENDIF ENDIF GH=AMAX1(F(L0H+I)-F(L0HI(IDMN,IBLIN)+IB),0.0) ENDIF GKEIN=-999. ! initialise DO IL=1,NLINEK-1 IF(GH>=KE_TAB(IL,1).AND.GH<=KE_TAB(IL+1,1)) THEN ! linear GKEIN=KE_TAB(IL,2)+(GH-KE_TAB(IL,1)) ! interpolation 1 *(KE_TAB(IL+1,2)-KE_TAB(IL,2))/ 1 (KE_TAB(IL+1,1)-KE_TAB(IL,1)) EXIT ENDIF ENDDO IF(QEQ(GKEIN,-999.)) THEN ! no value found IF(GH>=KE_TAB(NLINEK,1)) THEN ! above top of table GKEIN=KE_TAB(NLINEK,2) ! use top value ELSE ! below bottom of table GKEIN=KE_TAB(1,2) ! use bottom value ENDIF ENDIF IF(INDVAR==KE) THEN F(L0VAL+I)=GKEIN ELSEIF (INDVAR==EP) THEN ! Nb: Cd = (CmuCd)^0.75 F(L0VAL+I)=CD*GKEIN**1.5/(AK*GH) ! e = Cd*k^1.5/(ak*(z-d)) ELSEIF (INDVAR==LBOMEG) THEN ! Nb: RTTDKE=SQRT(TAUDKE) & TAUDKE=SQRT(CM F(L0VAL+I)=SQRT(GKEIN)/(AK*RTTDKE*GH) ! f = sqrt(k)/(ak*(z-d)*(CmuCd)^0.25) ENDIF ELSE F(L0VAL+I)=0.0 ENDIF if(indvar==ke.and.lbkein>0) f(l0kein+i)=f(l0val+i) if(indvar==ep.and.lbepin>0) f(l0epin+i)=f(l0val+i) if(indvar==lbomeg.and.lbomin>0) f(l0omin+i)=f(l0val+i) ENDDO ENDDO ENDIF C C... External air temperature C ======================== ELSEIF(INDVAR==ITEM1) THEN IF(ITPRO==0.OR.ITPRO==4) THEN ! Uniform profile or Neutral CALL FN1(VAL,TAIR) ELSE IF(ITPRO > 0 ) THEN C... Pasquill temperature profiles lbtin=lbname('TIN') if(lbtin>0) l0tin=l0f(lbtin) IF(ITPRO > 0 .AND. ITPRO /= 4) THEN C.. friction velocity REFHMD = REFH-WALLB ! (z-d) QTAU = AK*QREF/(LOG((REFHMD)/ZO)-PSIF(REFHMD,GLMO)) C... Friction temperature GTSTAR= -GQWALL/(RHOIN*CP1*QTAU) ; GTSDAK=GTSTAR/AK ENDIF L0VAL=L0F(VAL) IPBC=0 DO IX=IXF,IXL DO IY=IYF,IYL IF(.NOT.SLD(I)) THEN ! open cell IJKDM=I+(IZ-1)*NX*NY IF(PARSOL) IPBC = IPBSEQ(IDMN,IJKDM) ! get cut-cell number IF(IPBC>0.AND.ISKY==0) THEN ! dealing with cut cell IT1= ISHPB(IDMN,IPBC) GH=F(IT1+PBC_SNYDIST) ELSE IF(IVDIR==1) THEN ! up X IF(ISKY==0) THEN IB=ITWO(IZ,IY,ITYPE==4.OR.ITYPE==5) ELSE IB=(IY-1)*NZ+IZ ENDIF ELSEIF(IVDIR==2) THEN ! up Y IF(ISKY==0) THEN IB=ITWO(IZ,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=(IX-1)*NZ+IZ ENDIF ELSE ! up Z IF(ISKY==0) THEN IB=ITWO(IY,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=I ENDIF ENDIF GH=AMAX1(F(L0H+I)-F(L0HI(IDMN,IBLIN)+IB),0.0) ENDIF C IF(ITPRO==) THEN C.. linear profile:- T = T0 +gam*(z-z0) C GTLIN = GT0 - GALR*(GH-GZT0) C F(L0VAL+I)= GTLIN IF(ITPRO > 0 .AND. ITPRO /= 4) THEN ! Pasquill - Stable (LMO > 0) C.. 
log profile:- T = T0-gam*(z-z0)+(T*/k)*log(z/z0) IF(EQZ(WALLB)) THEN GHDZO=AMAX1(GH/ZO,2.0) ELSE IF(LTZ(WALLB)) THEN GHDZO=(GH-WALLB)/ZO ELSE GHDZO=AMAX1((GH-WALLB)/ZO,2.0) ENDIF ENDIF GTLIN = GT0 - GALR*(GH-GZT0) GPSIT = PSIFT(GH-WALLB,GLMO) F(L0VAL+I)=GTLIN+GTSDAK*(LOG(GHDZO)-GPSIT) ENDIF ELSE F(L0VAL+I)=0.0 ENDIF if(lbtin>0) f(l0tin+i)=f(l0val+i) ENDDO ENDDO ENDIF C C... External air humidity C ===================== ELSEIF(INDVAR==IHUM) THEN CALL FN1(VAL,HUMIN) ENDIF C C... Group 13, Section 14 Density-difference buoyancy forces C C... Buoyancy Forces C =============== ELSEIF(IGR==13.AND.ISC==14) THEN ! Density difference C.. Entry here is from GXBUOY with VAL=-g ; Rhoref = f(height) C for all velocities, so need to identify active gravity direction IF(INDVAR==3) THEN L0VAL= L0F(VAL) ; L0D9 = L0F(LD9) C DO I=1,NXNY C F(L0VAL+I) = F(L0VAL+I)*(F(L0D9+I) - BUOYD)/F(L0D9+I) C ENDDO L0DEN=L0F(DEN1) IF(LBRHIN>0) THEN L0RHIN=L0F(LBRHIN) ELSE L0RHIN=(IZ-1)*NXNY+L0RHIN0 ENDIF GGRAVY=BUOYA DO JX=1,NX-1 DO JY=1,NY IF(SLD(J)) THEN F(L0VAL+J)=0.0 ELSE GRHO=F(L0DEN+J) GRHON=F(L0DEN+J+NY) GRHOAV=0.5*(GRHO+GRHON) GRHRAV=0.5*(F(L0RHIN+J)+F(L0RHIN+J+NY)) C GRHOREF = BUOYD F(L0VAL+J)=(1.-GRHRAV/GRHOAV)*GGRAVY ENDIF ENDDO ENDDO ELSEIF(INDVAR==5) THEN L0VAL=L0F(VAL) L0DEN=L0F(DEN1) IF(LBRHIN>0) THEN L0RHIN=L0F(LBRHIN) ELSE L0RHIN=(IZ-1)*NXNY+L0RHIN0 ENDIF GGRAVY=BUOYB DO JX=1,NX DO JY=1,NY-1 IF(SLD(J)) THEN F(L0VAL+J)=0.0 ELSE GRHO=F(L0DEN+J) GRHON=F(L0DEN+J+1) GRHOAV=0.5*(GRHO+GRHON) GRHRAV=0.5*(F(L0RHIN+J)+F(L0RHIN+J+1)) C GRHOREF = BUOYD F(L0VAL+J)=(1.-GRHRAV/GRHOAV)*GGRAVY ENDIF ENDDO ENDDO ELSEIF(INDVAR==7) THEN L0VAL=L0F(VAL) ; L0D9 = L0F(LD9) L0DEN=L0F(DEN1); L0DENH=L0F(HIGH(DEN1)) IF(LBRHIN>0) THEN L0RHIN=L0F(LBRHIN); L0RHINH=L0F(HIGH(LBRHIN)) ELSE L0RHIN=(IZ-1)*NXNY+L0RHIN0 L0RHINH=L0RHIN+NXNY ENDIF GGRAVY=BUOYC DO JX=1,NX DO JY=1,NY IF(SLD(J)) THEN F(L0VAL+J)=0.0 ELSE GRHO=F(L0DEN+J) ; GRHON=F(L0DENH+J) GRHOAV=0.5*(GRHO+GRHON) C.. NB: 1st visit rhin,high=0.0 GRHRAV=0.5*(F(L0RHIN+J)+F(L0RHINH+J)) F(L0VAL+J)=(1.-GRHRAV/GRHOAV)*GGRAVY ENDIF ENDDO ENDDO ENDIF C------------------------------------------------------------------------- C... Group 13, Section 15 Boussinesq buoyancy forces C ELSEIF(IGR==13.AND.ISC==15) THEN ! Boussinesq buoyancy L0TEM1=L0F(ITEM1) IF(LBTREF>0) THEN L0TREF=L0F(LBTREF) ELSE L0TREF=L0TREF0+(IZ-1)*NXNY ENDIF L0VAL= L0F(VAL) ;L0D9 = L0F(LD9) IF(INDVAR==3) THEN ! U1 C DO I=1,NXNY C F(L0VAL+I) = F(L0VAL+I)*(F(L0D9+I) - BUOYD)/F(L0D9+I) C ENDDO GGRAVY=BUOYA DO JX=1,NX-1 DO JY=1,NY IF(SLD(J)) THEN F(L0VAL+J)=0.0 ELSE GTEM=F(L0TEM1+J) GTEMN=F(L0TEM1+J+NY) GTEMAV=0.5*(GTEM+GTEMN) GTRFAV=0.5*(F(L0TREF+J)+F(L0TREF+J+NY)) F(L0VAL+J)=(GTEMAV-GTRFAV)*GGRAVY*DVO1DT ENDIF ENDDO ENDDO ELSEIF(INDVAR==5) THEN ! V1 GGRAVY=BUOYB DO JX=1,NX DO JY=1,NY-1 IF(SLD(J)) THEN F(L0VAL+J)=0.0 ELSE GTEM=F(L0TEM1+J) GTEMN=F(L0TEM1+J+1) GTEMAV=0.5*(GTEM+GTEMN) GTRFAV=0.5*(F(L0TREF+J)+F(L0TREF+J+1)) C GRHOREF = BUOYD F(L0VAL+J)=(GTEMAV-GTRFAV)*GGRAVY*DVO1DT ENDIF ENDDO ENDDO ELSEIF(INDVAR==7) THEN ! W1 L0TEMH=L0F(HIGH(ITEM1)) IF(LBTREF>0) THEN L0TREFH=L0F(HIGH(LBTREF)) ELSE L0TREFH=L0TREF+NXNY ENDIF GGRAVY=BUOYC DO JX=1,NX DO JY=1,NY IF(SLD(J)) THEN F(L0VAL+J)=0.0 ELSE GTEM=F(L0TEM1+J) ; GTEMN=F(L0TEMH+J) GTEMAV=0.5*(GTEM+GTEMN) C.. NB: 1st visit rhin,high=0.0 GTRFAV=0.5*(F(L0TREF+J)+F(L0TREFH+J)) F(L0VAL+J)=(GTEMAV-GTRFAV)*GGRAVY*DVO1DT ENDIF ENDDO ENDDO ENDIF C----------------------------------------------------------------------------- ELSEIF(IGR==19.AND.ISC==1) THEN C... Group 19, Section 1. 
Start of time step C... Echo transient variation of wind parameters IF(ASSOCIATED(WDIR)) THEN C... A weather file is in use. Scan through data and interpolate DO I=1,NL TIM1=(I-1)*3600/NPHOUR; TIM2=I*3600/NPHOUR; DELT=TIM2-TIM1 IF(QGT(TIM,TIM1).AND.QLE(TIM,TIM2)) THEN ANG=WDIR(I)+(WDIR(I+1)-WDIR(I))*(TIM-TIM1)/DELT VEL=WSPD(I)+(WSPD(I+1)-WSPD(I))*(TIM-TIM1)/DELT TAIR=AIRTEMP(I)+(AIRTEMP(I+1)-AIRTEMP(I))*(TIM-TIM1)/DELT PAIR=ATMPRES(I)+(ATMPRES(I+1)-ATMPRES(I))*(TIM-TIM1)/DELT IF(IHUM>0) THEN HUMIN=RELHUM(I)+(RELHUM(I+1)-RELHUM(I))*(TIM-TIM1)/DELT ENDIF EXIT ENDIF ENDDO ANG=ANG-AXDIR IF(ANG<0.0) THEN ANG=360.0+ANG C... if wind angle > 360, subtract 360 ELSEIF(ANG>360) THEN ANG=ANG-360.0 ENDIF ANGR=ANG*ATAN(1.)/45. ! make radians IF(IHUM>0) THEN IF(IHUNIT>0) THEN IF(IHUNIT==1) THEN ! convert from humidity ratio HUMIN=1.E-3*HUMIN ELSEIF(IHUNIT==2) THEN ! convert from relative humidity PVAP = 1.E-2*HUMIN*PVH2O(TAIR) GWRAT = 18.015/28.96 RELH=HUMIN ! save for printout HUMIN = GWRAT*PVAP/(PAIR-PVAP) ENDIF HUMIN=HUMIN/(1+HUMIN) ! convert to mass fraction ENDIF ENDIF C... Reset buoyancy parameters IF(INIBUOY==1) THEN IF(QEQ(RHO1,GRND5)) THEN ! Ideal Gas used IF(NEZ(RHO1A).AND.IHUM==C1) THEN GASCON=RHO1A*(1-HUMIN)+RHO1B*HUMIN ELSE GASCON=1./RHO1B ENDIF RHOIN=PAIR/(GASCON*(TAIR+TEMP0)) C.... reset reference density for buoyancy to inlet density BUOYD=RHOIN ELSEIF(RHO1>0.0) THEN ! density is constant C.... reset reference temperature BUOYE=TAIR ENDIF C.... reset PRESS0 to external pressure PRESS0=PAIR ENDIF C... Update values for BLIN patches DO IDOM=1,NUMDMN DO IBL=1,NBLIN(IDOM) IUP=F(L0VDIR(IDOM)+IBL) IF(IUP==3) THEN ! Up Z VELX=-VEL*SIN(ANGR) VELY=-VEL*COS(ANGR) VELZ=0.0 ELSEIF(IUP==2) THEN ! Up Y VELX=-VEL*COS(ANGR) VELY=0.0 VELZ=-VEL*SIN(ANGR) ELSE ! Up X VELX=0.0 VELY=-VEL*SIN(ANGR) VELZ=-VEL*COS(ANGR) ENDIF C... Set inlet velocity components F(L0VELX(IDOM)+IBL)=VELX F(L0VELY(IDOM)+IBL)=VELY F(L0VELZ(IDOM)+IBL)=VELZ C.. Set inlet velocity magnitude at reference height F(L0QREF(IDMN)+IBL)=VEL F(L0TAIR(IDMN)+IBL)=TAIR F(L0PVAL(IDMN)+IBL)=PAIR IF(QEQ(RHO1,GRND5)) F(L0DENS(IDMN)+IBL)=RHOIN IF(IHUM>0) F(L0RELH(IDMN)+IBL)=HUMIN ENDDO ENDDO ELSE C... No Weather file - search for first BLIN active on this time step DO IP=1,NUMPAT IBLIN=NINT(F(L0IPAT(IDMN)+IP)) IF(IBLIN>0) THEN CALL GETPAT(IP,IREG,TYPE,JXF,JXL,JYF,JYL,JZF,JZL,JTF,JTL) IF(ISTEP>=JTF.AND.ISTEP<=JTL) THEN ! active this time step PAIR=F(L0PVAL(IDMN)+IBLIN); RHOIN=F(L0DENS(IDMN)+IBLIN) TAIR=F(L0TAIR(IDMN)+IBLIN); HUMIN=F(L0RELH(IDMN)+IBLIN) VEL=F(L0QREF(IDMN)+IBLIN); RELH=-999.0 CALL GETSDR(NAMPAT(IP),'WDIR',ANG);ANG=ANG-AXDIR IF(INIBUOY==1) THEN PRESS0=PAIR ! update reference pressure IF(QEQ(RHO1,GRND5)) THEN ! Ideal Gas BUOYD=RHOIN ! update reference density ELSEIF(RHO1>0.0) THEN ! Constant density BUOYE=TAIR ! update reference temperature ENDIF ENDIF EXIT ENDIF ENDIF ENDDO C... No wind boundary found for this step, so nothing to print IF(IP>NUMPAT) THEN WRITE(14,'(''No wind boundary conditions found at step '', 1 I6)') ISTEP RETURN ENDIF ENDIF C... Echo inputs to RESULT CALL WRITST CH1=CHGR_13(TIM); LT=LENGZZ(CH1) WRITE(LUPR1,'('' Wind profile data for time step '',I6, 1 '' at time '',A,''s'')') ISTEP, CH1(1:LT) IHR=INT(TIM/3600.); TIM1=TIM-IHR*3600.; IMN=INT(TIM1/60.) 
WRITE(LUPR1,'('' Elapsed time: '',I4,'' hrs '',I2,'' mins'')' 1 ) IHR,IMN CH1=CHGR_13(ANG+AXDIR); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' Wind direction: '',A,A)') CH1(1:LT),CHAR(176) CH1=CHGR_13(VEL); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' Wind speed: '',A,'' m/s'')') CH1(1:LT) IF(ITEM1>0) THEN CH1=CHGR_13(TAIR); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' Air temperature: '',A,A1,''C'')') CH1(1:LT),CHAR(176) ENDIF CH1=CHGR_13(PAIR); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' External pressure: '',A,'' Pa'')') CH1(1:LT) IF(IHUM>0) THEN IF(IHUNIT==0) THEN ! mass fraction CH1=CHGR_13(HUMIN); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' H2O mass fraction: '',A,'' '')') CH1(1:LT) ELSEIF(IHUNIT==1) THEN ! humidity ratio CH1=CHGR_13(HUMIN*1000.); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' Humidity ratio: '',A,'' g/kg'')') CH1(1:LT) ELSE ! relative humidity IF(QEQ(RELH,-999.0)) THEN ! recover from mass fraction HUMIN=AMIN1(1.0,AMAX1(0.0,HUMIN)) IF(QEQ(HUMIN,1.0)) THEN HRAT=1./TINY ELSE HRAT = HUMIN/(1.-HUMIN+1.E-10) ENDIF PVSAT = PVH2O(TAIR) ! Water Vapour Saturation Pressure (Pa) GWRAT = 18.015/28.96 PVAP = HRAT*PAIR/(GWRAT+HRAT) ! Water vapour Pressure (Pa) RELH = AMAX1(0.0, AMIN1(100.0, 100.*PVAP/PVSAT)) ! Relative humidity (%) ENDIF CH1=CHGR_13(RELH); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' Relative humidity: '',A,'' %'')') CH1(1:LT) ENDIF ENDIF CH1=CHGR_13(RHOIN); LT=LENGZZ(CH1) WRITE(LUPR1, 1 '('' Air density: '',A,'' kg/m^3'')') CH1(1:LT) CALL WRITST ELSEIF(IGR==19.AND.ISC==4) THEN C============================================================ C... Group 19, Section 4. End of IZ slab C Compute Pasquill-F Buoyancy-reference profiles for use C in Group 13 buoyancy source terms C Entry here from Grex3 only if PASQBUOY=T C============================================================ IBLIN=1 C.. inlet velocity magnitude at reference height QREF=F(L0QREF(IDMN)+IBLIN) C.. inlet temperature TAIR=F(L0TAIR(IDMN)+IBLIN) C.. vertical coordinate direction IVDIR=NINT(F(L0VDIR(IDMN)+IBLIN)) C.. inlet density RHOIN=F(L0DENS(IDMN)+IBLIN) C.. effective roughness length ZO=F(L0ZREF(IDMN)+IBLIN) C.. reference height for wind reference velocity REFH=F(L0HREF(IDMN)+IBLIN) C ISKY=NINT(F(L0SKY(IDMN)+IBLIN)) C... store vertical height of grid nodes in L0H IF(IVDIR==1) THEN ! up X IF(ISKY==1) THEN CALL FN0(-L0H,XU2D) ELSE CALL FN0(-L0H,XG2D) ENDIF ELSEIF(IVDIR==2) THEN ! up Y IF(ISKY==1) THEN CALL FN0(-L0H,YV2D) ELSE CALL FN0(-L0H,YG2D) ENDIF ELSEIF(IVDIR==3) THEN ! up Z IF(ISKY==1) THEN GH=F(L0F(ZWNZ)+IZ) ELSE GH=F(L0F(ZGNZ)+IZ) ENDIF CALL FN1(-L0H,GH) ENDIF C C.. Compute inlet (reference) temperature profile C ============================================= IF(LBTREF>0) THEN L0TREF=L0F(LBTREF) ELSE L0TREF=L0TREF0+(IZ-1)*NXNY ENDIF C... Reference Temperature c... Datum is at y=0 ; p=p0 rho=rhoo T=To GY0 = 0.0 ; GP0 = 0.0 GRGAS= 286.7 GRHO0= (PRESS0+GP0)/(GRGAS*(GT0+TEMP0)) C GZG = GH REFHMD = REFH-WALLB ! (z-d) QTAU = AK*QREF/(LOG((REFHMD)/ZO)-PSIF(REFHMD,GLMO)) GTSTAR= -GQWALL/(RHOIN*CP1*QTAU) ; GTSDAK=GTSTAR/AK IPBC=0 DO IX=1,NX DO IY=1,NY IF(.NOT.SLD(I)) THEN ! open cell IJKDM=I+(IZ-1)*NX*NY IF(PARSOL) IPBC = IPBSEQ(IDMN,IJKDM) ! get cut-cell number IF(IPBC>0.AND.ISKY==0) THEN ! dealing with cut cell IT1= ISHPB(IDMN,IPBC) GH=F(IT1+PBC_SNYDIST) ELSE IF(IVDIR==1) THEN ! up X IF(ISKY==0) THEN IB=ITWO(IZ,IY,ITYPE==4.OR.ITYPE==5) ELSE IB=(IY-1)*NZ+IZ ENDIF ELSEIF(IVDIR==2) THEN ! up Y IF(ISKY==0) THEN IB=ITWO(IZ,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=(IX-1)*NZ+IZ ENDIF ELSE ! 
up Z IF(ISKY==0) THEN IB=ITWO(IY,IX,ITYPE==2.OR.ITYPE==3) ELSE IB=I ENDIF ENDIF GH=AMAX1(F(L0H+I)-F(L0HI(IDMN,IBLIN)+IB),0.0) ENDIF GTLIN = GT0 - GALR*(GH-GZT0) GPSIT = PSIFT(GH-WALLB,GLMO) IF(EQZ(WALLB)) THEN GHDZO=AMAX1(GH/ZO,2.0) ELSE IF(LTZ(WALLB)) THEN GHDZO=(GH-WALLB)/ZO ELSE GHDZO=AMAX1((GH-WALLB)/ZO,2.0) ENDIF ENDIF ENDIF F(L0TREF+I)=GTLIN+GTSDAK*(LOG(GHDZO)-GPSIT) ENDDO ENDDO C IF(BUOSSG) RETURN C C Density-difference buoyancy profiles C C.. Compute inlet (reference) pressure & density profiles C ===================================================== IF(LBPIN>0) THEN L0PIN = L0F(LBPIN) L0PINL = L0F(LOW(LBPIN)) ELSE L0PIN = (IZ-1)*NXNY+L0PIN0 L0PINL = L0PIN-NXNY ENDIF IF(LBRHIN>0) THEN L0RHIN = L0F(LBRHIN) L0RHINL = L0F(LOW(LBRHIN)) ELSE L0RHIN = (IZ-1)*NXNY+L0RHIN0 L0RHINL = L0RHIN-NXNY ENDIF IF(IVDIR==1) THEN ! Up X GGRAVY=BUOYA L0XG = L0F(XG2D) ; L0DXG=L0F(DXG2D) C... Pressure & density at first vertical cell JX=1 DO JY=1,NY J=(JX-1)*NY+JY GDYP=F(L0XG+1) GTERM1 = 0.5*GRHO0*GGRAVY*GDYP GTERM2 = 0.5*GGRAVY*GDYP/(GRGAS*(F(L0TREF+J)+TEMP0)) F(L0PIN+J)=(GP0+GTERM1)/(1.-GTERM2) F(L0RHIN+J)=(PRESS0+F(L0PIN+J))/(GRGAS*(F(L0TREF+J)+TEMP0)) ENDDO C... Remainder of reference pressure & density profiles DO JX=2,NX DO JY=1,NY JS=J-NY GDYGS=F(L0DXG+JS) GTERM1 = 0.5*F(L0RHIN+JS)*GGRAVY*GDYGS GTERM2 = 0.5*GGRAVY*GDYGS/(GRGAS*(F(L0TREF+J)+TEMP0)) F(L0PIN+J)=(F(L0PIN+JS)+GTERM1)/(1.-GTERM2) F(L0RHIN+J)=(PRESS0+F(L0PIN+J))/(GRGAS*(F(L0TREF+J)+TEMP0)) ENDDO ENDDO C ELSEIF(IVDIR==2) THEN ! Up y C GGRAVY=BUOYB L0YG = L0F(YG2D) ; L0DYG=L0F(DYG2D) C... Pressure & density at first vertical cell DO JX=1,NX J=(JX-1)*NY+1 GDYP=F(L0YG+1) GTERM1 = 0.5*GRHO0*GGRAVY*GDYP GTERM2 = 0.5*GGRAVY*GDYP/(GRGAS*(F(L0TREF+J)+TEMP0)) F(L0PIN+J)=(GP0+GTERM1)/(1.-GTERM2) F(L0RHIN+J)=(PRESS0+F(L0PIN+J))/(GRGAS*(F(L0TREF+J)+TEMP0)) ENDDO C... Remainder of reference pressure & density profiles DO JX=1,NX DO JY=2,NY JS=J-1 GDYGS=F(L0DYG+JS) GTERM1 = 0.5*F(L0RHIN+JS)*GGRAVY*GDYGS GTERM2 = 0.5*GGRAVY*GDYGS/(GRGAS*(F(L0TREF+J)+TEMP0)) F(L0PIN+J)=(F(L0PIN+JS)+GTERM1)/(1.-GTERM2) F(L0RHIN+J)=(PRESS0+F(L0PIN+J))/(GRGAS*(F(L0TREF+J)+TEMP0)) ENDDO ENDDO C ELSEIF(IVDIR==3) THEN ! Up z C GGRAVY=BUOYC L0ZG = L0F(ZGNZ) ; L0DZG=L0F(DZGNZ) C... Pressure & density at first vertical cell IF(IZ.EQ.1) THEN DO J=1,NXNY GDZP=F(L0ZG+IZ) GTREF=F(L0TREF+J) GTERM1 = 0.5*GRHO0*GGRAVY*GDZP GTERM2 = 0.5*GGRAVY*GDZP/(GRGAS*(F(L0TREF+J)+TEMP0)) F(L0PIN+J)=(GP0+GTERM1)/(1.-GTERM2) F(L0RHIN+J)=(PRESS0+F(L0PIN+J))/(GRGAS*(GTREF+TEMP0)) ENDDO C... Remainder of reference pressure & density profiles ELSE DO J=1,NXNY GDZGS=F(L0DZG+IZ-1) GTERM1 = 0.5*F(L0RHINL+J)*GGRAVY*GDZGS GTERM2 = 0.5*GGRAVY*GDZGS/(GRGAS*(F(L0TREF+J)+TEMP0)) F(L0PIN+J)=(F(L0PINL+J)+GTERM1)/(1.-GTERM2) F(L0RHIN+J)=(PRESS0+F(L0PIN+J))/(GRGAS*(F(L0TREF+J)+TEMP0)) ENDDO ENDIF C ENDIF C------------------------------------------------------------------------- ELSEIF(IGR==19.AND.ISC==6) THEN C... Group 19, Section 6. End of IZ slab C... get velocity amplification factor IF(LBWAMP>0.OR.LBWAF>0.OR.LBWAT>0.OR.LBCP>0.OR.SHOWCOMF) THEN C... Calculate and store Wind Velocity Amplification Factor C... Factor is defined as Absolute velocity / wind speed IBL=1 ELSE IBL=IBLIN ENDIF ZO=F(L0ZREF(IDMN)+IBL) IBLTY=NINT(F(L0BLTY(IDMN)+IBL)) QREF=F(L0QREF(IDMN)+IBL) REFH=F(L0HREF(IDMN)+IBL) IF(IBLTY==2) ALPHA=F(L0POWR(IDMN)+IBL) IF(LBWAMP>0) THEN L0WAMP=L0F(LBWAMP) ! 
index for amplification store ENDIF IF(LBWAF>0) THEN L0WAF=L0F(LBWAF); L0ZG=L0F(ZGNZ) ENDIF IF(LBWAT>0) THEN L0WAT=L0F(LBWAT); L0ZG=L0F(ZGNZ) ENDIF IF(LBWAF>0.OR.LBWAT>0) THEN lbvref=lbname('VREF'); lbhref=lbname('HREF') if(lbvref>0) l0vrf=l0f(lbvref) if(lbhref>0) l0hrf=l0f(lbhref) ENDIF IF(LBVABS>0) THEN ! VABS is stored, so just use it L0VABS=L0F(LBVABS) ELSE L0VABS=L0VAB(IDMN) L0U=0; L0V=0; L0W=0; L0WL=0 IF(NX>1) L0U=L0F(U1) IF(NY>1) L0V=L0F(V1) IF(NZ>1) THEN L0W=L0F(W1); IF(IZ>1) L0WL=L0F(LOW(W1)) ENDIF ENDIF lbghh=lbname('GHH') if(lbghh>0) l0ghh=l0f(lbghh) DO IX=1,NX DO IY=1,NY I=(IX-1)*NY+IY IF(LBVABS>0) THEN VABS=F(L0VABS+I) ELSE DU=TINY; UVEL=0.; DV=TINY; VVEL=0.; DW=TINY; WVEL=0. IF(.NOT.SLD(I)) THEN ! current cell fluid C... get cell centre velocity as average of E/W, N/S, H/L faces where not blocked IF(L0U>0) THEN IF(IX 1) THEN ! not at IX=1 IF(.NOT.SLD(I-NY)) THEN ! West cell fluid UVEL=UVEL+F(L0U+I-NY); DU=DU+1. ENDIF ENDIF ENDIF IF(L0V>0) THEN IF(IY 1) THEN ! not at IY=1 IF(.NOT.SLD(I-1)) THEN ! South cell fluid VVEL=VVEL+F(L0V+I-1); DV=DV+1. ENDIF ENDIF ENDIF IF(L0W>0) THEN IF(IZ 1) THEN ! not at IZ=1 IF(.NOT.SLD(I-NXNY)) THEN ! Low cell fluid WVEL=WVEL+F(L0WL+I); DW=DW+1. ENDIF ENDIF UVEL=UVEL/DU; VVEL=VVEL/DV; WVEL=WVEL/DW ! average front/back ENDIF ENDIF C... now get absolute velocity and then amplification factor VABS=(UVEL**2+VVEL**2+WVEL**2)**.5 F(L0VABS+I)=VABS ENDIF IF(LBWAMP>0) F(L0WAMP+I)=VABS/(WAMPVR+TINY) IF(LBWAF>0.OR.LBWAT>0) THEN GH=F(L0ZG+IZ)-F(L0HIWAF(IDMN)+I) if(lbghh>0) f(l0ghh+i)=F(L0HIWAF(IDMN)+I) IF(IBLTY==2) THEN ! power-law profile WAFVR=QREF*(GH/REFH)**ALPHA ELSEIF(IBLTY==1) THEN ! log profile IF(EQZ(WALLB)) THEN GHDZO=AMAX1(GH/ZO,2.0) ELSE IF(LTZ(WALLB)) THEN GHDZO=(GH-WALLB)/ZO ELSE GHDZO=AMAX1((GH-WALLB)/ZO,2.0) ENDIF ENDIF WAFVR=QREF*LOG(GHDZO)/LOG(REFH/ZO) ELSE FACT=-999. ! initialize DO IL=1,NLINEV-1 ! loop over table IF(GH>=VEL_TAB(IL,1).AND.GH<=VEL_TAB(IL+1,1)) THEN FACT=(GH -VEL_TAB(IL,1))/ 1 (VEL_TAB(IL+1,1)-VEL_TAB(IL,1)) ILP1=IL+1; EXIT ENDIF ENDDO IF(QEQ(FACT,-999.)) THEN ! no value was found IF(GH>=VEL_TAB(NLINEV,1)) THEN ! above top of table IL=NLINEV ! use top value ELSE ! below bottom IL=1 ! use bottom value ENDIF FACT=0.0; ILP1=IL ENDIF UCOMP=VEL_TAB(IL,2)+FACT*(VEL_TAB(ILP1,2)- 1 VEL_TAB(IL,2)) VCOMP=VEL_TAB(IL,3)+FACT*(VEL_TAB(ILP1,3)- 1 VEL_TAB(IL,3)) WCOMP=VEL_TAB(IL,4)+FACT*(VEL_TAB(ILP1,4)- 1 VEL_TAB(IL,4)) WAFVR=SQRT(UCOMP*UCOMP+VCOMP*VCOMP+WCOMP*WCOMP) ENDIF IF(LBWAF>0) F(L0WAF+I)=VABS/(WAFVR+TINY) IF(LBWAT>0) F(L0WAT+I)=(VABS/(WAFVR+TINY))-1.0 if(lbvref>0) f(l0vrf+i)=WAFvr if(lbhref>0) f(l0hrf+i)=gh ENDIF ENDDO ENDDO C... Pressure coefficient IF(LBCP>0) THEN QREFSQ=F(L0QREF(IDMN)+IBL)**2 ! current wind speed L0CP=L0F(LBCP); L0P1=L0F(P1); L0D1=L0F(DEN1) DO I=1,NXNY IF(.NOT.SLD(I)) THEN F(L0CP+I)=F(L0P1+I)/(0.5*F(L0D1+I)*QREFSQ) ELSE F(L0CP+I)=0.0 ENDIF ENDDO ENDIF ENDIF c... Wind comfort IF(SHOWCOMF) THEN IF(LBWAMP>0) L0WAMP=L0F(LBWAMP) IF(LBVAV>0) L0VAV=L0F(LBVAV) lbwrf=lbname('WRF') if(lbwrf>0) l0wrf=l0f(lbwrf) IF(LBPRO>0) L0PRO=L0F(LBPRO) NEN=TYPECOMF=='NEN8100'; LAWS=TYPECOMF=='LAWSON' IF(NEN) THEN L0DAN=L0F(LBDAN); L0NEN=L0F(LBNEN) UTHR=5.0; DTHR=15.0 ELSEIF(LAWS) THEN L0LAWS=L0F(LBLAWS) DO IC=1,6 IF(LBPRB(IC)>0) L0PRB(IC)=L0F(LBPRB(IC)) ENDDO ENDIF IF(WEICOEF) THEN ! Wind data in Weibul coefficients DO I=1,NXNY PRO=0.0; IF(NEN) PDAN=0.0 IF(.NOT.SLD(I)) THEN WRF=F(L0VABS+I)/WMAST IF(LAWS) THEN ! 
Lawson criterion DO IC=1,NCRIT VINC=LUTHR(IC)/WRF LPRO(IC,I)=1.-(1.-EXP(-(VINC/AW)**AKW)) if(lbprb(ic)>0) f(l0prb(ic)+i)=LPRO(IC,I) ENDDO ELSE ! probability of exceeding or NEN VINC=UTHR/WRF PRO=1.-(1.-EXP(-(VINC/AW)**AKW)) IF(NEN) THEN DINC=DTHR/WRF; PDAN=1.-(1.-EXP(-(DINC/AW)**AKW)) ENDIF ENDIF IF(LBVAV>0) THEN ! Sector velocities are stored. Prepare for summing over sectors F(L0VAV+I)=WRF*VSECT*SECTP ! SECTP is the sector probability IF(LAWS) THEN ! Lawson DO IC=1,NCRIT LPRO(IC,I)=LPRO(IC,I)*SECTP if(lbprb(ic)>0) f(l0prb(ic)+i)=LPRO(IC,I) ENDDO ELSE ! Probability of exceeding or NEN F(L0PRO+I)=F(L0PRO+I)*SECTP IF(NEN) F(L0DAN+I)=F(L0DAN+I)*SECTP ENDIF ENDIF ! end of SECTP block ENDIF ! end of .NOT.SLD IF(.NOT.LAWS) F(L0PRO+I)=PRO; IF(NEN) F(L0DAN+I)=PDAN ENDDO ELSE ! Wind data in probability table form IF(LAWS) LPRO=0.0 DO I=1,NXNY PRO=0.0; IF(NEN) PDAN=0.0 IF(.NOT.SLD(I)) THEN WRF=F(L0VABS+I)/WMAST if(lbwrf>0) f(l0wrf+i)=wrf DO IB=2,NBINS+1 ! loop over wind speed bins VI=0.5*(WNDA(IB,1)-WNDA(IB-1,1))+WNDA(IB-1,1) ! mid-bin speed PROB=WNDA(IB,ISECT+1) ! bin probability IF(LAWS) THEN ! Lawson DO IC=1,NCRIT ! loop over criteria IF(VI*WRF>=LUTHR(IC)) LPRO(IC,I)=LPRO(IC,I)+PROB ! sum probability if(lbprb(ic)>0) f(l0prb(ic)+i)=LPRO(IC,I) ENDDO ELSE ! Probability or NEN IF(VI*WRF>=UTHR) PRO=PRO+PROB ! sum probability IF(NEN) THEN; IF(VI*WRF>=DTHR) PDAN=PDAN+PROB; ENDIF ENDIF ENDDO ! end loop ovr windspeed bins IF(.NOT.LAWS) F(L0PRO+I)=PRO; IF(NEN) F(L0DAN+I)=PDAN IF(LBVAV>0) THEN ! Sector velocities are stored. Prepare for summing over sectors F(L0VAV+I)=WRF*VSECT*SECTP ! SECTP is the sector probability IF(LAWS) THEN ! Lawson DO IC=1,NCRIT LPRO(IC,I)=LPRO(IC,I)*SECTP if(lbprb(ic)>0) f(l0prb(ic)+i)=LPRO(IC,I) ENDDO ELSE ! Probability of exceeding or NEN F(L0PRO+I)=F(L0PRO+I)*SECTP IF(NEN) F(L0DAN+I)=F(L0DAN+I)*SECTP ENDIF ENDIF ! end of SECTP block ENDIF ! end of .NOT.SLD block ENDDO ! end of loop over slab ENDIF ! end of probability table block IF(NEN) THEN ! assign NEN values DO I=1,NXNY PRO=F(L0PRO+I); PDAN=F(L0DAN+I) IF(PRO<0.025) THEN INEN=1 ELSEIF(PRO>=0.025.AND.PRO<0.05) THEN INEN=2 ELSEIF(PRO>=0.05.AND.PRO<0.1) THEN INEN=3 ELSEIF(PRO>=0.1.AND.PRO<0.2) THEN INEN=4 ELSEIF(PRO>=0.2) THEN INEN=5 ENDIF IF(PDAN>=0.05.AND.PDAN<=0.3) THEN INEN=6 ELSEIF(PDAN>0.3) THEN INEN=7 ENDIF F(L0NEN+I)=INEN ENDDO ELSEIF(LAWS) THEN ! assign Lawson Criterion valus DO I=1,NXNY F(L0LAWS+I)=1 DO IC=NCRIT,1,-1 ! start with most comfortable and work up IF(LPRO(IC,I)>=LTHRESH(IC)) F(L0LAWS+I)=NCRIT-IC+1 ENDDO ENDDO ENDIF ! end of NEN or Lawson ENDIF ! end of SHOWCOMF block ELSEIF(IGR==19.AND.ISC==7) THEN C... Group 19, Section 7. End of sweep IF(LBWAF>0.AND.IERR1==0) THEN ! IERR1 is error flag from setting InForm below CALL HILO3D(LBWAF) ! get domain HI and LO values IF(MIMD.AND.NPROC>1) THEN CALL PAR_MAXR(HI3D); CALL PAR_MINR(RLO3D) ENDIF CALL SET_INF('WAFM',HI3D,IERR1) IHI=(IXHI3D-1)*NY+IYHI3D CALL SET_INF('XWFM',F(L0F(XG2D)+IHI),IERR) CALL SET_INF('YWFM',F(L0F(YG2D)+IHI),IERR) CALL SET_INF('ZWFM',F(L0F(ZGNZ)+IZHI3D),IERR) ENDIF IF(LBCP>0.AND.IERR2==0) THEN CALL HILO3D(LBCP) ! get domain HI and LO values IF(MIMD.AND.NPROC>1) THEN CALL PAR_MAXR(HI3D); CALL PAR_MINR(RLO3D) ENDIF CALL SET_INF('CPM',HI3D,IERR2) IHI=(IXHI3D-1)*NY+IYHI3D CALL SET_INF('XCPM',F(L0F(XG2D)+IHI),IERR) CALL SET_INF('YCPM',F(L0F(YG2D)+IHI),IERR) CALL SET_INF('ZCPM',F(L0F(ZGNZ)+IZHI3D),IERR) ENDIF ELSEIF(IGR==19.AND.ISC==8) THEN C... Group 19, Section 8. 
End of time step IF(ALLOCATED(WNDA)) DEALLOCATE(WNDA,STAT=IERR) IF(ALLOCATED(VEL_TAB)) DEALLOCATE(VEL_TAB,STAT=IERR) IF(ALLOCATED(KE_TAB)) DEALLOCATE(KE_TAB,STAT=IERR) ENDIF ENDIF NAMSUB='gxblin' END C--------------------------------------------------------------------- SUBROUTINE SEND_FILE(LU,FILNAM,IERR) INCLUDE 'parear' CHARACTER FILNAM*(*),CVAL*256, DELIM*1 LOGICAL*4 EXISTS OPEN(LU,FILE=FILNAM,STATUS='OLD',IOSTAT=IERR) CALL GLSUMI(IERR) IF(IERR==0) THEN CLOSE(LU); RETURN ! file exists on all processors, no need to copy ENDIF CALL GETSYSID(ISYSID) DELIM='/'; IF(ISYSID.LE.2) DELIM=CHAR(92) II=LAST_INDEX(FILNAM,DELIM) ! find last delimiter IF(II>0) THEN ! there was one, remove local copies CVAL=FILNAM(II+1:) ! name without path i.e. local name OPEN(90,FILE=CVAL,STATUS='OLD',ERR=102) ! does it exist CLOSE(90,STATUS='DELETE',IOSTAT=IOS) ! delete if it did 102 CONTINUE EXISTS=.FALSE. ! flag to copy to master as well as slaves ELSE ! no delimiter, file is already local CVAL=FILNAM IF(MYID>0) THEN ! delete copies on slaves OPEN(90,FILE=CVAL,STATUS='OLD',ERR=1022) ! does it exist CLOSE(90,STATUS='DELETE',IOSTAT=IOS) ! delete if it did 1022 CONTINUE ENDIF EXISTS=.TRUE. ! flag to only copy to slaves ENDIF II=1 ! now copy central file to local CALL COPYMOFFILE(II,DELIM,CVAL,FILNAM,IERR,IOS,EXISTS) END C-------------------------------------- C Monin-Obukhov similarity-parameter:- psif (momentum) FUNCTION PSIF(GH,GLMO) ZETA=GH/GLMO IF(GLMO.GT.0.0) THEN ! Stable PSIF = -5.*ZETA ELSE ! Unstable PI = 3.141592654 X = (1.0-16.*ZETA)**0.25 T1 = 2.*LOG(0.5*(1.0+X)) T2 = LOG(0.5*(1.0+X*X)) PSIF = T1 + T2 - 2.0*ATAN(X)+0.5*PI ENDIF END C Monin-Obukhov similarity-parameter:- psi fif= 1+5*z/L FUNCTION FIF (GH,GLMO) ZETA=GH/GLMO FIF=1+5.*GH/GLMO ! Stable only END C Monin-Obukhov similarity-parameter:- psift (energy) FUNCTION PSIFT(GH,GLMO) ZETA=GH/GLMO IF(GLMO.GT.0.0) THEN ! Stable PSIFT = -5*ZETA ELSE ! Unstable X=(1.0-16.*ZETA)**0.25 PSIFT=2.*LOG(0.5*(1.0+X*X)) ENDIF END C Monin-Obukhov similarity-parameter:- phie (turbulence) FUNCTION PHIE(GH,GLMO) ZETA=GH/GLMO IF(GLMO.GT.0.0) THEN ! Stable PHIE = 1.0 + 4.0*ZETA ELSE ! Unstable PHIE = 1.0 - ZETA ENDIF END ```
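The functions PSIF, PSIFT and PHIE at the end of the listing are the standard Businger-Dyer Monin-Obukhov similarity forms used throughout the boundary-layer profiles above. As a readability aid, here is a minimal Python transcription of those three routines (a sketch for checking values against the Fortran; it is not part of the PHOENICS source, and ζ = z/L follows the convention in the listing):

```python
import math

def psif(z, lmo):
    """Stability correction for momentum (PSIF in the Fortran above)."""
    zeta = z / lmo
    if lmo > 0.0:                      # stable branch
        return -5.0 * zeta
    x = (1.0 - 16.0 * zeta) ** 0.25    # unstable (Businger-Dyer)
    return (2.0 * math.log(0.5 * (1.0 + x))
            + math.log(0.5 * (1.0 + x * x))
            - 2.0 * math.atan(x) + 0.5 * math.pi)

def psift(z, lmo):
    """Stability correction for heat/energy (PSIFT)."""
    zeta = z / lmo
    if lmo > 0.0:                      # stable branch
        return -5.0 * zeta
    x = (1.0 - 16.0 * zeta) ** 0.25
    return 2.0 * math.log(0.5 * (1.0 + x * x))

def phie(z, lmo):
    """Turbulence function (PHIE): 1 + 4*zeta stable, 1 - zeta unstable."""
    zeta = z / lmo
    return 1.0 + 4.0 * zeta if lmo > 0.0 else 1.0 - zeta
```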
37,575
89,389
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.578125
3
CC-MAIN-2024-22
latest
en
0.762941
http://www.stumblingrobot.com/tag/transitivity/
1,596,991,956,000,000,000
text/html
crawl-data/CC-MAIN-2020-34/segments/1596439738562.5/warc/CC-MAIN-20200809162458-20200809192458-00363.warc.gz
169,410,476
10,926
# Less than or equal relation is transitive.

Prove that if $a \leq b$ and $b \leq c$, then $a \leq c$.

Proof. By the transitivity of $<$ (Theorem I.17), we have that if $a < b$ and $b < c$, then $a < c$. Then, if $a = b$ and $b \leq c$, we have $a \leq c$ by substitution. If $a \leq b$ and $b = c$, then $a \leq c$ by substitution. If $a < b$ and $b < c$, then $a < c$ by transitivity of the $<$ relation. Hence, $a \leq c$ by definition of $\leq$. Thus, in all cases $a \leq b$ and $b \leq c$ implies $a \leq c$. $\blacksquare$
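The case analysis above is exactly what the library lemma `le_trans` packages in a proof assistant. A minimal Lean 4 sketch (assuming Mathlib is available; the choice of ℝ is illustrative, since the statement holds in any preorder):

```lean
import Mathlib

-- Transitivity of ≤ on the reals, mirroring the case split above.
example (a b c : ℝ) (h₁ : a ≤ b) (h₂ : b ≤ c) : a ≤ c :=
  le_trans h₁ h₂
```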
86
342
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.609375
3
CC-MAIN-2020-34
latest
en
0.923085
https://nursingschoolessays.com/b-hi1-4-1-1-12-in-above-figures-r1-1-hmm-ry-3-hm-i-100-ma-12-50-ma-and-20-ma-a-2-pts-draw-the-magnetic-field-lines/
1,696,006,386,000,000,000
text/html
crawl-data/CC-MAIN-2023-40/segments/1695233510520.98/warc/CC-MAIN-20230929154432-20230929184432-00606.warc.gz
464,919,213
12,568
(B) In the above figures, R1 = 1 mm, R2 = 3 mm, I1 = 100 mA, I2 = 50 mA, and I3 = 20 mA. Please see the attached and show any work that you can.

a) (2 pts) Draw the magnetic field lines produced by each wire in figure A.
b) (6 pts) What is the magnitude of the total magnetic field at point P1?
c) (2 pts) Is the total field at P1 directed
a. Into the page
b. Out of the page
c. Left
d. Right
e. None of the above
d) (6 pts) In figure B, a wire with current I3 is placed passing through P1, parallel to wires 1 and 2. What is the magnitude of the total force per unit length acting on wire 3?
e) (2 pts) What direction is the force acting on wire 3 pointed?
a. Into the page
b. Out of the page
c. To the left
d. To the right
e. None of the above
f) (2 pts) For what current through I2 would the force per unit length on wire 3 from part (d) be ZERO?
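The figures are not attached, so any numerical answer rests on assumptions about the geometry. The sketch below treats R1 and R2 as the perpendicular distances from point P1 to wires 1 and 2, takes the garbled units as millimetres, and applies the standard formulas B = μ0·I/(2πr) and F/L = I3·B; since the current directions are unknown, both the adding and the cancelling case are printed:

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space (T*m/A)

def wire_field(current_a, distance_m):
    """Field magnitude of a long straight wire at distance r."""
    return MU0 * current_a / (2 * math.pi * distance_m)

# Assumed values from the problem statement (units are a guess: mm, mA).
R1, R2 = 1e-3, 3e-3               # distances from P1 to wires 1 and 2 (m)
I1, I2, I3 = 0.100, 0.050, 0.020  # currents (A)

B1 = wire_field(I1, R1)
B2 = wire_field(I2, R2)

# Whether the two contributions add or subtract depends on the (unknown)
# current directions, so both cases are shown.
for sign, label in ((+1, "fields add"), (-1, "fields oppose")):
    B_total = abs(B1 + sign * B2)
    force_per_length = I3 * B_total   # F/L = I3 * B at the wire's location
    print(f"{label}: B = {B_total:.3e} T, F/L = {force_per_length:.3e} N/m")
```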
317
1,034
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.921875
3
CC-MAIN-2023-40
latest
en
0.847973
http://forums.wolfram.com/mathgroup/archive/2004/Sep/msg00584.html
1,721,292,333,000,000,000
text/html
crawl-data/CC-MAIN-2024-30/segments/1720763514826.9/warc/CC-MAIN-20240718064851-20240718094851-00527.warc.gz
9,214,990
8,356
Re: FindRoot for an oscillating function

• To: mathgroup at smc.vnet.net
• Subject: [mg50961] Re: FindRoot for an oscillating function
• From: Paul Abbott <paul at physics.uwa.edu.au>
• Date: Wed, 29 Sep 2004 07:09:26 -0400 (EDT)
• Organization: The University of Western Australia
• References: <cj86qq$78r$1@smc.vnet.net> <cjat26$nsu$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com

```
In article <cjat26$nsu$1 at smc.vnet.net>, drbob at bigfoot.com
(Bobby R. Treat) wrote:

> Here's an approach that takes advantage of the Plot itself. It finds
> consecutive data points that bracket roots, averages the x-values,
> uses those as guesses in FindRoot, and finally graphs the original
> function with roots superimposed. It will only find roots internal to
> the plotted interval, so I reduced the lower limit to get the root at
> zero.
>
> Needs["Graphics`"]
> p = 1.234;
> q = .7654;
> gr[x_] = Sin[p x]/p + Sin[q x]/q;
> plot = Plot[gr@x, {x, -1, 25}, DisplayFunction -> Identity];
> points = First@Cases[plot, Line[a_] -> a, Infinity];
> guesses = Mean /@ Extract[Partition[points[[All, 1]], 2, 1],
>    Position[Partition[points[[All, -1]], 2, 1],
>     _?(Times @@ # <= 0 &), {1}]]
> roots = x /. FindRoot[gr@x, {x, #}] & /@ guesses
> rootPts = {#, gr@#} & /@ roots
> DisplayTogether[plot, Graphics@{PointSize[0.02],
>    Red, Point /@ rootPts}, DisplayFunction -> $DisplayFunction];

This is similar to the RootsInRange function that appeared in "Finding
Roots in an Interval" in The Mathematica Journal 7(2), 1998. The code
there has also appeared on this group:

Needs["Utilities`FilterOptions`"]

RootsInRange[d_, {l_, lmin_, lmax_}, opts___] :=
 Module[{s, p, x, f = Function[l, Evaluate[d]]},
  s = Plot[f[l], {l, lmin, lmax}, Compiled -> False,
    Evaluate[FilterOptions[Plot, opts]]];
  p = Cases[s, Line[{x__}] -> x, Infinity];
  p = Map[First,
    Select[Split[p, Sign[Last[#2]] == -Sign[Last[#1]] &],
     Length[#1] == 2 &], {2}];
  Apply[FindRoot[f[l] == 0, {l, ##1},
     Evaluate[FilterOptions[FindRoot, opts]]] &, p, {1}]
 ]

Cheers,
Paul

--
Paul Abbott                            Phone: +61 8 6488 2734
School of Physics, M013                Fax: +61 8 6488 1014
The University of Western Australia    (CRICOS Provider No 00126G)
35 Stirling Highway
Crawley WA 6009                        AUSTRALIA
mailto:paul at physics.uwa.edu.au
http://physics.uwa.edu.au/~paul
```
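The bracket-then-refine idea described in the post carries over directly to other systems. Here is a minimal Python sketch of the same approach (an illustration, not from the archive; `scipy.optimize.brentq` plays the role of FindRoot on each sign-change bracket):

```python
import numpy as np
from scipy.optimize import brentq

p, q = 1.234, 0.7654
f = lambda x: np.sin(p * x) / p + np.sin(q * x) / q

x = np.linspace(-1.0, 25.0, 2000)   # sample the interval, as Plot[] does
y = f(x)

# Indices where consecutive samples change sign bracket a root.
brackets = np.nonzero(np.sign(y[:-1]) * np.sign(y[1:]) < 0)[0]
roots = [brentq(f, x[i], x[i + 1]) for i in brackets]
print(np.round(roots, 6))
```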
812
2,654
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.0625
3
CC-MAIN-2024-30
latest
en
0.705616
https://www.coursehero.com/file/5886222/420/
1,521,304,219,000,000,000
text/html
crawl-data/CC-MAIN-2018-13/segments/1521257645248.22/warc/CC-MAIN-20180317155348-20180317175348-00686.warc.gz
773,239,291
78,044
# 420 - STAT 410 Brief STAT 400 Review Fall 2008 Hypotheses Testing for the Population Mean μ

Null vs. alternative hypotheses:

H0: μ ≥ μ0 vs. H1: μ < μ0. Left-tailed.
H0: μ ≤ μ0 vs. H1: μ > μ0. Right-tailed.
H0: μ = μ0 vs. H1: μ ≠ μ0. Two-tailed.

Decision table:

| | H0 true | H0 false |
| --- | --- | --- |
| Accept H0 (do NOT reject H0) | correct decision | Type II Error |
| Reject H0 | Type I Error | correct decision |

α = significance level = P(Type I Error) = P(Reject H0 | H0 is true)
β = P(Type II Error) = P(Do Not Reject H0 | H0 is NOT true)
Power = 1 – P(Type II Error) = P(Reject H0 | H0 is NOT true)

Test Statistic: Z = (X̄ − μ0) / (σ/√n) OR T = (X̄ − μ0) / (s/√n), where X̄ is the sample mean.

The P-value (observed level of significance) is the probability, computed assuming that H0 is true, of obtaining a value of the test statistic as extreme as, or more extreme than, the observed value. (The smaller the p-value is, the stronger the evidence against H0 provided by the data.)

P-value > α: Do Not Reject H0 (Accept H0). P-value < α: Reject H0.

Computing the P-value: for H0: μ ≥ μ0 vs. H1: μ < μ0 (left-tailed), use the area to the left of the observed test statistic; for H0: μ ≤ μ0 vs. H1: μ > μ0 (right-tailed), the area to the right of the observed test statistic; for H0: μ = μ0 vs. H1: μ ≠ μ0 (two-tailed), 2 × the area of the tail. The rejection region is set up the same way for the three cases.
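The three tail rules are easy to turn into a quick computation. A small illustration (my own addition, not part of the review sheet) using scipy.stats.norm for the Z statistic:

```python
from math import sqrt
from scipy.stats import norm

def z_test(xbar, mu0, sigma, n, tail):
    """Return (z, p_value); tail is 'left', 'right' or 'two'."""
    z = (xbar - mu0) / (sigma / sqrt(n))
    if tail == "left":        # H1: mu < mu0 -> area to the left of z
        p = norm.cdf(z)
    elif tail == "right":     # H1: mu > mu0 -> area to the right of z
        p = norm.sf(z)
    else:                     # H1: mu != mu0 -> 2 x tail area
        p = 2 * norm.sf(abs(z))
    return z, p

# Example: xbar = 52.1 from n = 36, sigma = 6, testing mu0 = 50 (right-tailed).
print(z_test(52.1, 50, 6, 36, "right"))  # z = 2.1, p ~ 0.018 -> reject at alpha = 0.05
```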
637
1,973
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.71875
4
CC-MAIN-2018-13
latest
en
0.71563
https://www.physicsforums.com/threads/at-what-value-of-h-does-the-flow-stop.48382/
1,540,314,887,000,000,000
text/html
crawl-data/CC-MAIN-2018-43/segments/1539583516480.46/warc/CC-MAIN-20181023153446-20181023174946-00419.warc.gz
1,041,769,637
13,095
# Homework Help: At what value of h does the flow stop? 1. Oct 17, 2004 ### jaidon I am new here so I hope that someone may have some advice. I am having trouble with a homework question which is quite lengthy. -- A large tank of water has a hose connected to it as shown in the figure. The tank is sealed at the top and has compressed air between the water surface and the top. When the water height, h, has the value 3.50 m, the absolute pressure p of the compressed air is 4.20x10^5 Pa. Assume that the air above the water expands at a constant temperature, and take the atmospheric pressure to be 1.00x10^5 Pa. a) What is the speed with which the water flows out of the hose when h=3.50 m? b) As water flows out of the tank, h decreases. Calculate the speed of flow for h=3.00 m and h=2.00 m. c) At what value of h does the flow stop? -- The figure shows a tank of height 4.00 m, and the water level is lower than 4.00 m, giving the air gap. The hose is on the side in a "Z" shape with the top of the "Z" being 1.00 m from ground level. I don't know if anyone can help, but I would appreciate the input. 2. Oct 18, 2004 ### Fredrik Staff Emeritus You will have to show us what you've done so far. (Read the sticky post at the top of the forum.) 3. Oct 18, 2004 ### jaidon I believe that I figured out a). So for b) I was trying to determine the pressure at 3.00 m. I used PV=nRT for 3.00 m and PV=nRT for 3.50 m and equated them. The nRT cancel and I get PV=PV. The tank is a cylinder so I said PAh=PAh. The A's cancel: P1h1=P2h2 with P1=4.20x10^5 Pa, h1=3.50 m, h2=3.00 m; solve for P2. I have done something wrong because this gives an increase in pressure when it should decrease. Anyone have some help for me? 4. Oct 18, 2004 ### copperboy Why not use your knowledge of calculus? 5. Oct 18, 2004 ### jaidon I think I just figured it out. Thanks to anyone who considered the problem
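For anyone wanting to check their numbers on this problem: the standard route combines Boyle's law for the shrinking air pocket (note the air column is the tank height minus h, which is why the poster's P1h1 = P2h2 with water heights trended the wrong way) with Bernoulli's equation between the water surface and the hose outlet. A Python sketch under the thread's stated figures (4.00 m tank, outlet 1.00 m above the ground; water density and g are the usual values):

```python
from math import sqrt

P_ATM = 1.00e5        # Pa
RHO, G = 1000.0, 9.8  # kg/m^3, m/s^2
TANK_TOP = 4.00       # m, height of the sealed tank
H_OUT = 1.00          # m, height of the hose outlet above the ground

# Boyle's law: p times the air-column height (TANK_TOP - h) stays constant.
P0, H0 = 4.20e5, 3.50
def air_pressure(h):
    return P0 * (TANK_TOP - H0) / (TANK_TOP - h)

# Bernoulli between the water surface (pressure p, height h, ~zero speed)
# and the outlet (atmospheric pressure, height H_OUT):
def outflow_speed(h):
    return sqrt(2 * ((air_pressure(h) - P_ATM) / RHO + G * (h - H_OUT)))

for h in (3.50, 3.00, 2.00):
    print(h, round(outflow_speed(h), 1), "m/s")  # ~26.2, ~16.1 and ~5.4 m/s
# The flow stops where the bracketed quantity reaches zero, at h ~ 1.74 m.
```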
559
1,881
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.953125
4
CC-MAIN-2018-43
latest
en
0.951172
https://www.enotes.com/homework-help/determine-angle-t-from-identity-sin-2t-4-25-253933
1,675,935,228,000,000,000
text/html
crawl-data/CC-MAIN-2023-06/segments/1674764501555.34/warc/CC-MAIN-20230209081052-20230209111052-00631.warc.gz
756,145,329
17,973
# Determine the angle t from the identity sin^2t=4/25. ## Expert Answers You mean determine t from the equation (sin t)^2 = 4/25. (sin t)^2 = 4/25 => sin t = 2/5 and sin t = -2/5. sin t = 2/5 => t = arcsin(2/5) => t = 23.57 degrees. sin t = -2/5 => t = arcsin(-2/5) => t = -23.57 degrees. As sine is periodic, there are infinitely many solutions for t: the angle t is 23.57 + n*360 degrees and -23.57 + n*360 degrees. (Since sin(180 - t) = sin t, the supplementary angles 180 - 23.57 + n*360 and 180 + 23.57 + n*360 degrees satisfy the equation as well.)
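A quick numerical check of the answer (plain Python, my own addition):

```python
from math import asin, degrees

t = degrees(asin(2 / 5))
print(round(t, 2))  # 23.58 degrees, matching the answer above up to rounding
# All solutions of sin t = +/-2/5 in [0, 360): t, 180 - t, 180 + t, 360 - t
print(sorted(round(s % 360, 2) for s in (t, 180 - t, 180 + t, -t)))
```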
164
461
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4
4
CC-MAIN-2023-06
latest
en
0.73588
http://www.sqlservercentral.com/Forums/Topic1404342-392-2.aspx
1,386,511,712,000,000,000
text/html
crawl-data/CC-MAIN-2013-48/segments/1386163065834/warc/CC-MAIN-20131204131745-00008-ip-10-33-133-15.ec2.internal.warc.gz
548,293,546
24,632
Best way to Calculate Human Age from a Birthdate Posted Tuesday, January 08, 2013 3:07 PM SSChampion Group: General Forum Members Last Login: Wednesday, December 04, 2013 2:23 PM Points: 10,854, Visits: 10,012 Actually I think that a negative age has some real value if you think of age not as only the age of a person. What you are calculating is the total years elapsed between two dates and if you were looking at something like project delivery dates a negative data might have some real worth. Maybe a company that works with long term deliverables like construction. It might be nice to see that something was delivered more than a year ahead of schedule. Your fine code would work for such a scenario if it included negatives. We have now seen several different approaches to the same problem. Most of them handle leap years correctly too. _______________________________________________________________Need help? Help us help you. Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.Need to split a string? Try Jeff Moden's splitter.Cross Tabs and Pivots, Part 1 – Converting Rows to Columns Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs Understanding and Using APPLY (Part 1)Understanding and Using APPLY (Part 2) Post #1404459 Posted Tuesday, January 08, 2013 3:34 PM Hall of Fame Group: General Forum Members Last Login: 2 days ago @ 9:18 AM Points: 3,022, Visits: 10,988 Sean Lange (1/8/2013)Actually I think that a negative age has some real value if you think of age not as only the age of a person. What you are calculating is the total years elapsed between two dates and if you were looking at something like project delivery dates a negative data might have some real worth. Maybe a company that works with long term deliverables like construction. It might be nice to see that something was delivered more than a year ahead of schedule. Your fine code would work for such a scenario if it included negatives. We have now seen several different approaches to the same problem. Most of them handle leap years correctly too. Going by the dictionary definition of age, "The length of time during which a being or thing has existed.", I have to say that negative age does not make much sense to me.I tested the solutions from the other posts using the test data I posted (leaving out the negative ages), and found every one had at least one difference with the solution I posted, especially with the handling of Feb 29 birthdays, or when the time of day for CURR_DATE was before the time of day for DOB when they were both the same day of the year ( Example: DOB = 2013-01-08 16:52:54.810 and CURR_DATE = 2023-01-08 16:52:54.710 ). I believe most calculations of Age ignore time of day, so my solution is coded to ignore it. Post #1404472 Posted Wednesday, January 09, 2013 7:22 AM SSChampion Group: General Forum Members Last Login: Wednesday, December 04, 2013 2:23 PM Points: 10,854, Visits: 10,012 Going by the dictionary definition of age, "The length of time during which a being or thing has existed.", I have to say that negative age does not make much sense to me.I agree that as age negative doesn't make any sense. I was trying to point out the calculation could be used in other scenarios where a negative would make sense. 
Given that the thread is to calculate human age it certainly doesn't make any sense in that context. _______________________________________________________________Need help? Help us help you. Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.Need to split a string? Try Jeff Moden's splitter.Cross Tabs and Pivots, Part 1 – Converting Rows to Columns Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs Understanding and Using APPLY (Part 1)Understanding and Using APPLY (Part 2) Post #1404776 Posted Wednesday, January 09, 2013 8:03 AM SSC-Dedicated Group: General Forum Members Last Login: Yesterday @ 3:45 PM Points: 34,540, Visits: 28,709 AndrewSQLDBA (1/8/2013)Hello EveryoneHappy New Year!!!I am fooling around with some code, and was wondering if there is a really great function to calculate the birthdate of a person. One that will take into count things like leap year. I am not getting into such fine details such as if the person was born on the west coast or the east coast type of calculations. Just a very good birthdate age calculation function.If I pass in the birthdate, what is the persons age today. Most that I have used or tried, come up a bit short when it comes to returning a perfect calculationThanks in AdvanceAndrew SQLDBASo, which "standard" are you going to use for someone that is born on a leap year day? Feb 28th or Mar 1st? --Jeff Moden"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".First step towards the paradigm shift of writing Set Based code: Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column." "Change is inevitable. Change for the better is not." -- 04 August 2013(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013Helpful Links:How to post code problemsHow to post performance problems Post #1404809 Posted Wednesday, January 09, 2013 9:05 AM Hall of Fame Group: General Forum Members Last Login: 2 days ago @ 9:18 AM Points: 3,022, Visits: 10,988 Jeff Moden (1/9/2013)AndrewSQLDBA (1/8/2013)Hello EveryoneHappy New Year!!!I am fooling around with some code, and was wondering if there is a really great function to calculate the birthdate of a person. One that will take into count things like leap year. I am not getting into such fine details such as if the person was born on the west coast or the east coast type of calculations. Just a very good birthdate age calculation function.If I pass in the birthdate, what is the persons age today. Most that I have used or tried, come up a bit short when it comes to returning a perfect calculationThanks in AdvanceAndrew SQLDBASo, which "standard" are you going to use for someone that is born on a leap year day? Feb 28th or Mar 1st? As usual, no one agrees on this, but different countries do at least have some standard.http://en.wikipedia.org/wiki/February_29"...in England and Wales or in Hong Kong, a person born on February 29, 1996, will have legally reached 18 years old on March 1, 2014. If he or she was born in the United States, Taiwan or New Zealand, he or she legally becomes 18 on February 28, 2014, a day earlier..."I prefer the Feb 28 date for birthdays in non-leap years because that's the "standard" for the US. And, it's easier to code in TSQL. 
Post #1404853 Posted Wednesday, January 09, 2013 11:40 PM Grasshopper Group: General Forum Members Last Login: Friday, September 20, 2013 6:55 PM Points: 14, Visits: 51 Simple way to compute your age accurately. A lot of queries don't compute the age exactly, because sometimes they only compute the datediff between the DOB and now and divide it by 365.25 days, and as a result they get a number with a decimal, something like 25 with a .06 decimal (age=25.06). With this query you get the age exactly as it is. Example 1: DOB = 11/15/1987 and datenow = 11/15/2012; the result would be AGE=25. Example 2: DOB = 11/16/1987 and datenow = 11/15/2012; the result would be AGE=24. So here is the query:

```sql
DECLARE @DOB SMALLDATETIME
SELECT @DOB = '11/15/1987'
SELECT CASE WHEN MONTH(@DOB) >= MONTH(GETDATE()) AND DAY(@DOB) >= DAY(GETDATE())
            THEN DATEDIFF(YY, @DOB, GETDATE())
            ELSE DATEDIFF(YY, @DOB, GETDATE()) - 1
       END AS AGE
```

Hope I can help! Post #1405204 Posted Thursday, January 10, 2013 12:28 AM SSC-Insane Group: General Forum Members Last Login: Yesterday @ 7:15 PM Points: 22,086, Visits: 28,996 How about these solutions?

```sql
declare @DOB date = '20080229', @CurDate date = '20130228';

select @DOB as DateOfBirth, @CurDate as CurrentDate,
    datediff(yy, @DOB, @CurDate) - case when dateadd(yy, -datediff(yy, @DOB, @CurDate), @CurDate) < @DOB then 1 else 0 end as TurnsYearOlderFirstOfMarch,
    datediff(yy, @DOB, @CurDate) - case when dateadd(yy, datediff(yy, @DOB, @CurDate), @DOB) <= @CurDate then 0 else 1 end as TurnsYearOlderLastOfFebruary;

set @CurDate = '20130301';

select @DOB as DateOfBirth, @CurDate as CurrentDate,
    datediff(yy, @DOB, @CurDate) - case when dateadd(yy, -datediff(yy, @DOB, @CurDate), @CurDate) < @DOB then 1 else 0 end as TurnsYearOlderFirstOfMarch,
    datediff(yy, @DOB, @CurDate) - case when dateadd(yy, datediff(yy, @DOB, @CurDate), @DOB) <= @CurDate then 0 else 1 end as TurnsYearOlderLastOfFebruary;

set @CurDate = '20130227';

select @DOB as DateOfBirth, @CurDate as CurrentDate,
    datediff(yy, @DOB, @CurDate) - case when dateadd(yy, -datediff(yy, @DOB, @CurDate), @CurDate) < @DOB then 1 else 0 end as TurnsYearOlderFirstOfMarch,
    datediff(yy, @DOB, @CurDate) - case when dateadd(yy, datediff(yy, @DOB, @CurDate), @DOB) <= @CurDate then 0 else 1 end as TurnsYearOlderLastOfFebruary;
```

Post #1405233 Posted Thursday, January 10, 2013 8:25 AM Grasshopper Group: General Forum Members Last Login: Thursday, April 11, 2013 2:09 PM Points: 14, Visits: 97 How about this?

```sql
DECLARE @BIRTH_DATE DATETIME, @STPR_START_DATE DATETIME
SET @BIRTH_DATE = '2008-02-29'
SET @STPR_START_DATE = '2013-02-28'
SELECT CASE WHEN YEAR(@STPR_START_DATE)%400 != 0 AND DAY(@STPR_START_DATE) = 28 THEN
            DATEDIFF(YEAR, @BIRTH_DATE, @STPR_START_DATE) - CASE WHEN MONTH(@BIRTH_DATE)*100 + (DAY(@BIRTH_DATE) - 1) > MONTH(@STPR_START_DATE)*100 + DAY(@STPR_START_DATE) THEN 1 ELSE 0 END
       ELSE
            DATEDIFF(YEAR, @BIRTH_DATE, @STPR_START_DATE) - CASE WHEN MONTH(@BIRTH_DATE)*100 + DAY(@BIRTH_DATE) > MONTH(@STPR_START_DATE)*100 + DAY(@STPR_START_DATE) THEN 1 ELSE 0 END
       END AS AGE
```

Post #1405491 Posted Thursday, January 10, 2013 11:47 AM Right there with Babe Group: General Forum Members Last Login: Thursday, December 05, 2013 12:55 PM Points: 751, Visits: 2,158 Michael Valentine Jones (1/9/2013) Jeff Moden (1/9/2013) So, which "standard" are you going to use for someone that is born on a leap year day? Feb 28th or Mar 1st?
As usual, no one agrees on this, but different countries do at least have some standard.http://en.wikipedia.org/wiki/February_29"...in England and Wales or in Hong Kong, a person born on February 29, 1996, will have legally reached 18 years old on March 1, 2014. If he or she was born in the United States, Taiwan or New Zealand, he or she legally becomes 18 on February 28, 2014, a day earlier..."I prefer the Feb 28 date for birthdays in non-leap years because that's the "standard" for the US. And, it's easier to code in TSQL.I hate to say it, but there is not actually a "standard" in the U.S. - it depends on the precise purpose you're doing the date calculations for, and who needs them. Check with the business users about this, every time, particularly in regulated industries for regulated/legal purposes. Post #1405580 Posted Thursday, January 10, 2013 7:04 PM SSC-Insane Group: General Forum Members Last Login: Yesterday @ 7:15 PM Points: 22,086, Visits: 28,996 mdsharif532 (1/10/2013): How about this? [the query quoted above] A lot of extra work for what can be done easily with a couple of datetime functions and a case statement.
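The thread's core rule (subtract one year when the birthday has not yet occurred, with a convention for Feb 29) is easy to restate outside T-SQL. A Python sketch of my own, using the US-style Feb 28 convention discussed above:

```python
import calendar
from datetime import date

def age(dob: date, on: date) -> int:
    """Completed years from dob to on; Feb 29 rolls to Feb 28 in non-leap years."""
    month, day = dob.month, dob.day
    if (month, day) == (2, 29) and not calendar.isleap(on.year):
        day = 28  # US-style convention from the thread
    years = on.year - dob.year
    if (on.month, on.day) < (month, day):
        years -= 1  # birthday has not happened yet this year
    return years

print(age(date(2008, 2, 29), date(2013, 2, 28)))  # 5 under this convention
print(age(date(2008, 2, 29), date(2013, 3, 1)))   # 5 under either convention
```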
3,278
11,978
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.71875
3
CC-MAIN-2013-48
longest
en
0.932545
http://metrohorse.com/page/fica-taxable-wages-for-2014-113078127.html
1,600,861,288,000,000,000
text/html
crawl-data/CC-MAIN-2020-40/segments/1600400210996.32/warc/CC-MAIN-20200923113029-20200923143029-00652.warc.gz
75,580,535
5,186
### Fica taxable wages for 2014

Article Sources. Simply stated, the benefit is received by those eligible for these benefits, while the withholding is taken from the pay of workers in the U.S. Here are the maximum wages subject to Social Security for the past few years:. Doing the Withholding Calculations. The Social Security cap is the maximum amount that your employer will withhold from your paychecks during the year. This amount includes:. Calculate Social Security and Medicare withholding separately, because they are included on the employee's paycheck and in the employee's W-2 in different places. Social Security tax is one of the payroll taxes paid by employees, employers, and self-employed individuals each year. You must report FICA tax withholding:.

• Maximum Social Security Withholding UPDATED • FICA & SECA Tax Rates • Social Security and Medicare Tax Withholding Rates and Limits

The taxable wage base for the Social Security portion of FICA is \$, a % hike over the wage base.

Eliminate any amounts that are not subject to these taxes. Social Security Administration Fact Sheet. The FICA tax is shared by employees and employers, so one half of the tax is deducted from employee paychecks each payday. It is used to pay the cost of benefits for elderly recipients, survivors of recipients, and disabled individuals (OASDI Insurance). This is the employee's portion of the Social Security payment.

## Maximum Social Security Withholding UPDATED

Multiply the current Social Security tax rate by the amount of gross wages subject to Social Security. Begin your calculation with the employee's gross pay amount for a given pay period, then calculate the Social Security and Medicare withholding. The following provides a step-by-step guide to calculating FICA taxes.

## FICA & SECA Tax Rates

The Medicare Tax Rate applies to all taxable wages. The Medicare tax rate is 1. The additional % tax applies on Medicare wages of \$, or more. Be sure you don't deduct Social Security from this check! If the Social Security maximum has not been met, then income from self-employment is used up to the maximum. Medicare wages will be the same as the total amount of pay. The other half, an amount equal to the amount deducted from employee paychecks, must be paid by you as an employer. The total self-employment tax rate is.

## Social Security and Medicare Tax Withholding Rates and Limits

Calculate the Medicare Withholding. The Social Security Maximum.
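The cap-then-multiply procedure the article describes is mechanical, even though the year-specific dollar amounts did not survive in the text above. A Python sketch with clearly labeled placeholder figures (look up the real wage base and rates for the year in question):

```python
def fica_withholding(gross_ytd, gross_this_period,
                     ss_rate=0.062, ss_wage_base=117_000.0,
                     medicare_rate=0.0145):
    """Employee-side FICA for one pay period.

    The rate and wage-base figures here are placeholders, not the official
    amounts for any particular year.
    """
    # Social Security applies only to wages up to the annual wage base.
    ss_taxable = (min(gross_ytd + gross_this_period, ss_wage_base)
                  - min(gross_ytd, ss_wage_base))
    social_security = max(0.0, ss_taxable) * ss_rate
    # Medicare applies to all covered wages, with no cap.
    medicare = gross_this_period * medicare_rate
    return social_security, medicare

# An employee near the cap: only part of this check is Social Security taxable.
print(fica_withholding(gross_ytd=115_000, gross_this_period=5_000))
```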
740
3,484
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.5625
3
CC-MAIN-2020-40
latest
en
0.936662
https://gmatclub.com/forum/the-average-of-four-positive-integers-is-30-how-many-2480.html
1,498,469,424,000,000,000
text/html
crawl-data/CC-MAIN-2017-26/segments/1498128320695.49/warc/CC-MAIN-20170626083037-20170626103037-00519.warc.gz
757,632,803
46,364
# The average of four positive integers is 30, how many

Posted 18 Sep 2003, 12:39, by praetorian123

The average of four positive integers is 30, how many integers of these four are larger than 30?

(1) None of these four integers is more than 60;
(2) Two integers are 9 and 10, respectively.

Re: DS: Four Positive Integers (posted 18 Sep 2003, 15:55)

praetorian123 wrote: [question quoted above]

I think the answer is C. Since the average is 30, the sum of the four positive integers is 120. (1) is not sufficient. Given (2), 120 - 19 = 101. To get 101, you can have a combination of 1 + 100 or of 50 + 51, so you cannot determine the answer. Given (1) and (2), you can say for sure that there are 2 integers greater than 30.
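A brute-force check of the data-sufficiency reasoning (my own verification script, not part of the thread; it takes a few seconds to run):

```python
from itertools import combinations_with_replacement

results = {"(1)": set(), "(2)": set(), "(1)+(2)": set()}
for quad in combinations_with_replacement(range(1, 118), 4):
    if sum(quad) != 120:
        continue
    over = sum(v > 30 for v in quad)   # how many of the four exceed 30
    if max(quad) <= 60:
        results["(1)"].add(over)
    if 9 in quad and 10 in quad:
        results["(2)"].add(over)
    if max(quad) <= 60 and 9 in quad and 10 in quad:
        results["(1)+(2)"].add(over)

print(results)  # each statement alone allows several counts; together they force exactly 2
```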
549
2,020
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.09375
4
CC-MAIN-2017-26
longest
en
0.901278
https://rigtriv.wordpress.com/2010/06/29/ictp-day-12-end-of-the-summer-school/
1,571,698,540,000,000,000
text/html
crawl-data/CC-MAIN-2019-43/segments/1570987795253.70/warc/CC-MAIN-20191021221245-20191022004745-00190.warc.gz
680,584,784
28,031
## ICTP Day 12 – End of the Summer School The summer school is over. Tomorrow begins the conference, including talks from such luminaries as Arapura, Illusie, Griffiths…and that’s just the first three talks tomorrow. I’ll be talking on Thursday at 17:05, and until then, it’s polishing time for my talk, and working on a new lead on the research it’s expositing. Meanwhile, today’s notes are somewhat sparse. I’d lost track of the Shimura Varieties thread days ago, and today I lost the Beilinson-Bloch thread, sadly. Hopefully I’ll work it out later, but my lack of arithmetic background has caught up with me, I couldn’t keep everything straight…well. Here are the notes from today; tomorrow’s should be more modular: 1. Kerr – Deligne’s Theorem on Abelian Varieties, Part II Let ${A}$ be a CM abelian variety, that is, an abelian variety such that ${MT(H^1(A))}$ is abelian. Now, if ${t\in H^{2p}(A^{an},\mathbb{Q})\cap F^pH^{2p}_{dR}(A)}$ and ${\sigma\in \mathrm{Aut}(\mathbb{C}/\mathbb{Q})}$ then we want to show that ${t^\sigma\in F^pH^{2p}_{dR}(A^\sigma)}$ lives in ${H^{2p}(A^{an,\sigma},\mathbb{Q})}$. Let ${E/\mathbb{Q}}$ be a CM field of degree ${2e}$ such that ${E}$ is totally imaginary and there exists ${p\in Gal(E/\mathbb{Q})}$ with ${p^2=\mathrm{id}}$, ${\phi\circ p=\bar{\phi}}$ for all ${\phi\in \hom(E,\mathbb{C})}$. Now, take ${F}$ to be the totally real fixed field, and ${\xi}$ such that ${E=F(\xi)}$, and ${\xi^2\in F}$ and ${\sqrt{-1}\phi_i(\xi)>0}$ for ${i=1,\ldots,e}$ with ${\hom(E,\mathbb{C})}$ generated by ${\Phi=\{\phi_1,\ldots,\phi_e,\bar{\phi}_1,\ldots,\bar{\phi}_e\}}$. We call ${(E,\Phi)}$ the CM type of ${E}$. Now, consider ${A/\mathbb{C}}$ an abelian variety with ${E\rightarrow \mathrm{End}(A)\otimes\mathbb{Q}=\mathscr{E}}$. Then ${V=H^1(A,\mathbb{Q})}$ is an ${E}$-vector space of even dimension ${d}$ and ${\dim A=ed=D}$. Now, ${V}$ is self-dual, and so ${E}$ acts on ${V^\vee}$ and we have a natural quotient map ${\bigwedge^d V^\vee\rightarrow \bigwedge_E^d V^\vee}$, and the dual is an inclusion defined over ${E}$. ${E}$ is a ${\mathbb{Q}}$-vector space of dimension ${2e}$ and it acts on ${E\otimes_\mathbb{Q}\mathbb{C}=\oplus_{\phi\in \hom(E,\mathbb{C})} \mathbb{C}_\phi}$ and similarly for ${V}$, and we have $\displaystyle \begin{array}{ccccc}(\bigwedge^d_E V)_\mathbb{C}\cong \oplus \bigwedge_\mathbb{C}^d V_{\phi_i} & & \to & & \oplus_{\sum d_i=d} (\otimes_i \bigwedge_\mathbb{C}^{d_i} V_{\phi_i})\cong (\bigwedge^d V)_\mathbb{C} \\ \uparrow &&&& \uparrow\\ \bigwedge^d_E V & & \to & & \bigwedge^d V\end{array}$ The HS on ${V}$ may be viewed as ${\phi:\mathbb{U}\rightarrow GL(V)}$ taking ${z}$ to the ${\mathbb{C}}$-linear endomorphism of multiplication by ${z^{1-0}}$ on ${V^{1,0}}$ and ${z^{0-1}}$ on ${V^{0,1}}$, and this must commute with ${v(E)}$. Therefore, ${V_{\phi_i}=(V_{\phi_i}\cap V^{1,0})\oplus (V_{\phi_i}\cap V^{0,1})=V^{1,0}_{\phi_i}\oplus V^{0,1}_{\phi_i}}$, whose summands are of dimensions ${a_i}$ and ${b_i}$ with ${a_i+b_i=d_i}$. So the Hodge type of ${\bigwedge^d_\mathbb{C} V_{\phi_i}\cong \bigwedge_\mathbb{C}^{a_i}V^{1,0}_{\phi_i}\otimes \bigwedge^{b_i}_\mathbb{C} V^{0,1}_{\phi_i}}$ is ${(a_i,b_i)}$. Conclusion: If ${\dim(V^{1,0}_{\phi_i})=d/2}$ for each ${i=1,\ldots,2e}$, then ${\bigwedge^d_E V\subset \bigwedge^d_\mathbb{Q} V}$ consists of Hodge classes (the Weil classes). 
If ${A_0}$ is an abelian variety of dimension ${d/2}$ and ${A=A_0\otimes_\mathbb{Q} E=A_0\times\ldots\times A_0}$ ${2e}$ times, this is then ${\mathbb{C}^{d/2}\otimes \mathbb{C}^{2e}/\Lambda\otimes\mathscr{O}_E}$. Let ${V=H^1(A,\mathbb{Q})}$; this is just ${H^1(A_0,\mathbb{Q})\otimes_\mathbb{Q} E}$, and so taking ${E}$ to act on the factor of ${E}$, we get ${V_{\phi_i}\cong V_0\otimes \mathbb{C}_{\phi_i}\cong V_{i,\mathbb{C}}}$. This gives us that ${\bigwedge^d V_{\phi_i}\cong \bigwedge^d V_{i,\mathbb{C}}=H^d(A_0,\mathbb{C})\cong H^{d/2,d/2}(A_0)}$. Moreover, ${\mathrm{Aut}(\mathbb{C})}$ changes neither the product structure on ${A}$, the endomorphisms (which are defined by cycles in ${A\times A}$) nor the class of ${[p]}$ on ${A_0}$. Thus, ${\bigwedge^d_E V}$ in this case consists of absolute Hodge classes. Now, think of ${V}$ as a fixed ${\mathbb{Q}}$-vector space of dimension ${D}$ with nondegenerate alternating form ${Q:V\times V\rightarrow \mathbb{Q}}$. Let ${\phi}$ be any weight 1 Hodge structure on ${V}$ polarized by ${Q}$ and ${E\rightarrow \mathrm{End}(V,\phi)}$ an isomorphism (in such a way that ${Q}$ gives ${V^{1,0}_{\phi_i}}$ and ${V^{0,1}_{\phi_i}}$). We impose the condition that ${\dim V^{1,0}_{\phi_i}=d/2}$ for all ${i}$. Then there exists a unique ${E}$-Hermitian form ${\psi:V\times V\rightarrow E}$ with ${Q=\mathrm{tr}_{E/\mathbb{Q}}(\mathscr{E}\cdot \psi)}$ and ${\phi}$ stabilizes ${\psi}$ and commutes with ${i(E)}$. Hence, ${M_\phi\subset \mathrm{Aut}_E V\cap Sp(V,\mathbb{Q})=\mathrm{Res}_{F/\mathbb{Q}} U_E(V,\psi)}$ and ${X=M_\phi(\mathbb{R})^+}$, ${\phi\subset h^D}$ is a MT domain which precisely classifies the abelian varieties (or HS’s) satisfying the above conditions, i.e. precisely the HS for which ${\bigwedge^d_EV\subset\bigwedge^d V}$ consists of Hodge classes. Now, for ${\Gamma}$ a torsion-free congruence subgroup, ${\mathcal{A}\rightarrow \Gamma\backslash X}$ is by the Baily-Borel theorem a quasi-projective algebraic variety parameterizing such ${A}$. Applying Principle B again leads to: Weil classes on “Weil algebraic varieties” are absolute Hodge. The rest of Deligne’s proof: Let ${M}$ be cut out by ${Hg'_A}$ and ${\check{M}}$ be cut out by ${{AH_g}'_A}$ (the Hodge and absolute Hodge tensors); then: If a tensor ${t\in T^{k,\ell} H^1(A,\mathbb{Q})}$ is fixed by ${\check{M}}$, then it is absolute Hodge. For CM abelian varieties, Deligne shows that ${\check{M}\supseteq M}$ is an equality by producing enough absolute Hodge classes to push ${\check{M}}$ inside ${M}$. He does this by looking at endomorphisms of the CM field, ${A_{\sigma\Phi}\rightarrow A_\Pi}$ and Weil Hodge classes. This is dense on ${\prod_{\Phi_i} A_{\Phi_i}}$. 2. Green 4 Today we’re going to look at cycles over ${k}$ on a variety ${X}$ defined over ${\mathbb{Q}}$. Let ${X}$ be defined over ${\mathbb{Q}}$. Then ${CH^p(X(\mathbb{Q}))_\mathbb{Q}}$ is captured by cycle classes and ${\mathcal{AJ}^p_X\otimes\mathbb{Q}}$. That is, ${F^2=0}$. The Conjecture in fact says that ${F^m=0}$ for ${m\geq}$ the transcendence degree of ${k}$, plus two, for ${X}$ defined over ${k}$. 
Now, if ${X}$ is defined over ${\mathbb{Q}}$ and ${k}$ a finitely generated extension of ${\mathbb{Q}}$, we can find ${S}$ with ${k=\mathbb{Q}(S)}$, and set ${\mathcal{X}=X\times S}$, and for any cycle ${Z\in Z^p(X(k))}$ we can spread it to ${\mathcal{Z}\in Z^p(X\times S(\mathbb{Q}))}$, and there will exist a proper subvariety ${W\subset S}$, defined over ${\mathbb{Q}}$ and of lower dimension, with ${\mathcal{W}\in Z^{p-1}(X\times W)}$ such that ${\mathcal{Z}\rightarrow \mathcal{Z}+\mathcal{W}}$. If the conjecture on varieties over ${\mathbb{Q}}$ is ok, then ${Z\cong 0}$ in rational equivalence over ${\mathbb{Q}}$ for some ${\mathcal{W}}$, so ${[\mathcal{Z}+\mathcal{W}]}$ is torsion, and so its Abel-Jacobi image is also zero after tensoring with ${\mathbb{Q}}$. (I lost track of the lecture here) 3. Kerr 5 No notes taken 4. Green 5 No notes taken
2,672
7,339
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 143, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.65625
3
CC-MAIN-2019-43
latest
en
0.831395
https://ohio-acreage-for-lease.com/qa/question-what-is-the-distance-around-1-acre.html
1,628,198,970,000,000,000
text/html
crawl-data/CC-MAIN-2021-31/segments/1627046157039.99/warc/CC-MAIN-20210805193327-20210805223327-00219.warc.gz
440,046,371
9,000
# Question: What Is The Distance Around 1 Acre?

## How many laps around 5 acres is a mile?

3.19 laps. Assuming a circular enclosure, the smallest enclosure for the 5 acres, the circumference is 1,654 feet. 1 mile is 5,280 feet. There are 3.19 laps to a mile on a circular track.

## What is the cheapest fence to install?

Though yard fencing can be expensive, we’ve rounded up some cheap fence ideas to fit nearly any budget: vinyl fencing, split rail and mesh, concrete fencing, barbed wire, living fences, lattice fencing, hog wire, and chicken wire. A chicken wire garden fence is likely the best-known affordable fencing. (Nov 10, 2020)

## How many linear feet is 5 acres?

The distance around a 5 acre property depends on the shape, not just the area. If this property is a square then each side is of length 466.7 feet and would require 4 × 466.7 = 1,866.8 feet of fencing.

## How many yards long is 10 acres?

48,400 square yards. If one acre is 43,560 square feet, then 10 acres of land is a whopping 435,600 square feet. That’s the same as 48,400 square yards!

## What is a linear foot?

A linear foot is exactly what it sounds like: a measurement that is 12 inches (one foot) long and extends in a straight (or linear) line.

## How many feet is the perimeter of 1 acre?

One acre of land comprises 43,560 square feet. It is still pretty easy to figure the perimeter if you know the area of your property. If you assume that your property has 4 equal sides, then you can take the square root (√) of 43,560 and find out that each side would measure 209′.

## What lot size is 1/2 acre?

An acre is 43,560 square feet, so half an acre is 43,560/2 = 21,780 square feet. If your 1/2 acre plot of land is a square with area 21,780 square feet, then each side is of length √21780 feet.

## How many acres is 1 mile by 1 mile?

640 acres. 1 Square Mile = 640 Acres. Square mile is an imperial and United States Customary area unit.

## How much does a 100 foot fence cost?

Yard Fence Costs Per Foot:

| Linear Foot | Cheaper (Wire or Electric) | Moderate (Wood) |
| --- | --- | --- |
| 8 | \$10 – \$50 | \$100 – \$200 |
| 100 | \$100 – \$600 | \$1,000 – \$2,000 |
| 150 | \$150 – \$1,000 | \$1,500 – \$3,000 |
| 300 | \$300 – \$1,800 | \$3,000 – \$6,000 |

## How many linear feet is 4 acres?

If one acre is a square (a two-dimensional figure with four straight sides, whose four interior angles are right angles and whose four sides are of equal length), each side is 208.7103 linear feet (the square root of 43,560). The perimeter would be 834.8413 linear feet (208.7103*4).

## How much does it cost to fence 5 acres?

For a 5 acre lot, you will need a total length of 1,866.8 feet of fence. If you have a perfectly square-shaped yard, then each side will be 1866.8 / 4 = 466.7 feet.

## What is the distance around an acre of land?

An acre is a unit of area and it can be in any shape. An acre is 43,560 square feet, so 2 acres is 87,120 square feet. If it is in the shape of a square then each side is √87120 = 295.16 feet, so once around is 1,180.64 feet. There are 5,280 feet in a mile, so approximately four and a half times around is a mile.

## How much does it cost to fence 1 acre?

Fencing Cost Per Acre: The cost to fence 1 acre runs a minimum of \$1,050 and a maximum of \$33,400, with most homeowners spending an average price of \$2,016 to \$9,011. The cheapest backyard fence is barbed wire, which costs as little as \$1,050 an acre, whereas a split rail wood fence costs about \$7,000 for 1 acre.

## What is the perimeter length of an acre?
834.84 feet. An acre is 43,560 square feet, so if it is a square of side length s feet, then s^2 = 43560. Hence s = √43560 = 208.71 feet. There are 4 sides to the square, so the perimeter is 4 × 208.71 = 834.84 feet.

## What is the perimeter of 1/2 acre?

Well, a half acre is 21,780 square feet. If that were a square plot, each side would be about 147.58′, making the perimeter 590.32 feet.

## Do fences increase property value?

A fence itself does not add as much value to the home when compared to material and construction costs. It will enhance the value of the home only if there is a true need for such an outdoor structure.

## What is the perimeter of 1 hectare?

The hectare (/ˈhɛktɛər, -tɑːr/; SI symbol: ha) is a non-SI metric unit of area equal to a square with 100-metre sides (1 hm²), or 10,000 m², and is primarily used in the measurement of land. There are 100 hectares in one square kilometre.

## What is the circumference of 1 acre?

834.8 feet. A perfectly square acre would have a perimeter of 834.8 feet. A long, thin strip of land 43,560 feet long and one foot wide would have a perimeter of 87,122 feet.
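Most of the numbers quoted in this Q&A come from two formulas: a square of A square feet has perimeter 4·√A, and a circle of area A has circumference 2·√(πA). A short Python sketch reproducing them:

```python
from math import pi, sqrt

SQFT_PER_ACRE = 43_560
MILE_FT = 5_280

def square_perimeter_ft(acres):
    return 4 * sqrt(acres * SQFT_PER_ACRE)

def circle_circumference_ft(acres):
    # area = pi * r^2  ->  circumference = 2 * sqrt(pi * area)
    return 2 * sqrt(pi * acres * SQFT_PER_ACRE)

print(round(square_perimeter_ft(1), 1))                # ~834.8 ft, as quoted above
print(round(circle_circumference_ft(5)))               # ~1654 ft around 5 circular acres
print(round(MILE_FT / circle_circumference_ft(5), 2))  # ~3.19 laps per mile
```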
1,299
4,696
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.828125
4
CC-MAIN-2021-31
latest
en
0.913525
https://www.testpreppractice.net/practice-tests/numeric-entry/ne4.html?testname=GRE
1,563,297,111,000,000,000
text/html
crawl-data/CC-MAIN-2019-30/segments/1563195524679.39/warc/CC-MAIN-20190716160315-20190716182315-00143.warc.gz
876,613,457
7,179
# Numeric Entry Practice Test 4

#### Q.1 From a rectangular cardboard of length 21 cm and breadth 12 cm, a square whose diagonal is of length 3*sqrt(2) cm is cut out. Find the area of the remaining cardboard.

• Explanation: Area of the cardboard = length*breadth = 21*12 = 252 sq.cm. The diagonal of a square of side x cm is equal to x*sqrt(2) cm. Hence, side of square = 3 cm. Area of square = side*side = 3*3 = 9 sq.cm. Area of the remaining cardboard = 252-9 = 243 sq.cm.

#### Q.2 The height of a cone is trebled. The radius of the cone is doubled. By how many times will the volume of the cone increase?

• Explanation: Let the original height be h and the radius of the base be r. Original volume = (1/3)*pi*r^2*h. New height will be 3h. New radius will be 2r. New volume = (1/3)*pi*(2r)^2*(3h) = 12*(1/3)*pi*r^2*h. New volume = 12*original volume. Hence, the volume increases 12 times. [pi=22/7, r^2=r*r]

#### Q.3 A rectangular reservoir is 120 m long and 75 m wide. The cross-sectional area of the pipe through which water flows is 0.04 sq.m. and the water level in the reservoir rises 2.4 meters in 18 hours. Find the speed of water flowing into the reservoir.

• Explanation: Cross-sectional area of the pipe = 0.04 sq.m. Volume of water put into the reservoir in 18 hours = 120*75*2.4 = 21600 cubic m. Volume of water put into the reservoir per hour = 21600/18 cubic m/hr. Required speed = Volume/area = (21600/18)/0.04 = 30000 m/hr.

#### Q.4 P(x-1,3) : P(x,4) = 1: 9, find x.

• Explanation: P(x-1,3) : P(x,4) = 1:9. (x-1)!/(x-1-3)! : x!/(x-4)! = 1:9. (x-1)!/(x-4)! : x(x-1)!/(x-4)! = 1:9. 1:x = 1:9. x = 9.

#### Q.5 If P(10,r) = 30240, then find r.

• Explanation: P(10,r) = 30240. 10!/(10-r)! = 30240 = 3024*10 = 336*9*10 = 42*8*9*10 = 6*7*8*9*10 = 10!/5! = 10!/(10-5)!. Hence, r = 5.

#### Q.6 Find the sum of the first 20 terms of the AP 1, 4, 7, 10...

• Explanation: The first term of the AP, a, is 1 and the common difference, d, is 3. The sum of n terms is given by Sn = (n/2)[2a+(n-1)d]. Sum of 20 terms = (20/2)[2*1+(20-1)*3] = 10(2+19*3) = 10(59) = 590.

#### Q.7 Tim buys two items at different rates for a total of Rs. 410 and sells one at a loss of 20% and the other at a gain of 25%. If both the items were sold at the same price, then find the cost price of the cheaper item.

• Explanation: Let CP and SP be cost price and selling price respectively. Let the CP of the first item be Rs.x and that of the second item be Rs.(410-x). Selling price of the first item: SP = CP(100-loss%)/100 = x(100-20)/100 = 8x/10. Selling price of the second item: SP = CP(100+gain%)/100 = (410-x)(100+25)/100 = (410-x)(125/100). Since both the items were sold at the same price, we have 8x/10 = (410-x)(125/100), so 80x = 410*125 - 125x, 80x + 125x = 51250, 205x = 51250, x = 250. The second item would cost 410-250 = 160. The cheaper item costs Rs.160.

#### Q.8 Find the length of a parallel side of the trapezium if one of its parallel sides is 70 m long and its area is 2500 sq.m. The distance between the parallel sides is 40m.

• Explanation: Let the unknown length be x m. Area of trapezium = (1/2)*sum of parallel sides*distance between parallel sides. 2500 = (1/2)*(70+x)*40. 70+x = 2500*2/40 = 125. x = 125-70 = 55. The other side of the trapezium is 55 m long.

#### Q.9 Find (b-a) when x^4+ax^2+bx+5 is exactly divisible by x^2+3x+2.
[x^4=x*x*x*x]

• Explanation: x^2+3x+2 = (x+1)(x+2), so (x+1) and (x+2) should be factors of x^4+ax^2+bx+5. Putting x = -1 and x = -2, we get: (-1)^4+a(-1)^2+b(-1)+5 = 0, so 1+a-b+5 = 0 and a-b = -6 ...(1). (-2)^4+a(-2)^2+b(-2)+5 = 0, so 16+4a-2b+5 = 0 and 4a-2b = -21 ...(2). Multiplying (1) by 4 and subtracting (2) from it, we get 4a-4b-4a+2b = -24+21, so -2b = -3 and b = 3/2. a = b-6 = 3/2-6 = (3-12)/2 = -9/2. b-a = 3/2+9/2 = 12/2 = 6.

#### Q.10 x+1/x = 5. Find x^3+1/x^3. [x^3=x*x*x]

• Explanation: x+1/x = 5. (x+1/x)^3 = 5^3. x^3+1/x^3+3(x+1/x) = 125. x^3+1/x^3 = 125-3*5 = 125-15 = 110.

#### Q.11 Pam lent Rs. 1800 to Ritu. She also lent Rs 2250 to Tammy for four years. They both returned the same interest, charged at the same rate of interest of 3% per annum. For how many years did Ritu borrow the money?

• Explanation: Let P, R and T be the principal, rate and time for the simple interest SI. SI = P*R*T/100. Since the SI is the same for both of them, we have 1800*3*T/100 = 2250*3*4/100. T = 2250*3*4/(1800*3) = 5. The money was lent for 5 years.

#### Q.12 The length of the tangent drawn from an exterior point P to the circle is 63 cm. The point P is at a distance of 65 cm from the centre of the circle. Find the radius of the circle.

• Explanation: Let the centre of the circle be the point O and let the point at which the tangent drawn from point P meets the circle be T. PTO will be a right triangle, right angled at T. PT = 63 cm and PO = 65 cm. Applying the Pythagoras theorem to the triangle PTO, we get PO^2 = PT^2+TO^2, so 65^2 = 63^2+TO^2, 4225 = 3969+TO^2, TO^2 = 4225-3969 = 256, TO = 16 cm. The radius of the circle is 16 cm. [PO^2=PO*PO]

#### Q.13 A hollow cylinder of height 3 m is melted to form a solid cylinder of the same height. The external and internal radii of the hollow cylinder are 29 cm and 20 cm respectively. Find the radius of the solid cylinder.

• Explanation: Volume of a cylinder = pi*r^2*h, where r is the radius and h is the height. Internal radius = 20 cm and external radius = 29 cm. Volume of metal used = pi*(29)^2*h - pi*(20)^2*h = pi*h[841-400] = pi*h*441. Volume of solid cylinder = pi*h*441 = pi*h*R^2, where R is the radius of the solid cylinder. R^2 = 441, so R = sqrt(441) = 21. The radius of the solid cylinder is 21 cm. [pi=22/7, r^2=r*r]

#### Q.14 A shopkeeper sells each book for Rs 1134 after giving a discount of 19% on the marked price. Had he sold the books at the printed price, he would have earned a profit of 40%. Find the cost price of each book.
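Several of these answers are easy to sanity-check programmatically; the short Python script below (my own addition, not part of the practice test) verifies a few of them, including the cost price implied by Q.14's figures:

```python
from math import factorial, isqrt, sqrt

# Q.1: 21*12 cardboard minus a square of diagonal 3*sqrt(2) (side 3)
side = 3 * sqrt(2) / sqrt(2)
assert round(21 * 12 - side**2, 6) == 243

# Q.5: P(10, r) = 30240  ->  r = 5
assert [r for r in range(11) if factorial(10) // factorial(10 - r) == 30240] == [5]

# Q.6: sum of the first 20 terms of 1, 4, 7, ...
assert sum(1 + 3 * k for k in range(20)) == 590

# Q.10: x + 1/x = 5  ->  x^3 + 1/x^3 = 5^3 - 3*5 = 110
x = (5 + sqrt(21)) / 2          # a root of t^2 - 5t + 1 = 0
assert round(x**3 + 1 / x**3, 6) == 110

# Q.12: tangent 63, centre distance 65  ->  radius 16
assert isqrt(65**2 - 63**2) == 16

# Q.14: marked price = 1134 / (1 - 0.19); cost price = marked price / 1.40
assert round(1134 / 0.81 / 1.40, 2) == 1000.00

print("all checks pass")
```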
2,189
5,990
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.5625
5
CC-MAIN-2019-30
latest
en
0.801888
https://www.esaral.com/q/draw-a-right-triangle-in-which-the-sides-other-than-hypotenuse-are-of-lengths-5-cm-and-4-cm-86699
1,721,553,762,000,000,000
text/html
crawl-data/CC-MAIN-2024-30/segments/1720763517663.24/warc/CC-MAIN-20240721091006-20240721121006-00851.warc.gz
655,490,954
12,268
# Draw a right triangle in which the sides (other than hypotenuse) are of lengths 5 cm and 4 cm.

Question: Draw a right triangle in which the sides (other than hypotenuse) are of lengths 5 cm and 4 cm. Then construct another triangle whose sides are 5/3 times the corresponding sides of the given triangle.

Solution: We are given a right triangle with $A B=5 \mathrm{~cm}$, $A C=4 \mathrm{~cm}$ and $\angle A=90^{\circ}$, and we must then construct a triangle similar to it whose sides are $(5 / 3)^{\text {th }}$ of the corresponding sides of $\triangle A B C$. We follow these steps:

Steps of construction

Step: I - First of all, draw the line segment $A B=5 \mathrm{~cm}$.

Step: II - At $A$, draw a ray making an angle of $90^{\circ}$ with $A B$.

Step: III - With $A$ as centre and radius $4 \mathrm{~cm}$, mark the point $C$ on that ray.

Step: IV - Join $B C$ to obtain $\triangle A B C$.

Step: V - Below $A B$, make an acute angle $\angle B A X$.

Step: VI - Along $A X$, mark off five points $A_{1}, A_{2}, A_{3}, \mathrm{~A}_{4}$ and $\mathrm{A}_{5}$ such that $A A_{1}=A_{1} A_{2}=A_{2} A_{3}=A_{3} A_{4}=A_{4} A_{5}$.

Step: VII - Join $A_{3} B$.

Step: VIII - Since we have to construct a triangle each of whose sides is $(5 / 3)^{\text {th }}$ of the corresponding sides of $\triangle A B C$, we draw the line $A_{5} B^{\prime}$ through $A_{5}$ with $A_{5} B^{\prime} \| A_{3} B$, meeting $A B$ extended at $B^{\prime}$.

Step: IX - From the point $B^{\prime}$ draw $B^{\prime} C^{\prime} \| B C$, meeting $A C$ extended at $C^{\prime}$.

Thus, $\triangle A B^{\prime} C^{\prime}$ is the required triangle, each of whose sides is $(5 / 3)^{\text {th }}$ of the corresponding sides of $\triangle A B C$.
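The scale factor can be sanity-checked with coordinates: put A at the origin, B at (5, 0) and C at (0, 4); scaling by 5/3 about A produces B′ and C′. A small Python check of my own:

```python
from math import dist

A, B, C = (0, 0), (5, 0), (0, 4)
k = 5 / 3

def scale(p):
    # Dilation about A = origin by factor k
    return tuple(k * v for v in p)

for p, q in ((A, B), (A, C), (B, C)):
    ratio = dist(scale(p), scale(q)) / dist(p, q)
    print(round(ratio, 6))   # each side ratio is 5/3, i.e. ~1.666667
```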
512
1,587
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
5
5
CC-MAIN-2024-30
latest
en
0.761399
https://www.weegy.com/?ConversationId=5585178E
1,628,106,438,000,000,000
text/html
crawl-data/CC-MAIN-2021-31/segments/1627046154897.82/warc/CC-MAIN-20210804174229-20210804204229-00490.warc.gz
1,022,737,099
10,356
How do I solve this math problem 1/3 of 18

Question. Updated 4/12/2014 1:05:12 PM.

User: How do I solve this math problem 1/3 of 18

Weegy: To solve 1/3 of 18, You divide 18 by 3. The answer is 6.

User: Thank you
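For completeness, "one third of 18" in code is a one-line fraction multiplication (Python standard library, unrelated to the original site):

```python
from fractions import Fraction

print(Fraction(1, 3) * 18)  # 6
```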
862
2,844
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.59375
3
CC-MAIN-2021-31
longest
en
0.931233
https://business-finance.blurtit.com/1205864/how-to-calculate-the-recoverable-amount-of-the-assets
1,620,591,847,000,000,000
text/html
crawl-data/CC-MAIN-2021-21/segments/1620243989012.26/warc/CC-MAIN-20210509183309-20210509213309-00147.warc.gz
177,910,833
11,670
# How To Calculate The Recoverable Amount Of The Assets? The recoverable amount of assets is the higher figure of an asset's fair value minus costs to sell, otherwise known as net selling price, and its value in use, where fair value is the amount obtainable from the sale of an asset in a transaction between knowledgeable parties. The recoverable amount of assets can be calculated in a number of steps: • Determining Recoverable Amount • Fair value minus costs to sell • Value in use • Cash flow projections Determining the recoverable amount of assets begins if fair value minus costs to sell or value in use exceeds the carrying amount. If fair value minus costs to sell cannot be calculated, then the recoverable amount is the value in use amount. For assets you need to get rid of, the recoverable amount is calculated with fair value minus costs to sell. When working out fair value minus costs to sell you must consider if there is a binding sale agreement or sale contract. If this is the case then use the price made under the agreement minus costs of disposal. However, if there is a readily active market for that type of asset, you should use market price minus costs of disposal. Market price is the current bid price if it's available; otherwise use the price in the most recent transaction. If there is no active market, then use the best guess of the asset's selling price minus costs of disposal. The costs of disposal are the direct added costs. You must then calculate the value in use. The calculation of value in use should involve the estimate of future cash flows expected from the asset, the expectations of variations in timing of future cash flows and the price for bearing the uncertainty inherent in the asset. Other factors, including liquidity, should be included. Lastly, cash flow projections should be calculated on reasonable assumptions. Five years is the cut-off point for budgets and forecasts. The most recent budgets and forecasts should be used if possible. The differences between past cash flow projections and actual cash flows should be determined. Estimates of cash flows in the future should not include cash inflows/outflows from financing activities, or income tax receipts or payments.
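The decision rule described above is compact enough to state as code. A Python sketch following the text's terminology (None models a fair value that cannot be determined; the function names are mine):

```python
def recoverable_amount(net_selling_price, value_in_use):
    """Higher of fair value less costs to sell and value in use."""
    if net_selling_price is None:
        # Fair value minus costs to sell cannot be calculated,
        # so the recoverable amount is the value in use.
        return value_in_use
    return max(net_selling_price, value_in_use)

def exceeds_carrying_amount(carrying, net_selling_price, value_in_use):
    """True when the recoverable amount covers the carrying amount."""
    return recoverable_amount(net_selling_price, value_in_use) >= carrying

print(recoverable_amount(90_000, 110_000))                # 110000
print(exceeds_carrying_amount(120_000, 90_000, 110_000))  # False
```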
441
2,255
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.78125
3
CC-MAIN-2021-21
longest
en
0.926221
https://math.stackexchange.com/questions/3050972/proof-that-xn4-bmod-10-xn-bmod-10-for-n-ge-1/3050985
1,563,572,190,000,000,000
text/html
crawl-data/CC-MAIN-2019-30/segments/1563195526359.16/warc/CC-MAIN-20190719202605-20190719224605-00091.warc.gz
466,321,360
43,190
# Proof that $x^{(n+4)} \bmod 10 = x^n \bmod 10\,$ for $\,n\ge 1$ While solving a programming challenge in which one should efficiently compute the last digit of $$a^b$$, I noticed that apparently the following holds (for $$n > 0$$) $$x^{(n+4)} \mod 10 = x^n \mod 10$$ How can this be proven? • the standard notation is just $x^{n+4}\equiv x^n\bmod 10$. When working with the modulus we only write $\bmod 10$ at the end of each line of identities, you don't need to repeat $\bmod 10$ next to every quantity – Masacroso Dec 24 '18 at 5:59 • Sorry for a digression to the notation issues. @Masacroso I'd say that both notations are quite common: mod is sometimes used to denote a binary operation - in which case \bmod is used: $x^n \bmod 10$ $x^n \bmod 10$. But if we use congruences, the command \pmod is used to get something like $x^{n+4}\equiv x^n \pmod{10}$ $x^{n+4}\equiv x^n \pmod{10}$. – Martin Sleziak Dec 24 '18 at 8:59 Consider how the difference, i.e., $$x^{n + 4} - x^{n} = x^{n} \left( x^4 - 1 \right) = x^{n} \left( x^2 - 1 \right) \left( x^2 + 1 \right)$$ behaves for all cases of $$x \mod 10$$ from $$0$$ to $$9$$, inclusive. For $$x$$ being $$0$$, the result is $$0$$. For $$x$$ being an even positive value, $$x^n$$ is a multiple of 2, while either $$x^2 + 1$$ or $$x^2 - 1$$ is a multiple of 5, so together they are a multiple of $$10$$. Finally, for $$x$$ being an odd value, consider the cases: $$1$$: $$x^2 - 1$$ is $$0$$ $$3$$: $$x^2 + 1$$ is $$10$$, i.e., $$0 \mod 10$$ $$5$$: $$x^n$$ is a multiple of $$5$$ and $$x^2 - 1$$ is a multiple of $$2$$, so together the product is congruent to $$0$$ $$7$$: $$x^2 + 1$$ is $$50$$ $$9$$: $$x^2 - 1$$ is $$80$$ This shows that the result is congruent to $$0$$ in all cases. The other answers here are generally shorter and simpler, so they're better if you're able to use them. However, this is a fairly general way to check basically any congruence operation wherever it can be relatively easily used (e.g., where the mod divisor is not a variable or too large). • Actually, a general method requires a general result like Fermat or Euler's theorem (I show one way in my answer). The above method works only in very special cases. – Bill Dubuque Dec 25 '18 at 19:03 • @Bill Dubuque I meant "general" in a different way, in that directly or indirectly all results boil down to ensuring all the congruence values provide desired result such as I showed. You are right that my method will not work well, or at all, in many cases such as where the modulo value is variable or a very large value. – John Omielan Dec 25 '18 at 23:35 $$x^4\equiv1\pmod{10}$$ for $$x$$ and $$10$$ coprime, by Euler's theorem, since $$\varphi (10)=4$$. (The proof for $$x$$ not coprime to $$10$$ can be found in some of the other answers here.)
– Chris Custer Dec 25 '18 at 17:45
• If your proof relies on other answers then you should explicitly mention that. Otherwise the answer could wrongly mislead readers into thinking that nothing more needs to be said (given the number of votes I suspect this did actually occur). – Bill Dubuque Dec 25 '18 at 17:46
• @BillDubuque. Ok, will do. – Chris Custer Dec 25 '18 at 17:48
• Thanks for adding the note. Btw, note that your answer was posted first so readers could not have relied on other answers for this crucial fact. – Bill Dubuque Dec 25 '18 at 17:55

This is true because of the periodicity of the last digits of numbers raised to powers. If you check for $$x=2,3,4\ldots n$$ you will see that after every $$4$$th power the last digit is the same: $$2^1 = 2$$ and $$2^5 = 32$$, thus the last digit is the same. $$x^n \mod 10$$ is essentially asking for the last digit. Thus, $$x^{n+4} \mod 10 = x^n \mod 10$$ Some numbers, for example $$9$$, have a periodicity of $$2$$ when raised to powers, but the units digit of the fifth power is still the same as that of the first.

• While your argument is true, this isn't a proof per se. – Hubble Dec 24 '18 at 5:51
• Can it be written in such a way to make it a proof, or is it just a heuristic? – Prakhar Nagpal Dec 24 '18 at 5:52

We want to show that $$x^{n+4}-x^n=x^n(x^4-1)$$ is a multiple of $$10$$. We first notice that if $$x^n$$ is odd then $$x^4-1$$ will be even and vice versa. Consequently, the product will always be even. If $$x$$ is divisible by $$5$$ then the product is divisible by $$10$$ because we have an even product which is divisible by $$5$$. If $$x$$ is not divisible by $$5$$ then $$x=5m+k$$ for some $$m$$ and $$k$$ where $$k\in\{1,2,3,4\}$$. Now, $$x^4-1=(5m)^4+4(5m)^3k+6(5m)^2k^2+4(5m)k^3+k^4-1$$ and every term is divisible by $$5$$ with the possible exception of $$k^4-1$$. But there are only four values of $$k$$ and if you raise each to the fourth power and subtract $$1$$ you get a multiple of $$5$$. Once again, this gives us an even number divisible by $$5$$ which is a multiple of $$10$$.

It is the special case $$\,p,q,k = 2,5,4\,$$ below.

Lemma $$\,p\neq q\,$$ primes & $$\, \color{#c00}{p\!-\!1,\,q\!-\!1\mid k}\,\Rightarrow\, pq\mid\smash[t]{\overbrace{ x^n(x^k\!-\!1)}^{\large a}}\,$$ for all $$\,x\,$$ & $$\,n>0$$

Proof $$\ $$ $$\,p,q\,$$ are coprime so $$\,pq\mid a\iff p,q\mid a,\,$$ by Euclid / unique factorization. When $$\, p\mid x\,$$ then $$\,p\mid x\mid x^n\mid a\,$$ by $$\,n>0,\,$$ hence $$\,p\mid a\,$$ by transitivity of divisibility, else: $$\,\ p\nmid x\,$$ so $$\!\bmod p\!:\ x\not\equiv 0\,$$ so $$\,x^k \equiv (\color{#0a0}{x^{p-1}})^{\smash[t]{\Large\color{#c00}{\frac{k}{p-1}}}}\!\!\equiv\color{#0a0} 1\,$$ by $$\rm\color{#0a0}{Fermat},\,$$ so $$\,p\mid x^k\!-1\mid a$$. So in every case $$\,p\mid a,\,$$ so $$\,q\mid a\,$$ by $$\,p,q\,$$ symmetry (i.e. the same proof works for $$q$$).

Remark Above is a special case of this generalization of Euler-Fermat - which often proves handy.
Theorem $$\ $$ Suppose that $$\ m\in \mathbb N\ $$ has the prime factorization $$\:m = p_1^{e_{1}}\cdots\:p_k^{e_k}\ $$ and suppose that for all $$\,i,\,$$ $$\ e\ge e_i\ $$ and $$\ \phi(p_i^{e_{i}})\mid f.\ $$ Then $$\ m\mid a^e(a^f-1)\ $$ for all $$\: a\in \mathbb Z.$$

Proof $$\ $$ Notice that if $$\ p_i\mid a\ $$ then $$\:p_i^{e_{i}}\mid a^e\ $$ by $$\ e_i \le e.\:$$ Else $$\:a\:$$ is coprime to $$\: p_i\:$$ so by Euler's phi theorem, $$\!\bmod q = p_i^{e_{i}}:\, \ a^{\phi(q)}\equiv 1 \Rightarrow\ a^f\equiv 1\,$$ by $$\: \phi(q)\mid f.\ $$ Since all $$\ p_i^{e_{i}}\ |\ a^e (a^f - 1)\ $$ so too does their lcm = product = $$m$$.

Examples $$\ $$ You can find many illuminating examples in prior questions, e.g. below $$24\mid a^3(a^2-1)$$ $$40\mid a^3(a^4-1)$$ $$88\mid a^5(a^{20}-1)$$ $$6p\mid a\,b^p - b\,a^p$$

If you can prove $$x^4 \equiv 1 \pmod {10}$$ that's enough, as $$x^{n+4}\equiv x^n\cdot x^4 \equiv x^n\cdot 1\equiv x^n\pmod {10}$$ If you google Euler's theorem you will find that this is true for all numbers relatively prime to $$10$$. Intuitively (but hand wavy) if $$x$$ and $$10$$ are relatively prime then $$x^n$$ will be relatively prime to $$10$$ as well. There are only $$4$$ possible last digits that are relatively prime to $$10$$ so $$x^k$$ will cycle through the same four digits. (More or less.)

If $$x$$ is even or divisible by $$5$$: well, if it's both then $$x \equiv 0 \pmod {10}$$ so $$x^k \equiv 0 \pmod {10}$$ and $$x^{n+4} \equiv x^n \equiv 0\pmod {10}$$. Likewise if it's odd but divisible by $$5$$ then $$x^k \equiv 5 \pmod{10}$$ and so $$x^{n+4} \equiv x^n \equiv 5 \pmod {10}$$. Now suppose $$x$$ is even but not divisible by $$5$$. Then $$x^n$$ is even and there are only $$4$$ even digits it could end with, and it cycles through them. More to the point, $$2j \bmod 10 = 2\,(j \bmod 5)$$ and there are only $$4$$ non-zero classes $$\pmod 5$$ so $$x^k$$ just cycles through those.

Because: $$x^{n+4} = x^{n-1}x^5$$ $$x^n = x^{n-1}x$$ $$(ab\bmod 10) = (a\,(b\bmod 10)) \bmod 10$$ ... you only have to prove that $$(x^5 \mod 10)=(x\mod 10)$$. ... which means: $$((x \mod 10)^5\mod 10) = (x \mod 10)$$. Because there are only 10 possible values for $$(x \mod 10)$$ these 10 values can be calculated directly: • $$0^5\mod 10=0 \mod 10=0$$ • $$1^5\mod 10=1 \mod 10=1$$ • $$2^5\mod 10=32 \mod 10=2$$ • $$3^5\mod 10=243 \mod 10=3$$ ... • $$9^5\mod 10=59049 \mod 10=9$$
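A quick empirical companion to the arguments above: the Python sketch below (my own illustration, not from the thread) checks the congruence over all residues mod 10 and applies it to the original programming challenge of computing the last digit of a^b. The helper name last_digit and the spot-check values are assumptions for the example.

```python
# Exhaustive check of x^(n+4) ≡ x^n (mod 10) for n >= 1: only the residue of
# x mod 10 matters, so checking x in 0..9 over a few exponents suffices.
for x in range(10):
    for n in range(1, 9):
        assert pow(x, n + 4, 10) == pow(x, n, 10)

def last_digit(a: int, b: int) -> int:
    """Last digit of a**b for b >= 1, using the length-4 exponent cycle."""
    return pow(a % 10, (b - 1) % 4 + 1, 10)

# Spot check against direct modular exponentiation.
assert last_digit(7, 222) == pow(7, 222, 10)
print("all checks passed")
```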
3,120
8,851
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 155, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.0625
4
CC-MAIN-2019-30
longest
en
0.814174
https://www.teacherspayteachers.com/Product/Math-Task-Cards-Full-Year-Bundle-6th-Grade-Math-2273242
1,519,361,805,000,000,000
text/html
crawl-data/CC-MAIN-2018-09/segments/1518891814393.4/warc/CC-MAIN-20180223035527-20180223055527-00569.warc.gz
909,468,589
21,752
PDF (Acrobat) Document File, 3 MB, 186 pages.

Product Description: This product includes 6th grade math task cards for the 12 different math units shown below. It consists of 1-4 problems for each of the following topics (270 total problems). An answer key IS included! To see a free sample of the type of problems included, download the first unit for free. You can get this resource and many of my other upper elementary/middle school resources at a discount as a part of the following mega bundle! Math Mega Bundle (For Upper Elementary/Middle School Math) PART 2

DECIMALS AND EXPONENTS (1) Multiply Decimals by Whole Numbers (2) Multiply Decimals by Decimals (3) Estimate Quotients (4) Divide Decimals by Whole Numbers (5) Divide Decimals by Decimals (6) Exponents
FRACTION MULTIPLICATION AND DIVISION (1) Estimate Products of Fractions (2) Multiply Fractions and Whole Numbers (3) Multiply Fractions (4) Multiply Mixed Numbers (5) Divide Whole Numbers by Fractions (6) Divide Fractions (7) Divide Mixed Numbers
RATES AND RATIOS (1) Ratios (2) Unit Rates (3) Ratio Tables (4) Equivalent Ratios (5) Ratio and Rate Problems
FRACTIONS, DECIMALS, AND PERCENTS (1) Decimals as Fractions (2) Fractions as Decimals (3) Percents as Fractions (4) Fractions as Percents (5) Percents and Decimals (6) Percents Greater than 100% and Less than 1% (7) Compare and Order Fractions (8) Compare and Order Fractions, Decimals, and Percents (9) Estimate with Percents (10) Percent of a Number
ALGEBRAIC EXPRESSIONS AND PROPERTIES (1) Numerical Expressions (2) Variables and Expressions (3) Write Expressions (4) Algebra: Properties (5) Distributive Property
EQUATIONS (1) Equations (2) Solve and Write Addition Equations (3) Solve and Write Subtraction Equations (4) Solve and Write Multiplication Equations (5) Solve and Write Division Equations (6) Solve and Write Two-Step Equations
FUNCTIONS, INEQUALITIES, AND ABSOLUTE VALUE (1) Graphing Relations (2) Function Tables (3) Function Rules (4) Functions, Equations, and Graphs (5) Inequalities (6) Write and Graph Inequalities (7) One-Step Inequalities (8) Two-Step Inequalities (9) Integers and Absolute Value
(1) Points, Lines, and Planes (2) Measuring Angles (3) Angle Relationships (4) Triangles (6) Properties of Polygons (7) Similar and Congruent Figures (8) Translations (9) Reflections (10) Rotations
PERIMETER, AREA, AND VOLUME (1) Area of Parallelograms (2) Area of Triangles (3) Area of Trapezoids (4) Circumference (5) Area of Circles (6) Perimeter of Composite Figures (7) Area of Composite Figures (8) Volume of Rectangular Prisms (9) Surface Area of Rectangular Prisms
VOLUME AND SURFACE AREA (1) Volume of Triangular Prisms (2) Volume of Pyramids (3) Volume of Cylinders (4) Volume of Cones (5) Surface Area of Cylinders (6) Volume of Composite Figures
DATA AND GRAPHS (1) Mean (2) Median, Mode, and Range (3) Appropriate Measures (4) Frequency Tables (5) Histograms (6) Circle Graphs
PROBABILITY (1) Probability of Simple Events (2) Sample Spaces (3) Fundamental Counting Principle (4) Probability of Independent Events (5) Probability of Dependent Events (6) Make Predictions

Each page has two copies of the problems for that topic, making it easy to print, copy, and cut into half pieces of paper for students to use. The problems are not meant to be a lesson in themselves, but can be used in lots of different ways. Great to use in math centers, as additional problems for students who are finished with work, or as exit slips.
These problems are intended for use in 5th-6th grade, but could also be used for advanced 4th grade students or 7th grade review. Purchase units individually by visiting my store! Enjoy! Also be sure to check out my Math Enrichment Task Cards for upper elementary/middle school math at the following link. Math Enrichment Full-Year Bundle - 6th Grade (Includes 12 units) Get many of my math products at a large discount by purchasing them as a part of my Math Mega Bundle (Part 1). These Math Task Cards are not included in the mega bundle. Math Mega Bundle (For Upper Elementary/Middle School Math) PART 1 Total Pages: 186. Answer key included. Price: $19.00 (list price $24.75, you save $5.75).
1,270
4,527
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.71875
4
CC-MAIN-2018-09
latest
en
0.818254
https://learningperspectives.in/blog/what-is-inflation/
1,723,113,302,000,000,000
text/html
crawl-data/CC-MAIN-2024-33/segments/1722640726723.42/warc/CC-MAIN-20240808093647-20240808123647-00823.warc.gz
289,771,172
28,387
# What is Inflation?

Learn about Inflation with Sonu Sood.

### Movie Case Study

The scene that you saw shows Shivraj (played by Sonu Sood) having a conversation with the distributor of his product. Shivraj is quite upset with the rising prices and is requesting the distributor to increase the price of the product; in fact, he wants to increase the price by 300%. Shivraj continues to tell stories about how rising prices are affecting him. He says that his subordinates are asking for a higher salary. In this blog, Learning Perspectives will explore the meaning of Inflation.

### What is Inflation?

Inflation refers to the general rise of prices in the economy. The inflation rate is the rate at which the value of money (currency) is falling, which shows up as rising prices of goods and services. This in turn results in lower purchasing power over a period of time, and it affects the standard of living too. In the overall economy, high inflation creates uncertainty, lowers the real cost of borrowing, encourages spending and investing in real assets, and discourages saving. Similar to what Shivraj mentions in the scene, he narrates how he hasn't had his house painted in over two years due to rising prices. When prices decrease instead, the opposite situation occurs; this is called deflation. In this case, the purchasing power of money increases.

### Consumer Price Index:

Inflation is measured through a tool called the consumer price index (CPI). This index measures the changes in the average prices of goods, and these changes are compared through time. The basket includes only consumer goods; it excludes goods purchased by businesses and the government. The consumer price index is sometimes called the cost of living index too.

### The formula for the Consumer Price Index

CPI = (Cost of the basket in the current year / Cost of the basket in the base year) × 100

A basket is defined as the goods that the country considers essential to calculate this index. The most recent CPI report in India, by the Ministry of Statistics and Programme Implementation, indicates 25 goods in the goods basket. 2012 is the base year for the year 2021. The Reserve Bank of India (RBI), which is the central bank of the country, has forecast inflation of 5.7% for the January-March quarter of 2022, while for the next quarter it is forecast at 5%.

### Causes of Inflation:

Inflation occurs due to many factors; these include the economic environment of the country, financial health, and interest rates. Demand-pull inflation occurs when total spending is higher than the supply in the market. In other words, prices are pulled up by the pressure from the buyers' total expenditure. Cost-push inflation occurs when there is an increase in the cost of production irrespective of the demand conditions. Rising labor costs, raw-material prices, and equipment costs build upward pressure on prices.
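To make the CPI formula concrete, here is a small Python sketch; the basket costs are hypothetical values I chose so that the resulting rate echoes the 5.7% figure quoted above.

```python
# Illustrative CPI calculation with made-up basket costs (not official data).
base_year_cost = 1000.0     # hypothetical cost of the basket in the base year
current_year_cost = 1057.0  # hypothetical cost of the same basket today

cpi = current_year_cost / base_year_cost * 100   # index relative to base = 100
inflation_rate = cpi - 100                       # percent rise since the base year

print(f"CPI: {cpi:.1f}")                    # 105.7
print(f"Inflation: {inflation_rate:.1f}%")  # 5.7%
```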
582
2,857
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.84375
3
CC-MAIN-2024-33
latest
en
0.960262
https://zh.wikipedia.org/wiki/%E5%85%B1%E8%BD%AD%E6%A2%AF%E5%BA%A6%E6%B3%95%E7%9A%84%E6%8E%A8%E5%AF%BC
1,532,203,105,000,000,000
text/html
crawl-data/CC-MAIN-2018-30/segments/1531676592654.99/warc/CC-MAIN-20180721184238-20180721204238-00411.warc.gz
1,071,178,031
25,438
# Derivation of the conjugate gradient method

Consider the linear system

${\displaystyle {\boldsymbol {Ax}}={\boldsymbol {b}}}$

## Derivation from the Arnoldi/Lanczos iteration

### The general Arnoldi method

The Arnoldi iteration starts from a vector ${\displaystyle {\boldsymbol {r}}_{0}}$ and builds an orthonormal basis by defining ${\displaystyle {\boldsymbol {v}}_{i}={\boldsymbol {w}}_{i}/\lVert {\boldsymbol {w}}_{i}\rVert _{2}}$, where

${\displaystyle {\boldsymbol {w}}_{i}={\begin{cases}{\boldsymbol {r}}_{0}&{\text{if }}i=1{\text{,}}\\{\boldsymbol {Av}}_{i-1}-\sum _{j=1}^{i-1}({\boldsymbol {v}}_{j}^{\mathrm {T} }{\boldsymbol {Av}}_{i-1}){\boldsymbol {v}}_{j}&{\text{if }}i>1{\text{,}}\end{cases}}}$

The vectors ${\displaystyle {\boldsymbol {v}}_{1},{\boldsymbol {v}}_{2},\ldots }$ form an orthonormal basis of the Krylov subspace

${\displaystyle {\mathcal {K}}({\boldsymbol {A}},{\boldsymbol {r}}_{0})=\{{\boldsymbol {r}}_{0},{\boldsymbol {Ar}}_{0},{\boldsymbol {A}}^{2}{\boldsymbol {r}}_{0},\ldots \}}$

In matrix form, the iteration can be written as

${\displaystyle {\boldsymbol {AV}}_{i}={\boldsymbol {V}}_{i+1}{\boldsymbol {\tilde {H}}}_{i}{\text{,}}}$

where

{\displaystyle {\begin{aligned}{\boldsymbol {V}}_{i}&={\begin{bmatrix}{\boldsymbol {v}}_{1}&{\boldsymbol {v}}_{2}&\cdots &{\boldsymbol {v}}_{i}\end{bmatrix}}{\text{,}}\\{\boldsymbol {\tilde {H}}}_{i}&={\begin{bmatrix}h_{11}&h_{12}&h_{13}&\cdots &h_{1,i}\\h_{21}&h_{22}&h_{23}&\cdots &h_{2,i}\\&h_{32}&h_{33}&\cdots &h_{3,i}\\&&\ddots &\ddots &\vdots \\&&&h_{i,i-1}&h_{i,i}\\&&&&h_{i+1,i}\end{bmatrix}}={\begin{bmatrix}{\boldsymbol {H}}_{i}\\h_{i+1,i}{\boldsymbol {e}}_{i}^{\mathrm {T} }\end{bmatrix}}{\text{,}}\\h_{ji}&={\begin{cases}{\boldsymbol {v}}_{j}^{\mathrm {T} }{\boldsymbol {Av}}_{i}&{\text{if }}j\leq i{\text{,}}\\\lVert {\boldsymbol {w}}_{i+1}\rVert _{2}&{\text{if }}j=i+1{\text{,}}\\0&{\text{if }}j>i+1{\text{.}}\end{cases}}\end{aligned}}}

### The direct Lanczos method

When ${\displaystyle {\boldsymbol {A}}}$ is symmetric, ${\displaystyle {\boldsymbol {H}}_{i}}$ reduces to the symmetric tridiagonal matrix

${\displaystyle {\boldsymbol {H}}_{i}={\begin{bmatrix}a_{1}&b_{2}\\b_{2}&a_{2}&b_{3}\\&\ddots &\ddots &\ddots \\&&b_{i-1}&a_{i-1}&b_{i}\\&&&b_{i}&a_{i}\end{bmatrix}}{\text{.}}}$

Its LU factorization is

${\displaystyle {\boldsymbol {H}}_{i}={\boldsymbol {L}}_{i}{\boldsymbol {U}}_{i}={\begin{bmatrix}1\\c_{2}&1\\&\ddots &\ddots \\&&c_{i-1}&1\\&&&c_{i}&1\end{bmatrix}}{\begin{bmatrix}d_{1}&b_{2}\\&d_{2}&b_{3}\\&&\ddots &\ddots \\&&&d_{i-1}&b_{i}\\&&&&d_{i}\end{bmatrix}}{\text{,}}}$

where

{\displaystyle {\begin{aligned}c_{i}&=b_{i}/d_{i-1}{\text{,}}\\d_{i}&={\begin{cases}a_{1}&{\text{if }}i=1{\text{,}}\\a_{i}-c_{i}b_{i}&{\text{if }}i>1{\text{.}}\end{cases}}\end{aligned}}}

The iterate can then be written as

{\displaystyle {\begin{aligned}{\boldsymbol {x}}_{i}&={\boldsymbol {x}}_{0}+{\boldsymbol {V}}_{i}{\boldsymbol {H}}_{i}^{-1}(\lVert {\boldsymbol {r}}_{0}\rVert _{2}{\boldsymbol {e}}_{1})\\&={\boldsymbol {x}}_{0}+{\boldsymbol {V}}_{i}{\boldsymbol {U}}_{i}^{-1}{\boldsymbol {L}}_{i}^{-1}(\lVert {\boldsymbol {r}}_{0}\rVert _{2}{\boldsymbol {e}}_{1})\\&={\boldsymbol {x}}_{0}+{\boldsymbol {P}}_{i}{\boldsymbol {z}}_{i}{\text{,}}\end{aligned}}}

where

{\displaystyle {\begin{aligned}{\boldsymbol {P}}_{i}&={\boldsymbol {V}}_{i}{\boldsymbol {U}}_{i}^{-1}{\text{,}}\\{\boldsymbol {z}}_{i}&={\boldsymbol {L}}_{i}^{-1}(\lVert {\boldsymbol {r}}_{0}\rVert _{2}{\boldsymbol {e}}_{1}){\text{.}}\end{aligned}}}

Partitioning off the last column of ${\displaystyle {\boldsymbol {P}}_{i}}$ and the last entry of ${\displaystyle {\boldsymbol {z}}_{i}}$,

{\displaystyle {\begin{aligned}{\boldsymbol {P}}_{i}&={\begin{bmatrix}{\boldsymbol {P}}_{i-1}&{\boldsymbol {p}}_{i}\end{bmatrix}}{\text{,}}\\{\boldsymbol {z}}_{i}&={\begin{bmatrix}{\boldsymbol {z}}_{i-1}\\\zeta _{i}\end{bmatrix}}{\text{.}}\end{aligned}}}

with

{\displaystyle {\begin{aligned}{\boldsymbol {p}}_{i}&={\frac {1}{d_{i}}}({\boldsymbol {v}}_{i}-b_{i}{\boldsymbol {p}}_{i-1}){\text{,}}\\\zeta _{i}&=-c_{i}\zeta _{i-1}{\text{.}}\end{aligned}}}

This yields the update

{\displaystyle {\begin{aligned}{\boldsymbol {x}}_{i}&={\boldsymbol {x}}_{0}+{\boldsymbol {P}}_{i}{\boldsymbol {z}}_{i}\\&={\boldsymbol {x}}_{0}+{\boldsymbol {P}}_{i-1}{\boldsymbol {z}}_{i-1}+\zeta _{i}{\boldsymbol {p}}_{i}\\&={\boldsymbol {x}}_{i-1}+\zeta _{i}{\boldsymbol {p}}_{i}{\text{.}}\end{aligned}}}

### Deriving the conjugate gradient method from orthogonality and conjugacy

The conjugate gradient method uses the recursions

{\displaystyle {\begin{aligned}{\boldsymbol {x}}_{i}&={\boldsymbol {x}}_{i-1}+\alpha _{i-1}{\boldsymbol {p}}_{i-1}{\text{,}}\\{\boldsymbol {r}}_{i}&={\boldsymbol {r}}_{i-1}-\alpha _{i-1}{\boldsymbol {Ap}}_{i-1}{\text{,}}\\{\boldsymbol {p}}_{i}&={\boldsymbol {r}}_{i}+\beta _{i-1}{\boldsymbol {p}}_{i-1}{\text{.}}\end{aligned}}}

together with the orthogonality and conjugacy conditions

{\displaystyle {\begin{aligned}{\boldsymbol {r}}_{i}^{\mathrm {T} }{\boldsymbol {r}}_{j}&=0{\text{,}}\\{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{j}&=0{\text{.}}\end{aligned}}}

The residual satisfies

{\displaystyle {\begin{aligned}{\boldsymbol {r}}_{i}&={\boldsymbol {b}}-{\boldsymbol {Ax}}_{i}\\&={\boldsymbol {b}}-{\boldsymbol {A}}({\boldsymbol {x}}_{0}+{\boldsymbol {V}}_{i}{\boldsymbol {y}}_{i})\\&={\boldsymbol {r}}_{0}-{\boldsymbol {AV}}_{i}{\boldsymbol {y}}_{i}\\&={\boldsymbol {r}}_{0}-{\boldsymbol {V}}_{i+1}{\boldsymbol {\tilde {H}}}_{i}{\boldsymbol {y}}_{i}\\&={\boldsymbol {r}}_{0}-{\boldsymbol {V}}_{i}{\boldsymbol {H}}_{i}{\boldsymbol {y}}_{i}-h_{i+1,i}({\boldsymbol {e}}_{i}^{\mathrm {T} }{\boldsymbol {y}}_{i}){\boldsymbol {v}}_{i+1}\\&=\lVert {\boldsymbol {r}}_{0}\rVert _{2}{\boldsymbol {v}}_{1}-{\boldsymbol {V}}_{i}(\lVert {\boldsymbol {r}}_{0}\rVert _{2}{\boldsymbol {e}}_{1})-h_{i+1,i}({\boldsymbol {e}}_{i}^{\mathrm {T} }{\boldsymbol {y}}_{i}){\boldsymbol {v}}_{i+1}\\&=-h_{i+1,i}({\boldsymbol {e}}_{i}^{\mathrm {T} }{\boldsymbol {y}}_{i}){\boldsymbol {v}}_{i+1}{\text{.}}\end{aligned}}}

so ${\displaystyle {\boldsymbol {r}}_{i}}$ is a scalar multiple of ${\displaystyle {\boldsymbol {v}}_{i+1}}$, and the residuals are mutually orthogonal. For the search directions,

{\displaystyle {\begin{aligned}{\boldsymbol {P}}_{i}^{\mathrm {T} }{\boldsymbol {AP}}_{i}&={\boldsymbol {U}}_{i}^{-\mathrm {T} }{\boldsymbol {V}}_{i}^{\mathrm {T} }{\boldsymbol {AV}}_{i}{\boldsymbol {U}}_{i}^{-1}\\&={\boldsymbol {U}}_{i}^{-\mathrm {T} }{\boldsymbol {H}}_{i}{\boldsymbol {U}}_{i}^{-1}\\&={\boldsymbol {U}}_{i}^{-\mathrm {T} }{\boldsymbol {L}}_{i}{\boldsymbol {U}}_{i}{\boldsymbol {U}}_{i}^{-1}\\&={\boldsymbol {U}}_{i}^{-\mathrm {T} }{\boldsymbol {L}}_{i}\end{aligned}}}

which is lower triangular; since it is also symmetric, it must be diagonal, which is exactly the conjugacy condition. Imposing orthogonality of consecutive residuals gives

{\displaystyle {\begin{aligned}\alpha _{i}&={\frac {{\boldsymbol {r}}_{i}^{\mathrm {T} }{\boldsymbol {r}}_{i}}{{\boldsymbol {r}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{i}}}\\&={\frac {{\boldsymbol {r}}_{i}^{\mathrm {T} }{\boldsymbol {r}}_{i}}{({\boldsymbol {p}}_{i}-\beta _{i-1}{\boldsymbol {p}}_{i-1})^{\mathrm {T} }{\boldsymbol {Ap}}_{i}}}\\&={\frac {{\boldsymbol {r}}_{i}^{\mathrm {T} }{\boldsymbol {r}}_{i}}{{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{i}}}{\text{.}}\end{aligned}}}

and imposing conjugacy of consecutive search directions gives

{\displaystyle {\begin{aligned}\beta _{i}&=-{\frac {{\boldsymbol {r}}_{i+1}^{\mathrm {T} }{\boldsymbol {Ap}}_{i}}{{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{i}}}\\&=-{\frac {{\boldsymbol {r}}_{i+1}^{\mathrm {T} }({\boldsymbol {r}}_{i}-{\boldsymbol {r}}_{i+1})}{\alpha _{i}{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{i}}}\\&={\frac {{\boldsymbol {r}}_{i+1}^{\mathrm {T} }{\boldsymbol {r}}_{i+1}}{{\boldsymbol {r}}_{i}^{\mathrm {T} }{\boldsymbol {r}}_{i}}}{\text{.}}\end{aligned}}}

## References

1. Hestenes, M. R.; Stiefel, E. Methods of conjugate gradients for solving linear systems (PDF). Journal of Research of the National Bureau of Standards. December 1952, 49 (6). (Archived (PDF) from the original on 2010-05-05.)
2. Saad, Y. Chapter 6: Krylov Subspace Methods, Part I. Iterative methods for sparse linear systems. 2nd ed. SIAM. 2003. ISBN 978-0898715347.
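As a sanity check on the recursions above, here is a minimal NumPy sketch of the resulting conjugate gradient iteration for a symmetric positive definite system; the test matrix, tolerance, and variable names are my own illustrative choices.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Textbook CG matching the x, r, p recursions derived above."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x            # initial residual r_0
    p = r.copy()             # first search direction p_0 = r_0
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)          # alpha_i = r_i^T r_i / p_i^T A p_i
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # beta_i = r_{i+1}^T r_{i+1} / r_i^T r_i
        rs = rs_new
    return x

# Small symmetric positive definite test problem.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))   # True
```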
3,046
6,983
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 63, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.09375
4
CC-MAIN-2018-30
latest
en
0.322415
https://www.mathmammoth.com/videos/rational_numbers/many_operations_rational_numbers
1,603,246,165,000,000,000
text/html
crawl-data/CC-MAIN-2020-45/segments/1603107874637.23/warc/CC-MAIN-20201021010156-20201021040156-00455.warc.gz
811,425,836
9,996
# Many operations with negative fractions & decimals (rational numbers) - pre-algebra lesson

I solve various example problems involving negative fractions & decimals and several operations — including complex fractions. For example, you will learn how to simplify the complex fraction 2/5 over 3/7. It is simply a fraction division problem. Another example is the calculation 8 3/4 over 5/8, and then subtract 1 1/4.

Dividing fractions & decimals, including negative ones — video lesson

Math Mammoth Rational Numbers — a short workbook where you can find worksheets to match this lesson.

Math Mammoth Grade 7 curriculum (pre-algebra)

Back to rational numbers videos index
Back to pre-algebra videos index
Back to the index of all videos
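Since a complex fraction is just a division, the examples named in this lesson can be checked with Python's exact Fraction type; the snippet is my own illustration, not part of the lesson materials.

```python
from fractions import Fraction

# The complex fraction (2/5) over (3/7) is the division (2/5) ÷ (3/7).
print(Fraction(2, 5) / Fraction(3, 7))    # 14/15

# A negative variant, since the lesson covers negative fractions too.
print(Fraction(-2, 5) / Fraction(3, 7))   # -14/15

# 8 3/4 over 5/8, then subtract 1 1/4.
result = (Fraction(8) + Fraction(3, 4)) / Fraction(5, 8) \
         - (Fraction(1) + Fraction(1, 4))
print(result)                             # 51/4, i.e. 12 3/4
```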
442
2,096
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.640625
3
CC-MAIN-2020-45
longest
en
0.875844
https://www.ncertbooks.guru/mcq-questions-for-class-9-maths-chapter-5-with-answers/
1,695,927,075,000,000,000
text/html
crawl-data/CC-MAIN-2023-40/segments/1695233510427.16/warc/CC-MAIN-20230928162907-20230928192907-00337.warc.gz
987,905,258
16,755
# MCQ Questions for Class 9 Maths Chapter 5 Introduction to Euclid’s Geometry with Answers

## MCQ Questions for Class 9 Maths Chapter 5 Introduction to Euclid’s Geometry with Answers

MCQs from Class 9 Maths Chapter 5 – Introduction to Euclid’s Geometry are provided here to help students prepare for their upcoming Maths exam. MCQs from CBSE Class 9 Maths Chapter 5: Introduction to Euclid’s Geometry

1. Which of the following statements are true? a) Only one line can pass through a single point. b) There is an infinite number of lines which pass through two distinct points. c) A terminated line can be produced indefinitely on both the sides d) If two circles are equal, then their radii are unequal. Explanation: A line that is terminated can be indefinitely produced on both sides, as a line can be extended on both its sides infinitely.
2. A solid has __________ dimensions. a) One b) Two c) Three d) Zero Explanation: A solid is a three-dimensional object.
3. A point has _______ dimension. a) One b) Two c) Three d) Zero Explanation: A point is always dimensionless.
4. The shape of the base of a pyramid is: a) Triangle b) Square c) Rectangle d) Any polygon Explanation: A pyramid base could have any polygon shape.
5. The boundaries of solids are called: a) Surfaces b) Curves c) Lines d) Points (a)
6. A surface of a shape has: c) Length and thickness only (b)
7. The edges of the surface are: a) Points b) Curves c) Lines d) None of the above (c)
8. Which of these statements do not satisfy Euclid’s axiom? a) Things which are equal to the same thing are equal to one another b) If equals are added to equals, the wholes are equal. c) If equals are subtracted from equals, the remainders are equal. d) The whole is lesser than the part. (d)
9. The line drawn from the center of the circle to any point on its circumference is called: a) Radius b) Diameter c) Sector d) Arc (a)
10. There are ________ number of Euclid’s Postulates a) Three b) Four c) Five d) Six (c)
11. The number of dimensions a solid has is: a) 1 b) 2 c) 3 d) 0 (c)
12. Boundaries of solids are: a) Surfaces b) Curves c) Lines d) Points (a)
13. The number of dimensions that a point has is: a) 0 b) 1 c) 2 d) 3 (a)
14. The base of a pyramid is: a) Only a triangle b) Only a square c) Only a rectangle d) Any polygon (d)
15. The first known proof that ‘the circle is bisected by its diameter’ was given by: a) Pythagoras b) Thales c) Euclid d) Hypatia (b)
16. If x + y = 10 then x + y + z = 10 + z. Then the Euclid’s axiom that illustrates this statement is: a) First axiom b) Second axiom c) Third axiom d) Fourth axiom (b)
17. Greeks emphasized on: a) Public worship b) Household rituals c) Both a and b d) None of a, b and c (a)
18. In ancient India, the shapes of altars used for household rituals were: a) Squares and circles b) Triangles and rectangles c) Trapeziums and pyramids d) Rectangles and squares (a)
19. ‘Lines are parallel if they do not intersect’ is stated in the form of: a) An axiom b) A definition c) A postulate d) A proof (b)
20. The number of interwoven isosceles triangles in Sriyantra (in the Atharvaveda) is: a) 7 b) 8 c) 9 d) 10 (c)
21. For every line ‘l’ and a point P not lying on it, the number of lines that pass through P and parallel to ‘l’ is: a) 1 b) 2 c) 3 d) No line (a)
22. The total number of propositions in Euclid’s famous treatise “The Elements” is: a) 13 b) 55 c) 460 d) 465 (d)
23. The number of Euclid’s postulates is (are): a) 3 b) 4 c) 5 d) 6 (c)
24. Statements proved by deductive reasoning, using postulates and axioms, are known as: a) A Statement only b) A Proposition only c) A Theorem only d) Both Proposition and Theorem (d)
25. John is of the same age as Mohan. Ram is also of the same age as Mohan. State the Euclid’s axiom that illustrates the relative ages of John and Ram (a) First Axiom (b) Second Axiom (c) Third Axiom (d) Fourth Axiom
1,191
3,923
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.90625
4
CC-MAIN-2023-40
longest
en
0.889206
https://mns.org.il/prime-numbers-the-search-the-discovery/
1,723,194,145,000,000,000
text/html
crawl-data/CC-MAIN-2024-33/segments/1722640762343.50/warc/CC-MAIN-20240809075530-20240809105530-00421.warc.gz
317,341,263
6,270
# Prime Numbers - the Search and the Discovery

The main mathematical news

1976: The use of prime numbers in cryptography.
2008: The discovery of the first prime number with more than 10 million digits.
2018: The discovery of the 51st Mersenne prime, with more than 24 million digits.

To the MNS presentation

Additional Theorems / conjectures / Open questions

* There are infinitely many prime numbers.
* Mersenne's (refuted) conjecture: 2^p - 1 is prime for p = 2, 3, 5, 7, 13, 17, 19, 31, 67, 127, 257
* Is the set of Mersenne primes infinite?
* Many Mersenne numbers are composite.
* Many primes aren't Mersenne numbers.
* The number M of the form M = n^2 - n + 41 is prime for any natural number n between 1 and 40.
* Is there a "formula" that generates all and only prime numbers???
* Perfect numbers and their relation to Mersenne primes.

To the MNS presentation

The main mathematical concepts / Principles

Number theory (MSC2010#97F60)
* Prime/Composite (natural) number
* Mersenne prime
* Perfect numbers

Logic (MSC2010#97E30)
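The claims in the list above are easy to verify for small cases; the Python sketch below (my own, using naive trial division) checks the n^2 - n + 41 polynomial, exhibits a composite Mersenne number, and builds the perfect numbers tied to Mersenne primes.

```python
def is_prime(n: int) -> bool:
    """Plain trial division; adequate for the small checks below."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# n^2 - n + 41 is prime for n = 1..40 ...
assert all(is_prime(n * n - n + 41) for n in range(1, 41))
# ... but fails at n = 41, where it equals 41^2.
assert not is_prime(41 * 41 - 41 + 41)

# Mersenne numbers 2^p - 1 with prime exponent p are not all prime:
for p in [2, 3, 5, 7, 11, 13, 17, 19, 31]:
    print(p, is_prime(2**p - 1))   # p = 11 gives False: 2047 = 23 * 89

# Euclid's construction: if 2^p - 1 is prime, then 2^(p-1) * (2^p - 1) is perfect.
for p in [2, 3, 5, 7]:
    perfect = 2**(p - 1) * (2**p - 1)
    assert sum(d for d in range(1, perfect) if perfect % d == 0) == perfect
    print(perfect)   # 6, 28, 496, 8128
```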
295
1,044
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.046875
3
CC-MAIN-2024-33
latest
en
0.676162
http://www.scribd.com/doc/27439048/r05211002-Electrical-Technology
1,409,431,751,000,000,000
text/html
crawl-data/CC-MAIN-2014-35/segments/1408500835699.86/warc/CC-MAIN-20140820021355-00365-ip-10-180-136-8.ec2.internal.warc.gz
577,881,565
32,235
# r05211002 Electrical Technology

R05 SET-1 P.CODE: 33116
II.B.TECH - I SUPPLEMENTARY EXAMINATIONS NOV/DEC, 2009
ELECTRICAL TECHNOLOGY (Com. to EIE, BME, E.CONT.E, E.COMP.E, ICE)
Time: 3 hours Max. Marks: 80
All questions carry equal marks

1.a) Draw the O.C.C and load characteristics of a d.c compound generator. b) A 4 pole, wave wound armature has 800 conductors, a flux per pole of 30 mWb and runs at 650 rpm. Calculate the e.m.f generated on open circuit. [8+8]
2.a) State the advantages and disadvantages of Swinburne's test conducted on a d.c shunt machine. b) Give the applications of d.c shunt, series and compound motors. c) What happens if the back emf is zero at the time of starting the motor? [16]
3.a) Why does the primary of the transformer draw current from the mains when the secondary is not carrying any load (open circuit)? b) A 500/250V, 50Hz, 1-φ transformer is to be worked at a maximum flux density of 1.5 Wb/m² in the core. The effective cross-sectional area of the core is 90 cm². Calculate the suitable values of the primary and secondary turns. [8+8]
4. O.C and S.C tests on a 5KVA, 220/400V, 50Hz, 1-φ transformer gave the following results: O.C test: 220V, 2A, 100W (l.v. side); S.C test: 40V, 11.4A, 200W (h.v. side). Find the efficiency and approximate regulation of the transformer at full load, 0.8 p.f lagging. [16]
5.a) A 4 pole, 50Hz, 3-ph I.M has a rotor resistance of 0.02 Ω per phase and standstill reactance of 0.5 Ω per phase. Determine the speed at which the maximum torque is developed. b) With a neat sketch, explain the construction and principle of operation of a 3-phase wound induction motor. [8+8]
6.a) Explain the essential differences between cylindrical and salient pole type rotors used in large alternators. b) In an alternator, explain why the S.C characteristic is a straight line whereas the O.C characteristic is a curve. [8+8]
7.a) Draw the schematic diagram of a two phase A.C servomotor. Explain its principle of working and also draw the torque-speed characteristics. b) State the applications of Synchro. [8+8]
8.a) Compare M.C instruments with M.I instruments in any four aspects. b) Explain the following with respect to indicating instruments: i) Deflecting torque. ii) Controlling torque. iii) Damping torque. [8+8]

R05 SET-2 P.CODE: 33116
II.B.TECH - I SEMESTER SUPPLEMENTARY EXAMINATIONS NOV/DEC, 2009
ELECTRICAL TECHNOLOGY (Com. to EIE, BME, E.CONT.E, E.COMP.E, ICE)
Time: 3 hours Max. Marks: 80
All questions carry equal marks

1.a) List out the differences between a separately excited D.C generator and a self excited d.c generator. b) Explain the function of the commutator in the case of a d.c generator and a d.c motor with a neat sketch. [8+8]
2.a) What are the different methods of speed control used in d.c motors? State their relative merits and demerits. b) A 200V d.c shunt motor takes 45A when running at 750 rpm. It has an armature resistance of 0.15 Ω. Determine the speed and armature current if the field flux is weakened by 15%. Neglect the brush contact drop. [8+8]
3.a) Give the constructional details of core type and shell type single phase transformers. b) A single phase, 250/500V transformer gave the following results: O.C test: 250V, 1A, 80W on l.v. side; S.C test: 20V, 12A, 100 watts on h.v. side. Calculate the circuit constants and show them on an equivalent circuit. [8+8]
4.a) Explain the variation of efficiency with power factor in the case of a single phase transformer. b) In a transformer, the core loss is 100 watts at 60Hz and 72 watts at 40Hz. Find the hysteresis and eddy current loss at 50Hz. c) Define the regulation of a transformer. What is its significance? [16]
5. Why are starters necessary for starting induction motors? Explain different starting methods of 3-phase I.M. [16]
6.a) Obtain the expressions for distribution and coil span factors of an alternator. b) A 400V, 50KVA, 50Hz, 3-phase, star connected alternator has an armature effective resistance of 0.15 Ω per phase. An excitation of 2.5A produces on open circuit an e.m.f of 130V (line). The same excitation produces a current of 90A on short circuit. Calculate the full-load voltage regulation of the alternator at 0.8 pf lagging. [8+8]
7.a) Draw a torque-speed curve of a single phase I.M on the basis of double field revolving theory. b) Which type of motor would you use in the following applications? i) Washing machine. ii) Portable electric machine. iii) Food mixer. iv) Refrigerator. State your reason. [8+8]
8.a) Write the methods to extend the range of an ammeter and voltmeter. b) Explain the principle of working of an attraction type M.I instrument with a neat sketch. [8+8]
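For question 1(b) of the first paper, the standard d.c. machine e.m.f. equation E = φZN/60 × P/A gives the answer directly; the Python below is my worked illustration, assuming A = 2 parallel paths for a wave winding.

```python
# E = (phi * Z * N / 60) * (P / A), the standard d.c. machine e.m.f. equation.
phi = 30e-3   # flux per pole, Wb
Z = 800       # armature conductors
N = 650       # speed, rpm
P = 4         # poles
A = 2         # parallel paths (wave winding)

E = phi * Z * N / 60 * P / A
print(f"Generated e.m.f. on open circuit: {E:.0f} V")   # 520 V
```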
1,454
5,098
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.625
3
CC-MAIN-2014-35
latest
en
0.760853
https://blogs.glowscotland.org.uk/nl/bantonbiggies/2023/11/
1,721,444,580,000,000,000
text/html
crawl-data/CC-MAIN-2024-30/segments/1720763514981.25/warc/CC-MAIN-20240720021925-20240720051925-00736.warc.gz
113,356,943
26,763
## Outdoor Learning Day

The day started with a touch of frost. Despite this we tackled the Bee hotel wild flower beds. These have become full of grass with no flowers. We have dug out the grass, and will cover them up over the winter. In spring we will seed some more wild flowers. We added some sand to make the soil a bit less rich and covered them, which will hopefully kill any remaining grass to make room for the flowers.

## Camera Stand challenge

We feed the birds in the playground and have a NatureWatch camera to take photos of our visitors. The class were challenged to make a better camera stand that could be adjustable and move around on a tricky slope.

## Estimating Weight

Next we worked on some estimating weight for maths. Everybody got a small container and filled it with sand. They then weighed their own box but did not tell anyone else. The children then estimated all of each other's boxes. We will calculate the best estimator when we are doing some inside maths. There are some accurate estimates.

## Photos & Poems

In the afternoon we took our iPads out to catch some photos of the winter light; it was a lovely bright, but cold day. We also wrote some haiku inspired by the season.

## Firepit

We ended the day reading our haiku round the fire pit and finishing off with some marshmallow s'mores.

We have started a new STEM project: building some Steady Hand games for our Christmas Fayre.

Yesterday me and my class were making steady hand games in the makerspace for the Christmas fayre. The games that buzz at you when you touch a shaped wire with a hoop. Nathaniel

We were making a steady hand game it is a game you need to get all the way across if it beeps you are out. We were in the maker space and mr j wanted us to make a cool game. We made it because all the classes need to make thing for the Christmas fair so people would pay 10p to play and if they win the game you will get a sweet. Alex

I also used my skills to make a box which is stable enough which was done using the make-do so that I can connect the metal wire to the box and have it not fall over. I have also included a light and buzzer but doing that made the buzzer take all the power and the light wouldn't turn on and the buzzer barely went. Harry

I was with Alexia we had some mistakes like when we were duck taping the crocodile clips but the thing kept falling off it was frustrating but we got to the end eventually we just have to connect the wire. Faith

In the MakerSpace me and my partner learned how to solve problems, one of the main problems we had was when one of the wires was not working, although it was tedious my partner and I got all the wires tested and turns out the one to actually play the game with was broken, when we found that out we had to swap the wire and wire the game up and it was worth it because it worked really well afterwards. Olivia

The pupils worked really well with their partners, using their experience with circuits and working with cardboard. I am looking forward to seeing them when they are finished. We have a couple more stages to go and will be making other games for the attendees of our Christmas Fayre.

Yesterday the class joined in with the Create-a-long session broadcast in Teams by Digital NL and Education Scotland as part of #CSScotland23. We had a bit of bother with the sound, so had to get caught up this morning. The session showed how to make a simple maze game with Microsoft MakeCode Arcade.

The class changed the games a little to have a Christmas theme; we hope to use them at our Christmas Fayre. I was impressed by how easily the class managed to work around any problems. Their experience with micro:bit coding and scratch helped.

After we finished the game we started loading them onto our kitronic arcade devices. This took a bit of patience as our AirDrop has become flaky recently. The children have lots of ideas on how to extend the games and are exploring the tutorials to find other ones that we can use.

Here are links to some of the games. I've not got links for them all yet.

## Banton Mill

Last Friday we went on a trip to Banton Mill. The trip was organised by Mr Carter, Mr Morecroft & Mr Barrie. Mr Barrie is the owner of the mill, which is now the headquarters of Calders and the home of around 30 businesses. Mr Carter & Mr Morecroft are local history experts.

We learnt about how the mill operated, the water wheel and how it connected to the machinery. We also found out a lot about mill workers and the way the building had changed. We had a tour underneath the mill to see where the water wheel used to turn, and the remains of the old spinning rooms.

Back in school we investigated how water wheels turn and made model pulley systems from lego. We also used the trip and work in the class to practise our note taking; the children produced some great mind-maps. Here are a couple:
1,059
4,891
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.34375
3
CC-MAIN-2024-30
latest
en
0.972243
https://www.coursehero.com/file/6471263/Fall-2009-Midterm-2/
1,513,064,525,000,000,000
text/html
crawl-data/CC-MAIN-2017-51/segments/1512948515309.5/warc/CC-MAIN-20171212060515-20171212080515-00277.warc.gz
708,740,850
87,105
Fall 2009 Midterm #2

# Fall 2009 Midterm #2 - Linear Algebra F09

(1) (5 marks) Find parametric equations for the line passing through the point P(1, 1, 2) and parallel to the two planes given below: 2x - y + 3z = 0 and x - 3y - z - 2 = 0.

(2) (4 marks) Find the steady state vector q for the following regular transition matrix: P = [ 1/4 1/5 ; 3/4 4/5 ].

(3) (4 marks) Say T : R^2 → R^2 is the linear operator defined by: a reflection about the line y = x, followed by a counter-clockwise rotation by an angle of π/2, followed by an orthogonal projection onto the y-axis. Use [T] = [T(e_1) | T(e_2)] to get the standard matrix of T. (NOTE: for full marks, you must use this theorem.)

(4) (3 marks) Let T : R^3 → R^3 be the linear operator defined by T(x_1, x_2, x_3) = (2x_1 + x_2 + 2x_3, 2x_1 - x_2 + 5x_3, 4x_1 - x_2 + 7x_3). Determine if T is one-to-one.

(5) (5 marks) Determine whether the set S = {M_1, M_2, M_3} is a basis for the vector space V of 2 × 2 symmetric matrices, where M_1 = [ 1 2 ; 2 0 ], M_2 = [ 0 -1 ; -1 0 ], M_3 = [ 0 1 ; … ]
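Question (2) can be checked numerically: a steady state vector of a regular transition matrix P satisfies Pq = q with entries summing to 1. The NumPy sketch below is my own illustration of that computation, using the matrix as reconstructed above.

```python
import numpy as np

# Solve (P - I) q = 0 together with sum(q) = 1 as a least-squares system.
P = np.array([[1/4, 1/5],
              [3/4, 4/5]])

A = np.vstack([P - np.eye(2), np.ones((1, 2))])
rhs = np.array([0.0, 0.0, 1.0])
q, *_ = np.linalg.lstsq(A, rhs, rcond=None)

print(q)                        # [0.2105... 0.7894...], i.e. [4/19, 15/19]
print(np.allclose(P @ q, q))    # True
```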
568
1,734
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.15625
4
CC-MAIN-2017-51
latest
en
0.791211
https://git.lsv.fr/fthire/LFMT/-/commit/c766a3f42a61cb27c55ded2bccf0742a5a298831.patch
1,601,213,786,000,000,000
text/plain
crawl-data/CC-MAIN-2020-40/segments/1600400279782.77/warc/CC-MAIN-20200927121105-20200927151105-00720.warc.gz
419,978,655
1,601
From c766a3f42a61cb27c55ded2bccf0742a5a298831 Mon Sep 17 00:00:00 2001 From: Gaspard Ferey Date: Fri, 20 Apr 2018 14:26:22 +0200 Subject: [PATCH] Proposed wellformedness condition. --- LaTeX/lfmt.tex | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+) diff --git a/LaTeX/lfmt.tex b/LaTeX/lfmt.tex index 0172f2d..a7ad5c7 100644 --- a/LaTeX/lfmt.tex +++ b/LaTeX/lfmt.tex @@ -35,6 +35,8 @@ We denote $$\mathcal{T}_{\mathcal{X}}$$ the free algebra over the terms with variables in $$\mathcal{X}$$. \end{definition} +Note: $$\mathcal{T}_{\mathcal{\emptyset}}$$ is the set of closed terms. + \begin{definition} We define $$\mathfrak{R}_{\mathcal{X}}$$ the set of pairs of terms: $\mathfrak{R}_{\mathcal{X}} \defn \{(t,u) \mid t,u \in \mathcal{T}_{\mathcal{X}}\}$ @@ -44,6 +46,7 @@ We denote $$Dom(\Gamma)$$ the domain of $$\Gamma$$ defines as: \begin{align*} Dom(\emptyset) &\defn \emptyset \\ + Dom(\Gamma , \mathcal{R}) &\defn Dom(\Gamma) \\ Dom(\Gamma , x : A) &\defn Dom(\Gamma) \cup \{x\} \end{align*} \end{definition} @@ -121,4 +124,23 @@ {\Gamma \vdash t: B} \end{rules} + +\newpage +\section{A simple wellformedness condition} + +One could start studying a very simple wellformedness condition. + +\begin{rules}{A (perhaps too) simple rule for relation wellformedness}{Relation WF} + \inferrule{ + \Gamma \vdash A : \ttype + \\ + \Gamma \vdash t : A + \\ + \Gamma \vdash u : A + } + {\Gamma \vdash \wf{\{(t,u)\}}} +\end{rules} + + + \end{document} -- 2.24.1
551
1,476
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.765625
3
CC-MAIN-2020-40
latest
en
0.37503
http://www.cram.com/flashcards/fractions-to-decimals-425436
1,503,095,544,000,000,000
text/html
crawl-data/CC-MAIN-2017-34/segments/1502886105187.53/warc/CC-MAIN-20170818213959-20170818233959-00424.warc.gz
509,629,589
21,330
### 5 Cards in this Set

• Front • Back

1/4 as decimal = .25
1/3 as decimal = .3333333...
1/2 as decimal = .5
1/6 as decimal = .1666666...
1 / 7 = .1428571...
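The card values can be regenerated by long division; the Python sketch below is my own illustration, not part of the card set.

```python
from fractions import Fraction

def decimal_digits(frac: Fraction, n: int = 7) -> str:
    """First n decimal digits of a fraction in [0, 1), by long division."""
    num, den = frac.numerator, frac.denominator
    digits = []
    for _ in range(n):
        num *= 10
        digits.append(str(num // den))
        num %= den
    return "0." + "".join(digits)

for a, b in [(1, 4), (1, 3), (1, 2), (1, 6), (1, 7)]:
    print(f"{a}/{b} = {decimal_digits(Fraction(a, b))}")
# 1/4 = 0.2500000, 1/3 = 0.3333333, 1/2 = 0.5000000,
# 1/6 = 0.1666666, 1/7 = 0.1428571
```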
201
641
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.671875
3
CC-MAIN-2017-34
longest
en
0.700457
https://slideplayer.com/slide/6365061/
1,632,861,204,000,000,000
text/html
crawl-data/CC-MAIN-2021-39/segments/1631780060882.17/warc/CC-MAIN-20210928184203-20210928214203-00104.warc.gz
559,265,284
22,651
# Newton’s Laws of Motion. Motion and Speed Vocabulary Words  Motion  Position  Reference point  Distance  Displacement  Speed  Average speed 

## Presentation on theme: "Newton’s Laws of Motion. Motion and Speed Vocabulary Words  Motion  Position  Reference point  Distance  Displacement  Speed  Average speed "— Presentation transcript:

Newton's Laws of Motion Motion and Speed Vocabulary Words  Motion  Position  Reference point  Distance  Displacement  Speed  Average speed  Velocity * Copy down words; define them for homework! Activities  NJ Ask Coach Lesson 6  P48-52  Discussion question and #1-4

Motion  The process of changing from one position to another Speed  How fast an object changes position  Speed = Distance/Time Calculating Speed  Julie ran from her house on Ridge Road to Lincoln School, which is 2 miles away. It took her 20 minutes to get there; what was her speed in miles per minute?  2 miles / 20 minutes = 0.1 miles/min Velocity  Describes an object's speed AND its direction

Forces and Motion Vocabulary  Force  gravity  newton  net force  Balanced forces  Unbalanced forces  Acceleration  Friction  Air resistance  Mass  Momentum Activities  NJ Ask Coach Lesson 7  P. 52-54  Discussion question and #1-4

Forces  A force is a push or a pull  Measured in newtons Gravity  The force that exists between two objects that have mass, attracting them together What is Friction?  The force that occurs when two objects rub together Four Types of Friction  Static  Sliding  Rolling  Fluid  A fluid is any matter in which the molecules can move freely, i.e. air, water, etc. Gravity  Force that exists between any two objects that have mass Gravitational Relationships

Free Fall  Free fall- the motion of a falling object when the only force acting on it is gravity  In free fall, the force of gravity is an unbalanced force, which causes an object to accelerate Free Fall  all objects in free fall accelerate at the same rate regardless of their masses!

Weight  Mass- the amount of matter in an object  Weight- measure of the force of gravity acting on an object  Weight = mass x acceleration due to gravity  Gravity on Earth is 9.8 m/s² Calculating Weight Example:  Calculate the weight of an object with a mass of 10 kg: 10 kg x 9.8 m/s² = 98 N

Momentum  A property that a moving object has because of its mass and velocity  Momentum = Mass x Velocity Calculating Momentum  What is the momentum of a 5 kg bowling ball rolling down a lane at a velocity of 7 m/s? = 5 kg x 7 m/s = 35 kg·m/s Law of Conservation of Momentum  The total amount of momentum in a group of interacting objects does not change unless outside forces act on the objects

Newton's Laws of Motion Isaac Newton not only studied Physics, but was one of the inventors of Calculus and one of History's most famous scientists

Newton's 1st Law of Motion  An object at rest will stay at rest, and an object in motion will continue to move in the same direction and the same speed unless acted upon by a force. 1st Law Example What are some examples of forces that might stop the object from moving? To make it move? Inertia  The tendency of a still or moving object to resist changes in motion

Newton's Second Law of Motion  An object acted on by a net force will accelerate in the direction of the force. The object's acceleration equals the net force divided by the mass.  Acceleration = Force/mass  Force = mass x acceleration 2nd Law Example  A grocery cart with a mass of 30 kg and a net force of 60 N acting on it will accelerate at 2 m/s² because … acceleration = force/mass: 60 N / 30 kg = 2 m/s²

Newton's Third Law of Motion  For every action force exerted on an object, the object will exert an equal and opposite reaction. 3rd Law Example  When you are swimming and you pull your arms through the water in one direction, your body moves in the opposite direction

Energy Energy, Work, Power and Heat Vocabulary  Work  Joule  Energy  Kinetic energy  Potential energy  Power Activities  NJ Ask Coach Lesson 8  P.55-59  Discussion question and #1-4

Work  Work- result of a force moving an object over a distance  The unit of measurement for work is a joule  Work = Force x Distance Potential vs. Kinetic Energy Potential Energy Kinetic Energy Power  Power is the amount of work done in a period of time  Power = Work/Time  Remember: work is measured in joules and time is measured in seconds Energy  Energy- the ability to do work or cause change  Kinetic energy- the energy an object has because it is moving  Potential energy- stored energy

Thermal Energy  Thermal energy, or heat, is the total energy of the movement of molecules in a substance Heat Transfer  There are three ways heat can be transferred:  Convection  Conduction  Radiation  Radiation is the transfer of energy via electromagnetic waves Conduction  Conduction is a situation where the heat source and heat sink are connected by matter.

Temperature vs. Heat Temperature  Temperature is a measure of the average kinetic energy level of a substance Heat  Heat is the total energy associated with the motion of molecules Measuring Temperature  Temperature can be measured in…  Fahrenheit  Celsius  Kelvin Fahrenheit  Fahrenheit is the classic English system of measuring temperatures.  Water freezes at 32 degrees Fahrenheit  Water boils at 212 degrees Fahrenheit Celsius  Celsius is the modern system of measuring temperature.  The freezing point of water is 0 degrees Celsius  the boiling point is 100 degrees Celsius Formula  Celsius = (Fahrenheit temperature - 32) * 5/9

Thermal Expansion  Thermal expansion is when a substance expands as the temperature increases  Think about it…  Have you ever been hot and sticky and felt that your clothes seemed tighter or that your hands were swelling?  This is an example of thermal expansion Contraction  The opposite of expansion is contraction  A substance will contract when heat is removed  If you remove enough heat from a gas it will become a liquid  Liquids can turn into solids with further cooling

The End!!
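The worked examples scattered through the slides reduce to one-line formulas; this Python sketch is my own illustration of them (speed = distance/time, W = mg, p = mv, a = F/m, and the Fahrenheit-to-Celsius conversion).

```python
# Re-computing the worked examples from the slides.
speed = 2 / 20                       # miles per minute: 0.1
weight = 10 * 9.8                    # newtons, for a 10 kg mass (W = m * g): 98.0
momentum = 5 * 7                     # kg·m/s, the bowling ball (p = m * v): 35
acceleration = 60 / 30               # m/s^2, the grocery cart (a = F / m): 2.0

def fahrenheit_to_celsius(f: float) -> float:
    """Celsius = (Fahrenheit - 32) * 5/9, as on the Formula slide."""
    return (f - 32) * 5 / 9

print(speed, weight, momentum, acceleration)
print(fahrenheit_to_celsius(212))    # 100.0, the boiling point of water
```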
1,659
6,297
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4
4
CC-MAIN-2021-39
latest
en
0.851771
https://nrich.maths.org/public/leg.php?code=6&cl=3&cldcmpid=534
1,527,150,748,000,000,000
text/html
crawl-data/CC-MAIN-2018-22/segments/1526794866107.79/warc/CC-MAIN-20180524073324-20180524093324-00305.warc.gz
617,007,120
10,064
# Search by Topic #### Resources tagged with Place value similar to Digit Sum: ### There are 54 results Broad Topics > Numbers and the Number System > Place value ### Subtraction Surprise ##### Stage: 2 and 3 Challenge Level: Try out some calculations. Are you surprised by the results? ### Digit Sum ##### Stage: 3 Challenge Level: What is the sum of all the digits in all the integers from one to one million? ### Skeleton ##### Stage: 3 Challenge Level: Amazing as it may seem the three fives remaining in the following 'skeleton' are sufficient to reconstruct the entire long division sum. ### Basically ##### Stage: 3 Challenge Level: The number 3723 (in base 10) is written as 123 in another base. What is that base? ### Arrange the Digits ##### Stage: 3 Challenge Level: Can you arrange the digits 1,2,3,4,5,6,7,8,9 into three 3-digit numbers such that their total is close to 1500? ##### Stage: 2 and 3 Challenge Level: Watch our videos of multiplication methods that you may not have met before. Can you make sense of them? ### X Marks the Spot ##### Stage: 3 Challenge Level: When the number x 1 x x x is multiplied by 417 this gives the answer 9 x x x 0 5 7. Find the missing digits, each of which is represented by an "x". ### Back to the Planet of Vuvv ##### Stage: 3 Challenge Level: There are two forms of counting on Vuvv - Zios count in base 3 and Zepts count in base 7. One day four of these creatures, two Zios and two Zepts, sat on the summit of a hill to count the legs of. . . . ### Just Repeat ##### Stage: 3 Challenge Level: Think of any three-digit number. Repeat the digits. The 6-digit number that you end up with is divisible by 91. Is this a coincidence? ### What an Odd Fact(or) ##### Stage: 3 Challenge Level: Can you show that 1^99 + 2^99 + 3^99 + 4^99 + 5^99 is divisible by 5? ### Pupils' Recording or Pupils Recording ##### Stage: 1, 2 and 3 This article, written for teachers, looks at the different kinds of recordings encountered in Primary Mathematics lessons and the importance of not jumping to conclusions! ### Permute It ##### Stage: 3 Challenge Level: Take the numbers 1, 2, 3, 4 and 5 and imagine them written down in every possible order to give 5 digit numbers. Find the sum of the resulting numbers. ### Phew I'm Factored ##### Stage: 4 Challenge Level: Explore the factors of the numbers which are written as 10101 in different number bases. Prove that the numbers 10201, 11011 and 10101 are composite in any base. ### Even Up ##### Stage: 3 Challenge Level: Consider all of the five digit numbers which we can form using only the digits 2, 4, 6 and 8. If these numbers are arranged in ascending order, what is the 512th number? ### Three Times Seven ##### Stage: 3 Challenge Level: A three digit number abc is always divisible by 7 when 2a+3b+c is divisible by 7. Why? ### Six Times Five ##### Stage: 3 Challenge Level: How many six digit numbers are there which DO NOT contain a 5? ### Not a Polite Question ##### Stage: 3 Challenge Level: When asked how old she was, the teacher replied: My age in years is not prime but odd and when reversed and added to my age you have a perfect square... ### Repeaters ##### Stage: 3 Challenge Level: Choose any 3 digits and make a 6 digit number by repeating the 3 digits in the same order (e.g. 594594). Explain why whatever digits you choose the number will always be divisible by 7, 11 and 13.
### Balance Power ##### Stage: 3, 4 and 5 Challenge Level: Using balancing scales what is the least number of weights needed to weigh all integer masses from 1 to 1000? Placing some of the weights in the same pan as the object how many are needed? ##### Stage: 3, 4 and 5 We are used to writing numbers in base ten, using 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. Eg. 75 means 7 tens and five units. This article explains how numbers can be written in any number base. ### What a Joke ##### Stage: 4 Challenge Level: Each letter represents a different positive digit AHHAAH / JOKE = HA What are the values of each of the letters? ### Eleven ##### Stage: 3 Challenge Level: Replace each letter with a digit to make this addition correct. ### Multiplication Magic ##### Stage: 4 Challenge Level: Given any 3 digit number you can use the given digits and name another number which is divisible by 37 (e.g. given 628 you say 628371 is divisible by 37 because you know that 6+3 = 2+7 = 8+1 = 9). . . . ### Big Powers ##### Stage: 3 and 4 Challenge Level: Three people chose this as a favourite problem. It is the sort of problem that needs thinking time - but once the connection is made it gives access to many similar ideas. ##### Stage: 3 Challenge Level: Powers of numbers behave in surprising ways. Take a look at some of these and try to explain why they are true. ### Legs Eleven ##### Stage: 3 Challenge Level: Take any four digit number. Move the first digit to the end and move the rest along. Now add your two numbers. Did you get a multiple of 11? ### Football Sum ##### Stage: 3 Challenge Level: Find the values of the nine letters in the sum: FOOT + BALL = GAME ### Tis Unique ##### Stage: 3 Challenge Level: This addition sum uses all ten digits 0, 1, 2...9 exactly once. Find the sum and show that the one you give is the only possibility. ### Lesser Digits ##### Stage: 3 Challenge Level: How many positive integers less than or equal to 4000 can be written down without using the digits 7, 8 or 9? ### Number Rules - OK ##### Stage: 4 Challenge Level: Can you convince me of each of the following: If a square number is multiplied by a square number the product is ALWAYS a square number... ### Exploring Simple Mappings ##### Stage: 3 Challenge Level: Explore the relationship between simple linear functions and their graphs. ### Latin Numbers ##### Stage: 4 Challenge Level: Can you create a Latin Square from multiples of a six digit number? ### Enriching Experience ##### Stage: 4 Challenge Level: Find the five distinct digits N, R, I, C and H in the following nomogram ### Cayley ##### Stage: 3 Challenge Level: The letters in the following addition sum represent the digits 1 ... 9. If A=3 and D=2, what number is represented by "CAYLEY"? ### Always a Multiple? ##### Stage: 3 Challenge Level: Think of a two digit number, reverse the digits, and add the numbers together. Something special happens... ### Mini-max ##### Stage: 3 Challenge Level: Consider all two digit numbers (10, 11, . . . ,99). In writing down all these numbers, which digits occur least often, and which occur most often ? What about three digit numbers, four digit numbers. . . . ### Seven Up ##### Stage: 3 Challenge Level: The number 27 is special because it is three times the sum of its digits 27 = 3 (2 + 7). Find some two digit numbers that are SEVEN times the sum of their digits (seven-up numbers)? 
### A Story about Absolutely Nothing ##### Stage: 2, 3, 4 and 5 This article for the young and old talks about the origins of our number system and the important role zero has to play in it. ### 2-digit Square ##### Stage: 4 Challenge Level: A 2-Digit number is squared. When this 2-digit number is reversed and squared, the difference between the squares is also a square. What is the 2-digit number? ### Reach 100 ##### Stage: 2 and 3 Challenge Level: Choose four different digits from 1-9 and put one in each box so that the resulting four two-digit numbers add to a total of 100. ### Back to Basics ##### Stage: 4 Challenge Level: Find b where 3723(base 10) = 123(base b). ### Cycle It ##### Stage: 3 Challenge Level: Carry out cyclic permutations of nine digit numbers containing the digits from 1 to 9 (until you get back to the first number). Prove that whatever number you choose, they will add to the same total. ### Two and Two ##### Stage: 3 Challenge Level: How many solutions can you find to this sum? Each of the different letters stands for a different number. ### Chocolate Maths ##### Stage: 3 Challenge Level: Pick the number of times a week that you eat chocolate. This number must be more than one but less than ten. Multiply this number by 2. Add 5 (for Sunday). Multiply by 50... Can you explain why it. . . . ### Quick Times ##### Stage: 3 Challenge Level: 32 x 38 = 30 x 40 + 2 x 8; 34 x 36 = 30 x 40 + 4 x 6; 56 x 54 = 50 x 60 + 6 x 4; 73 x 77 = 70 x 80 + 3 x 7 Verify and generalise if possible. ### Really Mr. Bond ##### Stage: 4 Challenge Level: 115^2 = (110 x 120) + 25, that is 13225 895^2 = (890 x 900) + 25, that is 801025 Can you explain what is happening and generalise? ##### Stage: 1, 2, 3 and 4 Nowadays the calculator is very familiar to many of us. What did people do to save time working out more difficult problems before the calculator existed? ### Plus Minus ##### Stage: 4 Challenge Level: Can you explain the surprising results Jo found when she calculated the difference between square numbers? ### Novemberish ##### Stage: 4 Challenge Level: a) A four digit number (in base 10) aabb is a perfect square. Discuss ways of systematically finding this number. (b) Prove that 11^{10}-1 is divisible by 100.
2,360
9,224
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.8125
4
CC-MAIN-2018-22
latest
en
0.868839
https://zuoyedaixie.net/%E4%BD%9C%E4%B8%9Agui-javafx%E4%BB%A3%E5%81%9A-java-css-javafx-gui/
1,720,942,050,000,000,000
text/html
crawl-data/CC-MAIN-2024-30/segments/1720763514551.8/warc/CC-MAIN-20240714063458-20240714093458-00307.warc.gz
936,699,461
18,290
# GUI assignment | JavaFX homework help | java | css – javafx gui

### javafx gui

1. [5.5 points] TAKE HOME SECTION (Feel free to tear this sheet off, and take it home with you). Implement a graphical user interface (GUI) in JavaFX for a personal financial calculator. Your financial calculator should allow the user to perform simple interest and compound interest calculations. For your reference, simple interest is calculated as: Amount = Principal * (1 + InterestRate * Time), while compound interest is calculated as: Amount = Principal * (1 + InterestRate) ^ Time. You are free to use the notes and the Javadoc on the Internet to implement this; however, you will be required to strictly adhere to the academic integrity policy. Any implementation where entire chunks of code are directly copied from similar implementations will be given a 0.

Requirements for the GUI:

(a) The GUI should at minimum contain two text fields, one for the user to enter the principal amount and one for the user to enter the time in months. These should be pre-filled with default values (you are free to choose the defaults). The GUI should have labels for both fields describing what they do. [.5 point]

(b) Since the interest rate is typically represented as a percentage, the GUI should use a slider that the user can interact with to set the interest rate. As the user slides the slider, the GUI should display the slider value. The GUI should have a label for the slider describing what it does. The user should be able to choose a value between 0 and 100 for the interest rate, and your code should convert this value to a ratio between 0 and 1. [1 point]

(c) The user should be allowed to choose between simple and compound interest calculation either by interacting with the GUI using the mouse or by pressing a keyboard key. You need to implement both choices. It is up to you what keyboard key you use, but your GUI should display to the user what key they should press and what control they should click with the mouse to switch between simple and compound interest. By default, the user should be able to calculate simple interest. [1 point]

(d) The GUI should have a button that the user clicks to compute the result and display it to a label or a text field. [.5 point]

(e) You need to use CSS to stylize the buttons and the layout element containing the controls. [.5 point]

(f) Your code should compile and run correctly. [2 points]

(g) You are free to choose any layout container that helps accomplish
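The requirements map naturally onto a small JavaFX application. For orientation, here is a minimal sketch of one possible structure, not a reference solution: the class name, control names, default values, and the choice of the C key are my own assumptions, input validation is omitted, and the CSS file itself is not included (only a style-class hook and a commented-out loading line are shown).

```
// Minimal sketch under the assumptions stated above; requires a JavaFX runtime.
import javafx.application.Application;
import javafx.geometry.Insets;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.CheckBox;
import javafx.scene.control.Label;
import javafx.scene.control.Slider;
import javafx.scene.control.TextField;
import javafx.scene.input.KeyCode;
import javafx.scene.layout.GridPane;
import javafx.stage.Stage;

public class InterestCalculatorApp extends Application {

    @Override
    public void start(Stage stage) {
        TextField principalField = new TextField("1000"); // (a) pre-filled default
        TextField monthsField = new TextField("12");      // (a) time in months

        Slider rateSlider = new Slider(0, 100, 5);        // (b) rate as a percentage
        Label rateValue = new Label("5.0 %");
        rateSlider.valueProperty().addListener((obs, oldV, newV) ->
                rateValue.setText(String.format("%.1f %%", newV.doubleValue())));

        // (c) mouse control; the scene-level key handler below is the keyboard path
        CheckBox compoundBox = new CheckBox("Compound interest (press C to toggle)");

        Label result = new Label("Result: -");
        Button compute = new Button("Compute");           // (d)
        compute.setOnAction(e -> {
            double p = Double.parseDouble(principalField.getText());
            double t = Double.parseDouble(monthsField.getText());
            double r = rateSlider.getValue() / 100.0;     // (b) percentage -> ratio
            double amount = compoundBox.isSelected()
                    ? p * Math.pow(1 + r, t)              // compound interest
                    : p * (1 + r * t);                    // simple interest (default)
            result.setText(String.format("Result: %.2f", amount));
        });

        GridPane grid = new GridPane();                   // (g) layout container
        grid.setHgap(8);
        grid.setVgap(8);
        grid.setPadding(new Insets(12));
        grid.getStyleClass().add("calc-pane");            // (e) hook for external CSS
        grid.addRow(0, new Label("Principal:"), principalField);
        grid.addRow(1, new Label("Time (months):"), monthsField);
        grid.addRow(2, new Label("Interest rate (%):"), rateSlider, rateValue);
        grid.addRow(3, compoundBox);
        grid.addRow(4, compute, result);

        Scene scene = new Scene(grid);
        // (c) keyboard alternative; a focused text field may consume the event,
        // so a real submission might prefer a key-combination accelerator
        scene.setOnKeyPressed(e -> {
            if (e.getCode() == KeyCode.C) compoundBox.setSelected(!compoundBox.isSelected());
        });
        // (e) a stylesheet would be attached like this (file not included here):
        // scene.getStylesheets().add(getClass().getResource("calc.css").toExternalForm());

        stage.setScene(scene);
        stage.setTitle("Personal Financial Calculator");
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```

A stylesheet attached that way could then target the `.calc-pane` style class and the standard `.button` selector to satisfy requirement (e).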
593
2,574
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.609375
3
CC-MAIN-2024-30
latest
en
0.911949
https://intelligentonlinetools.com/blog/tag/pandas-dataframe/
1,631,819,474,000,000,000
text/html
crawl-data/CC-MAIN-2021-39/segments/1631780053717.37/warc/CC-MAIN-20210916174455-20210916204455-00185.warc.gz
371,782,350
13,889
## Machine Learning for Correlation Data Analysis Between Food and Mood

Can sweet food affect our mood? A friend of mine was interested in whether some of his minor mood changes were caused by sugar intake from sweets such as cookies. He collected daily records and provided them to me, and in this post we will use correlation data analysis with python pandas dataframes to check the connection between food and mood. We will write a python script for this task.

## Connection Between Eating and Mental Health

Internet resources confirm that a relationship between how we feel and what we eat exists.[1] Eating sweet food is not recommended, as fluctuations in blood sugar cause mood swings and a lack of energy.[2] The information about chocolate, however, is contradictory: chocolate affects us both negatively and positively.[3] But chocolate also contains sugar. What if we eat only a small amount of sweets, and not every day – is there still a connection, and how strong is it? Data analysis can help us investigate this.

## The Problem

In this post we will estimate the correlation between sweet food and mood based on the provided daily data. Correlation means association – more precisely, it is a measure of the extent to which two variables are related.[4]

## Data

The dataset has two columns, X and Y, where:

X is how much sweet food was eaten that day, on a scale from 0 to 1, where 0 means none and 1 means the maximum value.

Y is the deviation of mood from the optimal state, on a scale from 0 to 1, where 0 means no deviation and 1 means the maximum value.

## Approach

If we calculate the correlation between the two columns of raw daily data, we get a value around 0. However, this does not show the whole picture, because the effect of the food might appear only after a few days, and a good or bad feeling can also persist for a few days after the event that caused it. So we need to average the data over several days, for both X (looking back) and Y (looking forward). Here is the diagram that explains how the data will be aggregated:

And here is how we can do this in the program:

1. For each day, take the average of X over the last N days and the average of Y over the next M days.
2. Create a pandas dataframe that holds the new moving averages for X and Y.
3. Calculate the correlation between the new X and Y data.

What should N and M be? We will try different values, from 1 to 14, and check which combination gives the highest correlation.

Here is the python code that uses a pandas dataframe to calculate the averages:

```
import numpy as np
import pandas as pd

def get_data(df_pandas, k, z):
    """Average X over the last k days (N) and Y over the next z days (M)."""
    x = np.zeros(df_pandas.shape[0])
    y = np.zeros(df_pandas.shape[0])
    for index, row in df_pandas.iterrows():
        # .loc slices are label-based and inclusive on both ends;
        # out-of-range labels at the edges are simply cut off
        x[index] = df_pandas.loc[index - k:index, 'X'].mean()
        y[index] = df_pandas.loc[index:index + z, 'Y'].mean()
    # axis must be passed by keyword in recent pandas versions
    new_df = pd.concat([pd.DataFrame(x), pd.DataFrame(y)], axis="columns")
    new_df.columns = ['X', 'Y']
    return new_df
```

## Correlation Data Analysis

For calculating the correlation we also use a pandas dataframe. Here is the code snippet for this:

```
# n, m and the empty corr_df dataframe are defined earlier in the script
for i in range(1, n):
    for j in range(1, m):
        data = get_data(df, i, j)
        corr_df.loc[i, j] = data['X'].corr(data['Y'])
print("corr_df")
print(corr_df)
```

pandas.DataFrame.corr calculates the Pearson correlation coefficient by default – a measure of the strength of the linear relationship between two variables, defined as their covariance divided by the product of their standard deviations, and ranging from -1 to 1. In our code we use this default option.[8]

## Results

After calculating the correlation coefficients we output the data in table format and plot the results on a heatmap using the seaborn module. Below is the data output and the plot.
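The plotting step itself is not shown in the post; the following is a minimal sketch of how the heatmap could be produced with seaborn, assuming `corr_df` is the table of coefficients computed above:

```
import matplotlib.pyplot as plt
import seaborn as sns

# corr_df rows = days averaged back for X (N), columns = days averaged forward for Y (M).
# Values assigned via .loc may be stored with object dtype, hence the cast to float.
sns.heatmap(corr_df.astype(float), annot=True, fmt=".2f",
            cmap="coolwarm", vmin=-1, vmax=1)
plt.xlabel("days averaged forward (Y)")
plt.ylabel("days averaged back (X)")
plt.title("Correlation between sweet food intake and mood")
plt.tight_layout()
plt.show()
```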
The max value of correlation for each column is highlighted in yellow in the data table. The input data and the full source code are available at [5],[6].

## Conclusion

We performed a correlation analysis between eating sweet food and mental health, and we confirmed that in our data example there is a moderate correlation (0.4). This correlation shows up when we use a moving average over 5 or 6 days, which corresponds to the observation that mood swings may appear several days after eating sweet food, not on the same or the next day. We also learned how to estimate the correlation between two time series variables X and Y.
923
4,196
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.125
3
CC-MAIN-2021-39
latest
en
0.956683
https://www.folkstalk.com/2022/09/javascript-random-letters-and-numbers-with-code-examples.html
1,708,930,128,000,000,000
text/html
crawl-data/CC-MAIN-2024-10/segments/1707947474653.81/warc/CC-MAIN-20240226062606-20240226092606-00495.warc.gz
773,459,882
14,859
# Javascript Random Letters And Numbers With Code Examples

In this lesson, we'll use programming to solve the JavaScript random letters and numbers puzzle. The code shown below demonstrates this.

```
function makeid() {
    var text = "";
    var possible = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";

    for (var i = 0; i < 5; i++)
        text += possible.charAt(Math.floor(Math.random() * possible.length));

    return text;
}

console.log(makeid());
```

There are a variety of approaches that can be taken to solve the same problem. Related questions and answers are discussed below.

## How do you generate a random character in JavaScript?

The Math.random() method is used to generate random characters from the specified characters (A-Z, a-z, 0-9). The for loop is used to loop through the number passed into the generateString() function. During each iteration, a random character is generated.

## How do you generate random letters in HTML?

The answer is the same `makeid()` function shown at the top of this lesson: generate the random string in JavaScript and write it into the page.

## Does JavaScript have a random number generator?

JavaScript creates pseudo-random numbers with the function Math.random(). This function takes no parameters and creates a random decimal number between 0 and 1.

## What is a random number in JavaScript?

The Math.random() function returns a floating-point, pseudo-random number that's greater than or equal to 0 and less than 1, with approximately uniform distribution over that range, which you can then scale to your desired range.

## How do you get random strings?

Using the random index number, we generate a random character from the alphabet string. We then use the StringBuilder class to append all the characters together. If we want to change the random string into lower case, we can use the toLowerCase() method of the String. (Note that this particular answer describes the equivalent approach in Java rather than JavaScript.)

## How do you shuffle a word in JavaScript?

```
String.prototype.shuffle = function () {
    var a = this.split(""),   // split the string into an array of characters
        n = a.length;

    // Fisher-Yates shuffle: swap each element with a randomly chosen earlier one
    for (var i = n - 1; i > 0; i--) {
        var j = Math.floor(Math.random() * (i + 1));
        var tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
    return a.join("");
};

console.log("the quick brown fox jumps over the lazy dog".shuffle());
```

## What is charAt in JavaScript?

charAt() is a method that returns the character at the specified index. Characters in a string are indexed from left to right. The index of the first character is 0, and the index of the last character in a string called stringName is stringName.length - 1.

## How do you use Math.random() in Java?

```
import java.lang.Math; // importing the Math class in Java

class MyClass {
    public static void main(String args[]) {
        double rand = Math.random(); // generating a random number in [0, 1)
        System.out.println(rand);    // printing the random number
    }
}
```

## What is a random string?

Random strings can be unique. Used in computing, a random string generator can also be called a random character string generator. This is an important tool if you want to generate a unique set of strings. The utility generates a sequence that lacks a pattern and is random.
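One of the answers above refers to a generateString() function without showing it; the sketch below is a hypothetical reconstruction based only on that description — the name, parameter, and character set are assumptions, not the original source:

```
// Hypothetical reconstruction of the generateString() helper described above
var characters = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";

function generateString(length) {
    var result = "";
    for (var i = 0; i < length; i++) {
        // Math.random() * characters.length lies in [0, characters.length),
        // so flooring it always produces a valid index
        result += characters.charAt(Math.floor(Math.random() * characters.length));
    }
    return result;
}

console.log(generateString(5)); // output varies on every run
```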
## How do you generate a random number from 1 to 10 in JavaScript?

We can simply use the Math.random() method to generate a random number between 1 and 10 in JavaScript. Math.random() returns a random number between 0 (inclusive) and 1 (exclusive).
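To make the scaling step concrete, here is a short sketch (the wrapper name is mine):

```
// Math.random() yields a value in [0, 1); multiplying by 10 gives [0, 10),
// Math.floor() turns that into an integer 0-9, and adding 1 shifts it to 1-10.
function randomOneToTen() {
    return Math.floor(Math.random() * 10) + 1;
}

console.log(randomOneToTen());
```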
862
3,843
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.984375
3
CC-MAIN-2024-10
latest
en
0.704532