content: stringlengths 86 to 994k
meta: stringlengths 288 to 619
Using Inequalities to Solve Optimization Problems
Question Video: Using Inequalities to Solve Optimization Problems
Mathematics • First Year of Secondary School

Charlotte wants to make dresses and suits. Each dress or suit will have the same quantity of cloth and the same number of buttons. The following inequality represents the number of dresses (𝑑) and the number of suits (𝑠) that she can make with 25 m² of cloth: 5𝑑 + 7𝑠 < 25. Additionally, the following inequality represents the number of dresses (𝑑) and the number of suits (𝑠) that she can make with 100 buttons: 12𝑑 + 18𝑠 < 100. Given that she has 25 m² of cloth and 100 buttons, does she have enough cloth to make 2 dresses and 3 suits?

Video Transcript

Charlotte wants to make dresses and suits. Each dress or suit will have the same quantity of cloth and the same number of buttons. The following inequality represents the number of dresses 𝑑 and the number of suits 𝑠 that she can make with 25 square meters of cloth: five 𝑑 plus seven 𝑠 is less than 25. Additionally, the following inequality represents the number of dresses 𝑑 and the number of suits 𝑠 that she can make with 100 buttons: 12𝑑 plus 18𝑠 is less than 100. Given that she has 25 square meters of cloth and 100 buttons, does she have enough cloth to make two dresses and three suits?

Let's begin by identifying some of the key pieces of information in this question. She's making dresses and suits. The first inequality we're interested in gives us information about the number of dresses and suits she can make with 25 square meters of cloth. It's five 𝑑 plus seven 𝑠 is less than 25. The inequality that represents the number of items she can make with 100 buttons is 12𝑑 plus 18𝑠 is less than 100. We need to use all of this information to establish whether she will have enough cloth to make two dresses and three suits. So what we'll do is draw each of these inequalities on the coordinate plane. Let's clear some space to do so.

Now, before we add these to the coordinate plane, there are two more inequalities that we need to construct. We know that the number of dresses 𝑑 that she makes and, separately, the number of suits 𝑠 that she makes cannot be negative. We can't have a negative number of items. This means that the number of dresses and the number of suits must individually be part of the set of natural numbers. In inequality form, we can write this as 𝑑 must be greater than or equal to zero and 𝑠 must be greater than or equal to zero. Since both 𝑑 and 𝑠 are nonnegative, we can consider the construction of our inequalities purely in the first quadrant as shown. Then, we can draw the lines that represent 𝑑 equals zero and 𝑠 equals zero. If we designate the 𝑥-axis to be 𝑑, the number of dresses, and the 𝑦-axis to be 𝑠, the number of suits, then the line 𝑠 equals zero is the 𝑥-axis and the line 𝑑 equals zero is the 𝑦-axis. We won't shade the whole region just yet. But we know since 𝑑 must be greater than or equal to zero, we need to shade the right-hand side of the line 𝑑 equals zero. Similarly, to represent the inequality 𝑠 is greater than or equal to zero, we shade above the line 𝑠 equals zero.

With that completed, let's now add the first inequality to our diagram. Let's begin by plotting the line five 𝑑 plus seven 𝑠 equals 25 on the coordinate plane. We want to find the points at which it intersects the 𝑑-axis and the 𝑠-axis. So we'll individually let 𝑑 and 𝑠 be equal to zero and solve for the remaining unknown.

If 𝑑 is equal to zero, our equation becomes seven 𝑠 equals 25, meaning 𝑠 is 25 over seven. This line therefore passes through the 𝑠-axis, or the 𝑦-axis, a little bit above 3.5. Then, if we let 𝑠 be equal to zero, we get five 𝑑 equals 25, which means 𝑑 equals five. And our line passes through the 𝑑-axis, or the 𝑥-axis, at five. We add this line to our diagram. And at this point, we know we have a strict inequality. Five 𝑑 plus seven 𝑠 is strictly less than 25. And so we add a dotted line in the place of five 𝑑 plus seven 𝑠 equals 25. We need to establish which side of the line satisfies this inequality. And so we choose a point that lies on either side of this line, and then we substitute it into the inequality and see if it satisfies it. It's always sensible, where possible, to choose the point zero, zero. So we let 𝑑 equal zero and 𝑠 equal zero, which means that five 𝑑 plus seven 𝑠 is also zero. This is indeed less than 25, as we required. So the side of the line that the point zero, zero lies on satisfies our inequality. We can therefore shade the entire region on this side of the line. Now, we actually won't go past the 𝑥- and 𝑦-axes because we know 𝑠 is greater than or equal to zero and 𝑑 is greater than or equal to zero.

Now, we have enough to answer the question, "Does she have enough cloth for two dresses and three suits?" To do so, we'll plot the point two, three on the coordinate plane. If it lies in the shaded region, then we know she does have enough cloth. If it doesn't, then she does not. Well, the point two, three does not lie in our shaded region. So we can confirm she does not have enough cloth for two dresses and three suits. And in fact, we can check this by substituting 𝑑 equals two and 𝑠 equals three into the inequality. When 𝑑 is two and 𝑠 is three, five 𝑑 plus seven 𝑠 is 31. This is greater than 25. So this number of dresses and suits does not satisfy the inequality.

So what was the point in drawing this on the coordinate plane then? Well, plotting inequalities on the coordinate plane allows us to answer problems that involve more than one inequality. So let's draw the graph of 12𝑑 plus 18𝑠 equals 100. The line 12𝑑 plus 18𝑠 equals 100 looks like this. Once again, we draw a dotted line because we have a strict inequality. And if we were to substitute 𝑑 equals zero and 𝑠 equals zero into our inequality, we would find it is indeed less than 100. Zero plus zero is zero, which is less than 100. So we shade the side of the line that contains the point zero, zero. We can now see that 100 buttons is enough for two dresses and three suits, since the point two, three lies on the side of the line that satisfies our inequality. We're also now able to see the region that satisfies all four of our inequalities. It's this triangle on the lower left-hand side of our diagram. We know that any point that lies inside this triangle satisfies all four of our inequalities. So, for instance, the point one, one satisfies all four inequalities, meaning that she has enough cloth and enough buttons to make one dress and one suit. However, we have done enough to answer this question. She does not have enough cloth for two dresses and three suits.
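As a quick check on the arithmetic in the transcript, here is a short Python sketch (an illustration added here, not part of the original video) that tests the point (𝑑, 𝑠) = (2, 3) against both constraints:

    def enough_resources(d, s):
        # 25 square meters of cloth and 100 buttons, as in the question
        enough_cloth = 5 * d + 7 * s < 25
        enough_buttons = 12 * d + 18 * s < 100
        return enough_cloth, enough_buttons

    cloth_ok, buttons_ok = enough_resources(2, 3)
    print(cloth_ok)    # False: 5*2 + 7*3 = 31, which is not less than 25
    print(buttons_ok)  # True: 12*2 + 18*3 = 78, which is less than 100

This matches the video's conclusion: the buttons suffice for two dresses and three suits, but the cloth does not.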
{"url":"https://www.nagwa.com/en/videos/720170639427/","timestamp":"2024-11-12T06:05:52Z","content_type":"text/html","content_length":"261254","record_id":"<urn:uuid:b5abea1d-c702-481a-b93e-7fb28224fed0>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00775.warc.gz"}
Statistical Mining in Data Streams

Title: Statistical Mining in Data Streams
• Ankur Jain
• Dissertation Defense
• Computer Science, UC Santa Barbara
• Committee: Edward Y. Chang (chair), Divyakant Agrawal, Yuan-Fang Wang

Outline
• The Data Stream Model: introduction and research issues; related work
• Data Stream Mining: stream data clustering; Bayesian reasoning for sensor stream processing
• Contribution Summary
• Future work

Data Streams
• A data stream is an unbounded and continuous sequence of tuples.
• Tuples arrive online.
• A tuple seen once cannot be easily retrieved.
• No control over the tuple arrival order.

Applications
• Sensor networks, network monitoring, text processing
• Video surveillance, stock ticker monitoring, process control in manufacturing, traffic monitoring and analysis, transaction log processing
• A traditional DBMS does not work!

Data Stream Projects
• STREAM (Stanford): a general-purpose Data Stream Management System
• Telegraph (Berkeley): adaptive query processing; TinyDB, a general-purpose sensor database
• Aurora Project (Brown/MIT): distributed stream processing; introduces new operators (map, drop, etc.)
• The Cougar Project (Cornell): sensors form a distributed database system; cross-layer optimizations (data management layer and the routing layer)
• MAIDS (UIUC): Mining Alarming Incidents in Data Streams; Streaminer: data stream mining

Data Stream Processing: Key Ingredients
• Adaptivity: incorporate evolutionary changes in the stream
• Approximation: exact results are hard to compute fast with limited memory
• A Data Stream Management System (DSMS) is the central stream processing system

Thesis Outline
• Develop fast, online, statistical methods for mining data streams:
• Adaptive non-linear clustering in multi-dimensional streams
• Bayesian reasoning for sensor stream processing
• Filtering methods for resource conservation
• Change detection in data streams
• Video sensor data stream processing

Clustering in High-Dimensional Streams
• Given a continuous sequence of points, group them into some number of clusters, such that the members of a cluster are geometrically close to each other.
• Example application: network monitoring over connection tuples (high-dimensional)

Stream Clustering: New Challenges
• One-pass restriction and limited memory: we use the fading cluster technique proposed by Aggarwal et al.
• Non-linear separation boundaries: we propose using the kernel trick to deal with the non-linearity issue
• Data dimensionality: we propose an effective incremental dimension reduction technique

The 2-Tier Framework: Adaptive Non-linear Clustering
• Tier 1: stream segmentation in the input space
• Tier 2: projection update into a q-dimensional low-dimensional space (LDS), with q < d
• The latest point received from the stream passes through the 2-tier clustering module (which uses the kernel trick) and into the fading clusters

The Fading Cluster Methodology
• Each cluster Ci has a recency value Ri such that Ri = f(t - tlast), where t is the current time and tlast is the last time Ci was updated
• f(t) = e^(-λt), where λ is the fading factor
• A cluster is erased from memory (faded) when Ri < h, where h is a user parameter
• λ controls the influence of historical data
• The total number of clusters is bounded

Non-linearity in Data
• Traditional clustering techniques (k-means) do not perform well on non-linear structure in the input space; spectral clustering methods are likely to perform better
• Feature-space mapping: in the network intrusion data (ipsweep attack data), the input space is non-linear but the feature space shows a geometrically well-behaved trend, so we use the kernel trick

The Kernel Trick
• An actual projection into a higher dimension is computationally expensive
• The kernel trick performs the non-linear projection implicitly
• Given two input-space vectors x and y: k(x, y) = <Φ(x), Φ(y)>
• The Gaussian kernel function k(x, y) = exp(-γ‖x - y‖²) was used in the previous example!

Kernel Trick: Working Example
• Let x = (x1, x2) and Φ(x) = (x1², x2², √2·x1·x2); Φ is not required explicitly
• <Φ(x), Φ(z)> = <(x1², x2², √2·x1·x2), (z1², z2², √2·z1·z2)>
= x1²z1² + x2²z2² + 2·x1·x2·z1·z2
= (x1·z1 + x2·z2)²
= <x, z>²
• The kernel trick allows us to perform operations in the high-dimensional feature space using a kernel function, without explicitly representing Φ

Dimensionality Reduction
• A PCA-like kernel method is desirable; an explicit representation via EVD is preferred
• KPCA is computationally prohibitive: O(n³)
• The principal components evolve with time, so frequent EVD updates may be necessary
• We propose to perform EVD on grouped data instead, which requires a novel kernel method

The 2-Tier Framework
• Tier 1 captures the temporal locality in a segment; a segment is a group of contiguous points in the stream that are geometrically packed closely in the feature space
• Tier 2 adaptively selects segments to project data into the LDS; the selected segments are called representative segments
• Implicit data in the feature space is projected explicitly into the LDS such that the feature-space distances are preserved

The 2-Tier Framework (processing flow)
• Obtain a point x from the stream and add x to the current segment S
• Tier 1: is (Φ(x) novel with respect to S and s > smin), or is s ...? If so, close the segment
• Tier 2: is S a representative segment? If so, add S to memory and update the LDS; otherwise clear the contents of S
• Obtain x in the LDS; if it is close to an active cluster, assign x to its nearest cluster and update cluster centers and recency values; otherwise create a new cluster with x
• Delete faded clusters

Network Intrusion Stream
• Simulated data from MIT Lincoln Labs
• 34 continuous attributes (features)
• 10.5K records
• 22 types of intrusion attacks plus 1 normal class
• Clustering accuracy at LDS dimensionality u = 10

Efficiency: EVD Computations
• Image data: 5K records, 576 features, 10 digits
• Newswire data: 3.8K records, 16.5K features, 10 news topics

In Retrospect
• We proposed an effective stream clustering framework
• We use the kernel trick to delineate non-linear boundaries efficiently
• We use a stream segmentation approach to continuously project data into a low-dimensional space

Bayesian Reasoning for Sensor Data Processing
• Users submit queries with precision constraints, e.g. "Find the temperature with 80% confidence"
• Resource conservation (data acquisition and data communication) is of prime concern to prolong system life
• Use probabilistic models at the central site for approximate predictions, avoiding actual acquisitions

Dependencies in Sensor Attributes
• Acquisition cost per attribute: Temperature 50 J, Voltage 5 J
• For a "Get Temperature" query, a dependency model (a Bayesian network) lets the system acquire the cheap voltage reading ("Acquire Voltage!") and report the temperature

Using Correlation Models (Deshpande et al., VLDB04)
• Correlation models ignore conditional independence
• Intel Lab (real sensor network data); attributes: Voltage (V), Temperature (T), Humidity (H)
• Voltage is correlated with temperature
• For Humidity in 35-40, voltage is conditionally independent of temperature, given humidity!

BN vs. Correlations
• Correlation model (Deshpande et al.): maintains all dependencies; the search space for finding the best possible alternative sensor attribute is high; the joint probability is represented in O(n²) cells
• Bayesian Network: maintains vital dependencies only; lower search complexity, O(n); storage O(nd), where d is the average node degree; intuitive dependency structure
• Datasets: NDBC Buoy dataset, Intel Lab dataset

Bayesian Networks (BN)
• Qualitative part: a Directed Acyclic Graph (DAG); nodes are sensor attributes, edges are attribute influence relationships
• Quantitative part: Conditional Probability Tables (CPTs); each node X has its own CPT, P(X | parents(X))
• Together, the BN represents the joint probability, factored as, e.g., P(T,H,V,L) = P(T) P(H|T) P(V|H) P(L|T)
• The influence relationship is represented by the conditional entropy function H: H(Xi) = -Σ_{l=1..k} P(Xi = xil) log P(Xi = xil)
• We learn the BN by minimizing H(Xi | Parents(Xi))

System Architecture
• A group query (Q) goes to the query processor, which produces an acquisition plan; the acquired values are used to answer the query

Finding the Candidate Attributes
• For any attribute in the group query Q, analyze candidate attributes in its Markov blanket
• Selection criterion: select candidates in a greedy fashion, trading information gain (conditional entropy) against acquisition cost
• Meet the precision constraints while maximizing resource conservation

Experiments: Resource Conservation
• NDBC dataset, 7 attributes
• Effect of using the MB property with δmin = 0.90
• Effect of using group queries (Q = group query)
• Results (selectivity): Wave Period (WP), Wind Speed (SP), Air Pressure (AP), Wind Direction (DR), Water Temperature (WT), Wave Height (WH), Air Temperature (AT)

In Retrospect
• Bayesian networks can encode the sensor dependencies effectively
• Our method provides significant resource conservation for group queries

Contribution Summary
• Adaptive stream resource management using Kalman filters (SIGMOD04)
• Adaptive sampling for sensor networks
• Adaptive non-linear clustering for data streams (CIKM06)
• Using stationary-dynamic camera assemblies for wide-area video surveillance and selective attention (CVPR06)
• Filtering the data streams (in submission)
• Efficient diagnostic and aggregate queries on sensor networks (in submission)
• OCODDS: an On-line Change-Over Detection framework for tracking evolutionary changes in data streams (in submission)

Future Work
• Develop non-linear techniques for capturing temporal correlations in data streams
• The Bayesian framework can be extended to address what-if queries with counterfactual evidence
• The clustering framework can be extended for developing stream visualization systems
• Incremental EVD techniques can improve the performance further

Back to Stream Clustering
• We propose a 2-tier stream clustering framework
• Tier 1: a kernel method that continuously divides the stream into segments
• Tier 2: a kernel method that uses the segments to project data into a low-dimensional space (LDS)
• The fading clusters reside in the LDS
• Network Intrusion Stream: clustering accuracy and cluster strengths at LDS dimensionality u = 10; effect of dimensionality

Query Plan Generation
• Given a group query, the query plan computes the candidate attributes that will actually be acquired to successfully address the query
• We exploit the Markov Blanket (MB) property to select candidate attributes
• Given a BN G, the Markov blanket of a node Xi comprises the node, its immediate parents and children, and its children's other parents

Exploiting the MB Property
• Given a node Xi and a set of arbitrary nodes Y in a BN (with Xi not in Y), the conditional entropy of Xi given Y is at least as high as that given its Markov blanket: H(Xi | Y) ≥ H(Xi | MB(Xi))
• Proof: separate MB(Xi) into two parts, MB1 = MB(Xi) ∩ Y and MB2 = MB(Xi) - MB1, and denote Z = Y - MB(Xi). Then
H(Xi | Y) = H(Xi | Z, MB1) (since Y = Z ∪ MB1)
≥ H(Xi | Z, MB1, MB2) (additional information cannot increase entropy)
= H(Xi | Z, MB(Xi)) (since MB1 ∪ MB2 = MB(Xi))
= H(Xi | MB(Xi)) (Markov-blanket definition)

Bayesian Reasoning: More Results
• Effect of using the MB property with δmin = 0.90
• Query answer quality loss on a 50-node synthetic-data BN

Bayesian Reasoning for Group Queries
• More accurate in addressing group queries
• Q = {(Xi, δi) : Xi ∈ X ∧ (0 < δi ≤ 1) ∧ (1 ≤ i ≤ n)} such that δi < max_l P(Xi = xil)
• X = {X1, X2, X3, ..., Xn}: the sensor attributes
• δi: confidence parameters
• P(Xi = xil): the probability with which Xi assumes the value xil
• Bayesian reasoning is helpful in detecting ...

Bayesian Reasoning: Candidate Attribute Selection Algorithm
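To make the "Kernel Trick: Working Example" slide concrete, here is a small Python sketch (an illustration added here, not part of the original deck) that checks the identity <Φ(x), Φ(z)> = <x, z>² for the 2-D mapping Φ(x) = (x1², x2², √2·x1·x2):

    import numpy as np

    def phi(v):
        # Explicit feature map from the slide: (x1^2, x2^2, sqrt(2)*x1*x2)
        return np.array([v[0] ** 2, v[1] ** 2, np.sqrt(2) * v[0] * v[1]])

    x = np.array([1.0, 2.0])
    z = np.array([3.0, 0.5])

    explicit = phi(x) @ phi(z)   # inner product computed in the feature space
    via_kernel = (x @ z) ** 2    # polynomial kernel k(x, z) = <x, z>^2
    print(explicit, via_kernel)  # both print 16.0

The same idea underlies the Gaussian kernel used in the deck: inner products (and hence distances) in the much larger feature space are computed from input-space quantities only.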
{"url":"https://www.powershow.com/view4/6fe057-YjU2M/Statistical_Mining_in_Data_Streams_powerpoint_ppt_presentation","timestamp":"2024-11-03T00:30:05Z","content_type":"application/xhtml+xml","content_length":"197738","record_id":"<urn:uuid:a3b54a85-7e1d-4b47-bfb9-4e8ed60e340f>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00026.warc.gz"}
One-hour Mandelbrot: Creating a fractal on the vintage Xerox Alto

I wrote a short program to generate the Mandelbrot set on the Xerox Alto, a groundbreaking minicomputer from the 1970s. The program, in the obsolete BCPL language, ran very slowly—taking almost exactly an hour—but the result below shows off the Alto's monochrome bitmapped display. (Bitmapped displays were a rarity at the time because memory was so expensive.)

The Xerox Alto took an hour to generate the Mandelbrot set.

The Alto was a revolutionary computer designed at Xerox PARC in 1973 to investigate personal computing. It introduced the GUI, Ethernet and laser printers to the world, among other things. In the photo above, the Alto computer itself is in the lower cabinet. The Diablo disk drive (with the 1970s orange stripe) uses a removable 14 inch disk pack that stores 2.5 megabytes of data. (A bunch of disk packs are visible behind the Alto.) The Alto's display is bitmapped with 606x808 pixels in an unusual portrait orientation, and the optical mouse is next to the display.

Last year Y Combinator received an Alto from computer visionary Alan Kay and I'm helping restore the system, along with Marc Verdiell, Luca Severini and Carl Claunch. My full set of Alto posts is here and Marc's videos are here. I haven't posted an update for a while, but now I can write new programs and download them to the Alto using the Living Computer Museum's Alto file system implementation and gateway to the Alto's 3Mb Ethernet. I decided to start with the Mandelbrot set to take advantage of the Alto's high resolution display. Marc's latest video shows the Mandelbrot program running on the Alto.

The Mandelbrot program

The Mandelbrot set algorithm is fairly simple. Each point on the plane represents a complex number c. You repeatedly iterate the complex function f(z) = z^2 + c. If the value diverges to infinity, the point is outside the set. Otherwise, the point is inside the set and the pixel is set to black. Setting the pixel is tricky because the Alto doesn't have a graphics API; you need to determine which bit in memory to set.[4]

Since the Xerox Alto doesn't support floating point,[1] I needed a way to represent the numbers with its 16-bit word. I use fixed point arithmetic: 4 bits to the left of the decimal point and 12 bits to the right.[2] For instance, the number 1.25 is represented in 16 bits as 1.25*2^12 = 0x1400. These fixed point numbers can be added with standard integer addition. After multiplying two fixed point numbers with integer multiplication, the 32-bit result must be divided by 2^12 (i.e. shifted right by 12) to restore the decimal point location.[3]

The code (above) is written in BCPL, the main language used on the Alto. BCPL is a precursor to C, and many features of C are clearly visible in BCPL: everything from lvalues and rvalues to the ternary operator. You can think of BCPL as C without types; the only BCPL types are 16-bit words, along with C-like structs, unions and bitfields. BCPL may look unfamiliar at first, but the code above should be clear if you consider the following syntax differences with C:
• Blocks are indicated with [ and ] instead of { and }.
• Indexing is with a!1 instead of a[1].
• And, Or, and Shift bit operations are &, %, and lshift/rshift.
• Variable definitions use let.
• Arrays are defined with vec.
More information on BCPL is in the BCPL Reference Manual and my earlier article on using BCPL with the Alto.

The Xerox Alto, a few minutes into generation of the Mandelbrot set.

Why is the Alto so slow?
Running the Mandelbrot set illustrates the amazing improvement in computer speed since the Alto was created in 1973, and the huge changes in computer architecture. On a modern computer, a Javascript program can generate the Mandelbrot set in a fraction of a second, while the Alto took an hour. The first factor is the Alto's slow clock speed of 5.88 MHz, hundreds of times slower than a modern processor. In addition, the Alto doesn't execute machine instructions directly, but uses a relatively inefficient microcode emulator that takes many cycles to perform one machine instruction.

The ALU board from the Xerox Alto. The Alto doesn't use a microprocessor chip, but a CPU built from three boards of integrated circuits.

Unlike modern computers, the Alto doesn't use a microprocessor chip, but instead has a CPU built from three boards full of simple TTL chips. The photo above shows the arithmetic-logic unit (ALU) board, which uses four 4-bit 74181 ALU chips to perform addition, subtraction and logic operations. You can also see the CPU's registers on this board. The Alto doesn't include a hardware multiplier, but must perform multiplication by repeated shifts and adds. Thus, the Alto performs especially poorly on the Mandelbrot set, which is essentially repeated multiplications.

The Mandelbrot set was a quick program to try out the Alto's graphics. Next I'll try some more complex projects on the Alto. If you want to run my code, it's on Github; you can run it on the ContrAlto simulator if you don't have an Alto available. If you're interested in retrocomputing fractals, I also generated a Mandelbrot on a 50 year old IBM 1401 mainframe. The 1401 generated the Mandelbrot set in 12 minutes—not because it's a faster machine than the Alto, but because the resolution on the line printer was very, very low.

Mandelbrot generated on the IBM 1401 mainframe.

Notes and references

1. There is a floating point library (source) for the Alto. I decided not to use it since the integer Mandelbrot was already very slow. But using floating point would make sense if you wanted to zoom in on the Mandelbrot. ↩
2. Fixed-point arithmetic is a common trick for fast Mandelbrot calculation. ↩
3. To multiply two 16-bit numbers efficiently, I use the double precision MulFull function (written in Nova assembler) in PressML.asm, part of the Computer History Museum's archived Alto software. ↩
4. The hardest part of generating the Alto Mandelbrot was figuring out how to configure the display memory and update it correctly. The details on how the display works are in chapter 4 of the Xerox Alto Hardware Manual. To summarize, the display contents are defined by a linked list of display control blocks (DCBs), each of which defines a rectangular region of pixels on the display. A microcode task reads 16 words of pixels from memory at a time and writes them to the display board, which shifts the pixels out to the monitor. Thus, as each scanline is being written to the CRT, the CPU is busily reading the pixels for that line from memory and feeding them to the display, another reason why the Alto is slow. The Alto's Smalltalk environment has a simple graphics API, but we don't have Smalltalk running yet. ↩

11 comments:

@David, the Alto indeed had other languages, as well as a writable control store, so a language environment or even an application could load custom microcode. But "Novacode" was widely used, e.g.: Bravo, Markup, Draw, the Interim File Server, etc.
The Cedar system, with its Cedar Mesa programming language, ran on the much faster Dorado rather than the Alto.

Thanks for another post. I saw Marc was still posting on youtube, so I was wondering if you would keep blogging about it. The Mandelbrot set was a great idea to showcase the bit mapping capabilities of the Alto, which were definitely ahead of its time. I don't know about faster programming languages for the Alto, but you could probably greatly decrease the time required by disabling the screen redrawing subroutine (if possible) until the program is finished.

@Lord, the screen redrawing is done in microcode, and you are correct that it slows things down -- by about a factor of three (see http://bwlampson.site/38-AltoSoftware/ThackerAltoHardware.pdf). It could be shut off during the computation phase by moving the statement "lvdas!0 = dcb" down to the end.

If I understand the Nova instruction set properly, lshift n and rshift n must be performed by multiple single bit shifts, and the Alto implementation of the BCPL (aka Nova) instruction set doesn't extend it with multiple bit shifts. This implies that the code for lshift n and rshift n is either unwound into 16 shifts in total, which is fairly slow, or implemented as a call to a library routine involving a loop, which would be even slower. So, replacing the lshift and rshift n by a division could be faster if we use the microcoded single precision div: {iiii:ffffffffffff} * {iiii:ffffffffffff} => {IIIIIIII:FFFFFFFF}:{FFFFFFFFFFFFFFFF} / 4096. The standard BCPL division is signed; however, PressML actually includes a MulDiv operation which performs unsigned multiply then divide using the microcode instructions with an intermediate 32-bit value. This should work if we manually adjust for signed arithmetic. Thus we can do:

let x2sp = (x ls 0) ? -x,x // x^2 will be +ve.
x2sp = MulDiv(x2sp,x2sp,4096) // yield x^2 single precision.
let y2sp = (y ls 0) ? -y,y // y^2 will be +ve
y2sp = MulDiv(y,y,4096) // yield = y^2, single precision
if x2sp + y2sp ge 16384 then break // Quit if x^2 + y^2 > 4

in place of:

MulFull(y, y, y2) // y2 = y*y. Integer multiplication, y2 is double word.
MulFull(x, x, x2) // x2 = x*x
if x2!0 + y2!0 ge 1024 then break // Quit if x^2 + y^2 > 4
if n eq 20 then // Last iteration. Still inside set, so set pixel.
let adr = (200 + ypos) * 38 + h // 200 blank pixels at top, 38 words per line
v!adr = v!adr % (1 lshift (15-b))
// Convert to single precision by dropping 12 bits.
// rshift 12 = lshift top word 4
let x2sp = (x2!0 lshift 4) % (x2!1 rshift 12) // x2sp = x^2, single precision
let y2sp = (y2!0 lshift 4) % (y2!1 rshift 12) // y2sp = y^2, single precision

and later, instead of:

MulFull(x, y, xy) // xy = x*y
let xysp = (xy!0 lshift 4) % (xy!1 rshift 12) // xysp = x*y, single precision

we would have:

let xysgn=x xor y // top bit is sign for result.
if x<0 then x=-x
if y<0 then y=-y
x=MulDiv(x,y,4096) // the unsigned version.
if xysgn<0 then x=-x // correct the sign.

In addition, the double checking of the exit conditions surely makes things slower. I would do:

// ... as before
n=n-1 // dsz, nop.
]repeatuntil n ls 0

The display update code is still fairly slow owing to the multiply and the shift:

if n ls 0 then // Still inside set, so set pixel.
let adr = (200 + ypos) * 38 + h // 200 blank pixels at top, 38 words per line
v!adr = v!adr % (1 lshift (15-b))

We don't use v again, so I'd set it to the right address at the outer loop:

v=v+200*38 // start 200 scans down.
let cy=y0...
I'd increment it after the end of the b loop:

for b = 0 to 15 do // horizontal bit count
... as before.
v=v+1 // isz nop.

Since it's addressed linearly, this takes care of both the *38 and +h. Lastly, I'd use a simple bitmask for handling the pixel, and because the only use for b is the loop counter it makes that part of the code including the end of the n loop:

let b=32768 // horizontal bit mask
// ... as before
n=n-1 // dsz, nop.
]repeatuntil n ls 0
if n ls 0 then v!0=v!0 % b // set the pixel
b=b rshift 1
]repeatwhile b

-cheers from Julz

get "streams.d"
let Main() be
let v = vec 30705 // Pixel buffer
v = (v + 1) & -2 // Data needs to be 32-bit aligned
let dcb = vec 5 // Display control block: defines display region
dcb = (dcb + 1) & -2
dcb!0 = 0 // End of display list
dcb!1 = 38 // 38 words per line
dcb!2 = v // Data pointer
dcb!3 = 404 // # lines / 2
let lvdas = #420 // Address holds pointer to display control block
lvdas!0 = dcb
for i = 0 to 30703 do v!i = 0 // Clear display
// Values are represented as fixed point with 12 bits to right of decimal point.
let x0 = (-2) lshift 12 // Left boundary: x = -2
let x1 = 1 lshift 12 // Right boundary: x = 1
let y0 = (-1) lshift 12 // Top boundary: y = -1
let y1 = 1 lshift 12 // Bottom boundary: y = 1
let xstep = (x1 - x0) / 600 // Render 600 pixels horizontally
let ystep = (y1 - y0) / 400 // Render 400 pixels vertically
let x2 = vec 2 // double word to hold x^2
let y2 = vec 2 // double word to hold y^2
let xy = vec 2 // double word to hold x * y
v=v+200*38 // start 200 scans down.
let cy = y0 // Constant value, y part. I.e. the z value for this pixel
for ypos = 0 to 400 do // line count. Note "for" limits are inclusive.
let cx = x0 // Constant value, x part.
for h = 0 to 37 do // horizontal word count
let b = 32768 // horizontal bit count
let x = cx // The complex z value is represented as x + i*y
let y = cy
let n=20
let x2sp =(x ls 0) ? -x,x // x^2 will be +ve anyway, so just abs.
x2sp = MulDiv(x2sp,x2sp,4096) // yield x^2 single precision.
let y2sp = (y ls 0) ? -y,y // ditto for y^2.
y2sp = MulDiv(y,y,4096) // yield = y^2, single precision
if x2sp+ y2sp ge 16384 then break // Quit if x^2 + y^2 > 4
let xysgn=x xor y // top bit is sign for result.
if x ls 0 then x=-x
if y ls 0 then y=-y
x=MulDiv(x,y,4096) // the unsigned version.
if xysgn ls 0 then x=-x // correct the sign.
// z = z^2 + c (complex arithmetic, z = x+iy)
// i.e. y = 2xy + cy
// x = x^2 - y2 + cx
y=x + x + cy
x=x2sp - y2sp + cx
]repeatuntil n eq 0
if n eq 0 then v!0=v!0 % b // set the pixel
cx = cx + xstep // Move to next cx value
b=b rshift 1 // next bitmask (rshift is unsigned)
]repeatwhile b // until all the bits in this video word are done.
v=v+1 // next video word.
cy = cy + ystep // Move to next cy value
Gets(keys) // Get a key, i.e. wait for a keypress

Finally, apologies, the external[..] entry for MulFull should be to MulDiv
-best regards from Julz

@Snial/Julz, you say "If I understand the Nova instruction set properly, lshift n and rshift n must be performed by multiple single bit shifts and the Alto implementation of the BCPL (aka Nova) instruction set doesn't extend it with multiple bit shifts." Actually, the Alto's standard microcode includes some augmented instructions, described starting on page 16 of the hardware manual (http://www.bitsavers.org/pdf/xerox/alto/Alto_Hardware_Manual_Aug76.pdf).
One of these is: CYCLE (60000B): Left cycle (rotate) the contents of AC0 by the amount specified in instruction bits 12-15, unless this value is zero, in which case cycle AC0 left by the amount specified in AC1. Leaves AC1 = cycle count mod 20B.

Hello Paul, Thanks for that, I should have checked properly. Optimising Mandelbrot sets is fun. In the late 1980s I had a 68008 based Sinclair QL and wrote an assembler version which would display a Mandelbrot set in its 256x256 x 8 colour mode, in 8 colours. It was possible to get it down to about 2 minutes 40s, and I think I had the equivalent of a larger value of n. I could also zoom in on quadrants, but the fixed-point limitations did show up before long ;-)

Hello Paul, Although the Alto itself supports microcode rotates, that's not the same as a shift. Doesn't this file imply the BCPL compiler makes a call to library routines for shifts? And it can't use those routines to optimise a shift by a constant into a jump into the unrolled loop, because it hasn't set up FRET. Thus, this brings us back to my original comment: shifts will be slow. Rotates, I think, can be converted into shifts by using the cycle function a second time to generate a mask. Thus, a shift left of n can be done as:

dest = (src rol n) & ~((1 rol n)-1)

A shift right (assuming rol isn't a rotate through carry) is:

dest = (src rol (16-n)) & ((1 rol (16-n))-1)

It's not clear to me that this would be any faster on average on an Alto. The average time (with the display on) for 16 shifts will be in the order of 2.5us*3 = 7.5us per instruction * 36 Nova instructions = 270us per rounding operation. That's about 1ms for all 3, giving at least 20ms per black pixel.

@Snial, then there is this BCPL microcode: http://xeroxalto.computerhistory.org/Indigo/AltoSource//BCPLRUNTIMESOURCE.DM!1_/.BcplUtil.mu.html
; Right shift
; Computes ac0 ← ac0 rshift ac1
; Called by jsr @347
; Note that shift count may be either positive or negative

Thanks for all the suggestions for improving performance. I tried them out and the Mandelbrot runtime is now 9 minutes instead of an hour. Details are in my new post.
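As a cross-check on the fixed-point scheme described in the post (and used in the comment code above), here is a short Python sketch added by the editor, not from the blog, that mirrors the 4.12 representation: 12 fractional bits, an escape test against 4.0 = 16384, and a fixed iteration count of 20.

    FRAC_BITS = 12
    ONE = 1 << FRAC_BITS              # 1.0 in 4.12 fixed point

    def to_fixed(f):
        return int(round(f * ONE))    # e.g. to_fixed(1.25) == 0x1400

    def fx_mul(a, b):
        # multiply, then shift right by 12 to restore the decimal point
        return (a * b) >> FRAC_BITS

    def escapes(cx, cy, max_iter=20):
        # True if c = cx + i*cy escapes (lies outside the set)
        x, y = cx, cy
        for _ in range(max_iter):
            x2, y2 = fx_mul(x, x), fx_mul(y, y)
            if x2 + y2 >= to_fixed(4.0):      # 16384, as in the comment code
                return True
            x, y = x2 - y2 + cx, 2 * fx_mul(x, y) + cy
        return False

    print(escapes(to_fixed(-1.0)), escapes(to_fixed(1.0)))  # False True

Python integers are arbitrary precision, so this sketch sidesteps the 16-bit overflow issues that the BCPL code has to manage with MulFull/MulDiv.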
{"url":"https://www.righto.com/2017/06/one-hour-mandelbrot-creating-fractal-on.html?showComment=1497903678194","timestamp":"2024-11-10T15:57:57Z","content_type":"application/xhtml+xml","content_length":"151180","record_id":"<urn:uuid:d52718e0-bfe1-4a55-97b6-5a853cbb23ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00309.warc.gz"}
4) Number Before
• Develop fluency identifying the number before, for numbers up to ten.

Representing the number after on a number track. For example, the number after five.
Representing the number before on a number line. For example, the number before five.

• number before
• one less than
• one fewer than

When initially completing the activity, introduce and use the language "the number before". In subsequent sessions, once students are fluent with the activity, change the language to "one less than" and later still "one fewer than".

One use for number lines is counting forwards and backwards, for example, forwards from zero or backwards from ten. Please note, this is not how the number lines are to be used in these activities. In these activities number lines are used to develop the concept of a mental number line, that is, where numbers are located in relation to each other. If students need to count to identify the location of a number they should only count on/back one, two or three. Counting more than this is inefficient, error prone and limits opportunities for students to develop more effective strategies.
• Activity 2 focuses on locating the number after.
• Activity 4 focuses on locating the number before.
• As students progress through the Bond Blocks system they will develop the skills needed to locate harder-to-identify numbers such as 7 and 8.

Alternating numbers
Remove every second block from the steps. Build flat on a desk. Students count aloud, pointing to each number as it is said. Next, repeat this activity but the student takes turns saying the counting numbers with the teacher. The teacher points to and says the number that is represented with Bond Blocks. The student points to the space before this block and says the number that is missing.

Then, repeat counting with alternating numbers, taking turns with the teacher, but introduce the language "the number before". The teacher points to the Bond Block that is placed and says, "The number before ten is…" The student points to the space before Block Ten and says "Nine." Once students are fluent using the alternating steps made with even numbers, repeat using the odd numbered Bond Blocks.

Counting number before
Complete the 'a little harder' activity board.

Even steps
Count backwards, subtracting two each time, starting at 10, with blocks, to 2. Extend to counting backwards with even numbers, by two, from 18. Count backwards, subtracting two each time, starting at 10, without blocks, to 2. Extend to counting backwards by two, without blocks, from 18.

Odd steps
Count backwards, subtracting two each time, starting at 9, with blocks, to 1. Extend to counting backwards with odd numbers, by two, from 19. Count backwards, subtracting two each time, starting at 9, without blocks, to 1. Extend to counting backwards by two, without blocks, from 19.

Develop fluency counting backwards when packing away. Cover the template with a piece of white paper that is 22cm x 20cm. Give students a different instruction each time they pack away. For example:
• Pack away the even blocks first.
• Pack away the odd blocks first.
• Pack away the even blocks first, starting at 10, counting backwards.
• Pack away the odd blocks first, starting at 9, counting backwards.

In the next activity students identify individual numbers in the counting sequence to 10 without counting. Go to Activity 5 Counting: Identifying Numbers 6 to 10, Building Steps
{"url":"https://bondblocks.com/4-number-before/","timestamp":"2024-11-09T23:04:06Z","content_type":"text/html","content_length":"431221","record_id":"<urn:uuid:d1e30460-639a-48e0-8082-1e7964b9f942>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00444.warc.gz"}
Cite as
Ulrich Bauer, Håvard Bakke Bjerkevik, and Benedikt Fluhr. Quasi-Universality of Reeb Graph Distances. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 14:1-14:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)

BibTeX:
author = {Bauer, Ulrich and Bjerkevik, H\r{a}vard Bakke and Fluhr, Benedikt},
title = {{Quasi-Universality of Reeb Graph Distances}},
booktitle = {38th International Symposium on Computational Geometry (SoCG 2022)},
pages = {14:1--14:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-227-3},
ISSN = {1868-8969},
year = {2022},
volume = {224},
editor = {Goaoc, Xavier and Kerber, Michael},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.14},
URN = {urn:nbn:de:0030-drops-160221},
doi = {10.4230/LIPIcs.SoCG.2022.14},
annote = {Keywords: Reeb graphs, contour trees, merge trees, distances, universality, interleaving distance, functional distortion distance, functional contortion distance}
{"url":"https://drops.dagstuhl.de/search/documents?author=Bauer,%20Ulrich","timestamp":"2024-11-09T00:18:38Z","content_type":"text/html","content_length":"120310","record_id":"<urn:uuid:02da9f74-f6ed-4d8e-b087-fc7746f62e37>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00613.warc.gz"}
Use IFERROR with VLOOKUP to Get Rid of #N/A Errors

When using the VLOOKUP formula in Excel, sometimes you may end up with the ugly #N/A error. This happens when your formula cannot find the lookup value. In this tutorial, I will show you different ways to use IFERROR with VLOOKUP to handle these #N/A errors cropping up in your worksheet. Using the combination of IFERROR with VLOOKUP allows you to show something meaningful in place of the #N/A error (or any other error for that matter). Before getting into details on using this combination, let's first quickly go through the IFERROR function and see how it works.

IFERROR Function Explained

With the IFERROR function, you can specify what should happen in case a formula or a cell reference returns an error. Here is the syntax of the IFERROR function:

=IFERROR(value, value_if_error)

• value – this is the argument that is checked for the error. In most cases, it is either a formula or a cell reference. When using VLOOKUP with IFERROR, the VLOOKUP formula would be this argument.
• value_if_error – this is the value that is returned if there is an error. The following error types are evaluated: #N/A, #REF!, #DIV/0!, #VALUE!, #NUM!, #NAME?, and #NULL!.

Possible Causes Of VLOOKUP Returning a #N/A Error

The VLOOKUP function may return a #N/A error due to any of the following reasons:
1. The lookup value is not found in the lookup array.
2. There is a leading, trailing, or double space in the lookup value (or in the table array).
3. There is a spelling error in the lookup value or the values in the lookup array.

You can handle all these causes of error with the combination of IFERROR and VLOOKUP. However, you should keep an eye out for causes #2 and #3, and correct these in the source data instead of letting IFERROR handle these.

Note: IFERROR would treat an error irrespective of what caused it. If you only want to treat the errors caused by VLOOKUP not being able to find the lookup value, use IFNA instead. That will make sure that errors other than #N/A are not treated and you can investigate these other errors. You can treat leading, trailing, and double spaces using the TRIM function.

Replacing VLOOKUP #N/A Error with Meaningful Text

Suppose you have a dataset as shown below:

As you can see, the VLOOKUP formula returns an error as the lookup value is not in the list. We are looking to get the score for Glen, which is not in the table of scores. While this is a very small dataset, you may get huge datasets where you have to check the occurrence of many items. For every case when the value is not found, you will get a #N/A error. Here is the formula you can use to get something meaningful instead of the #N/A error:

=IFERROR(VLOOKUP(D2,$A$2:$B$10,2,0),"Not Found")

The above formula returns the text "Not Found" instead of the #N/A error. You can also use the same formula to return blank, zero, or any other meaningful text.

Nesting VLOOKUP With IFERROR Function

In case you are using VLOOKUP and your lookup table is fragmented on the same worksheet or different worksheets, you need to check the VLOOKUP value through all of these tables. For example, in the dataset shown below, there are two separate tables of student names and the scores. If I have to find the score of Grace in this dataset, I need to use the VLOOKUP function to check the first table, and if the value is not found in it, then check the second table.
Here is the nested IFERROR formula I can use to look for the value:

=IFERROR(VLOOKUP(G3,$A$2:$B$5,2,0),IFERROR(VLOOKUP(G3,$D$2:$E$5,2,0),"Not Found"))

Using VLOOKUP with IF and ISERROR (Versions prior to Excel 2007)

The IFERROR function was introduced in Excel 2007 for Windows and Excel 2016 for Mac. If you're using a prior version, the IFERROR function will not work on your system. You can replicate the functionality of the IFERROR function by using the combination of the IF function and the ISERROR function. Let me quickly show you how to use the combination of IF and ISERROR instead of IFERROR. In the above example, instead of using IFERROR, you can also use the formula shown in cell B3:

=IF(ISERROR(A3),"Not Found",A3)

The ISERROR part of the formula checks for the errors (including the #N/A error) and returns TRUE if an error is found and FALSE if not.
• If it's TRUE (which means that there is an error), the IF function returns the specified value ("Not Found" in this case).
• If it's FALSE (which means that there is no error), the IF function returns that value (A3 in the above example).

IFERROR treats all kinds of errors, while IFNA treats only the #N/A error. When handling errors caused by VLOOKUP, you need to make sure you're using the right formula. Use IFERROR when you want to treat all kinds of errors. Now an error can be caused by various factors (such as a wrong formula, a misspelled named range, not finding the lookup value, or an error value returned from the lookup table). It wouldn't matter to IFERROR, and it would replace all these errors with the specified value. Use IFNA when you want to treat only #N/A errors, which are more likely to be caused by the VLOOKUP formula not being able to find the lookup value.

You May Also Find the Following Excel Tutorials Useful:

16 thoughts on "Use IFERROR with VLOOKUP to Get Rid of #N/A Errors"

1. Very Useful
2. What is the result returned by VLOOKUP when both the columns has same value (i.e., blank)? I received "#N/A" when both the source cell and target cell has no values. Can someone help me here please? thanks in advance.
3. Kudos! This has been really helpful.
4. =+IFERROR(Vlookup(B2,'Customer Details'!B:C,2,0),"") why this formula not showing the result ?
□ I am using excel 2010, while using the formula it is showing the correct selection in the formula but it is not displaying the area we want but it is showing the entire formula again. can you help me?
5. i am using =IFNA(vlookup,0) in code but its reflected in excel as =ifna(vlookup,0) as i want value 0 where #N/A comes. because of that it gives me value as #NAME
6. Thanks
□ I think IFNA will be a better option rather than ISERROR
7. I think it's much better to use the IFNA function that works more or less like IFERROR, but with the really important difference that IFNA only gets rid of the #N/A errors…
□ Yeah, if your lookup data table doesn't have errors, IFNA is the better choice
☆ No, IFNA is always the best solution with VLOOKUP, because only the #N/A are hidden, so it's possible to detect all other errors: wrong range, wrong formula, misspelled name range and so on… With IFERROR you hide all this stuff and you cannot correct the errors…
○ not always dear
■ So, what do you mean? Why not always?
■ Hello Franz.. While I agree that IFNA is the better choice with VLOOKUP, it's also dependent on the data structure and the output that decides what function should be used. As far as I know, IFNA is not available in the 2007 and 2010 versions of Excel for Windows.
Instead of going the longer IF + ISNA route, it's easier to check the formula and make sure there are no errors in the formula or named range and go with IFERROR instead. Another example is a recent dashboard I worked on, where the data I got itself had errors such as DIV and NA. Instead of going through a 2-step process of checking with IFNA and then treating the DIV errors with IFERROR, it's better to make sure the formula/named range is correct and then use IFERROR. Also, a wrong range anyway returns an NA error, so even IFNA wouldn't help in that case. My point is, IFNA is better, but it's not the only way to go.

8. Good post but I do have an issue with IFERROR. You may also be getting an error if the range looked up is too short or narrow, or if the cell value returned (legitimately) is itself an error, or if the index is negative, or if the lookup range is unsorted and the last element in the VLOOKUP is omitted, or if the lookup value is an error, or maybe a mistyped range name, or … What I'm getting at is that errors can occur for many reasons, and defaulting the error response to "not found" may mean you're going to overlook an incorrect formula. Much better to anticipate the error (with a COUNTIF in this case) and deal with it properly. IFERROR is a very dangerous thing to use in such a cavalier manner – beware
□ Hey Jim.. I mentioned in the tutorial that there can be various reasons for errors and first it must be sorted at the data level instead of letting IFERROR handle it (covered in the 'possible causes of error' section). Also, the next step would always be to make sure the formula is created properly. There wouldn't be any other way to handle a misspelled named range or not having the proper data structure or not having the right lookup range, than to make sure it's sorted in the first place. In case of an approximate match, having the data sorted in ascending order is a pre-requisite to use the VLOOKUP formula. That would anyway lead to wrong results (even in cases when the result is not an error). My point is, IFNA is better, but it's not the only way to go. The cases where this combination works well are when you get a data download from a data set that has fixed formats and you need to perform a lookup on such data. When used properly, IFERROR can be a really useful function.
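For readers who think in code rather than spreadsheets, the pattern the tutorial describes (a lookup that substitutes a friendly value when the key is missing) can be sketched outside Excel. This Python analogy is illustrative only and is not part of the original tutorial:

    scores = {"Amy": 71, "Bob": 58, "Carl": 84}   # plays the role of the lookup table

    def lookup(name, table, fallback="Not Found"):
        # dict.get behaves like IFERROR(VLOOKUP(...), fallback): it returns
        # the fallback instead of raising an error when the key is absent.
        return table.get(name, fallback)

    print(lookup("Bob", scores))    # 58
    print(lookup("Glen", scores))   # Not Found

A nested fallback across two tables, like the nested IFERROR formula above, would be table1.get(name, table2.get(name, "Not Found")).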
{"url":"https://trumpexcel.com/iferror-vlookup/","timestamp":"2024-11-06T13:52:21Z","content_type":"text/html","content_length":"422958","record_id":"<urn:uuid:f52c2725-c6e4-4e03-9368-acb29c50d68d>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00604.warc.gz"}
Chapter 7 Terms Flashcards

The word percent comes from the Latin word per centum, which translates into…
The symbol % is used to denote what?
Decimal: 2 place values left
Decimal: 2 place values right
When writing a percent as a fraction it is usually written in ____ ____.
How to change a mixed fraction percent into a fraction: 1) Remove the % and convert the mixed fraction into an improper fraction. 2) Multiply by 0.01.
Percent Decimal Combo to Fraction: 1) %/B x 1/100. 2) Move the decimal point of the numerator to make it a whole number and adjust the denominator accordingly.
In a percent problem the word "of" means…
In a percent problem the word "is" means…
In a percent problem the word "What" (or some equivalent) means…
When solving percent problems there are three factors: the Percent, the Base, and the Amount.
The typical formula for percent equations is… Percent x Base = Amount (or any variation)
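As a quick illustrative check of the last two cards (an example added here, not part of the card set), here is how the percent equation and the mixed-fraction-percent rule work out in Python:

    from fractions import Fraction

    # Percent x Base = Amount: e.g. 30% of 200
    percent, base = 0.30, 200
    print(percent * base)                      # 60.0

    # Mixed fraction percent to a fraction: 12 1/2 % -> 25/2 x 1/100 = 1/8
    mixed_percent = Fraction(25, 2)            # 12 1/2 written as an improper fraction
    print(mixed_percent * Fraction(1, 100))    # 1/8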
{"url":"https://www.brainscape.com/flashcards/chapter-7-terms-1490321/packs/1641797","timestamp":"2024-11-03T03:43:52Z","content_type":"text/html","content_length":"110052","record_id":"<urn:uuid:afc2ce1a-cf43-4055-a97c-ebf9df8d6fc2>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00292.warc.gz"}
How do you multiply #2/4xx2/4#?

2 Answers

The simplest solution is to simply perform the multiplication (numerators x numerators, denominators x denominators).
$\frac{2}{4} \times \frac{2}{4} = \frac{2 \times 2}{4 \times 4} = \frac{4}{16}$
This can be further reduced by division by 4 to get the simplest fractional answer:
$\frac{2}{4} \times \frac{2}{4} = \frac{1}{4}$

When multiplying fractions, multiply the numerators and denominators across.
$\frac{a}{b} \times \frac{c}{d} = \frac{a c}{b d} .$
$\frac{2}{4} \times \frac{2}{4}$
$\frac{2 \times 2}{4 \times 4} = \frac{4}{16}$
Reduce $\frac{4}{16}$ by dividing the numerator and denominator by $4$.
$\frac{4 \div 4}{16 \div 4}$
Alternatively, you could reduce the original fractions before multiplying them.
$\frac{2 \div 2}{4 \div 2} \times \frac{2 \div 2}{4 \div 2}$
$\frac{1}{2} \times \frac{1}{2} = \frac{1}{4}$
{"url":"https://socratic.org/questions/59efe62311ef6b6a9db58dcf#495150","timestamp":"2024-11-02T02:56:02Z","content_type":"text/html","content_length":"33637","record_id":"<urn:uuid:4b7062be-23fd-4bd2-be90-dec4e4c920cf>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00816.warc.gz"}
Route from Basildon to Hyndburn

The distance between Basildon and Hyndburn is 253 miles. Travel time is 4 hours and 11 minutes.
From: Basildon, County: Essex, England
To: Hyndburn, County: Lancashire, England
253 mi, 4 h 11 min
Tolls: £0

Driving directions from Basildon to Hyndburn (253 mi, 4 h 11 min)
Head north on Craylands - 302 ft
Continue slightly left onto Churchill Avenue - 1121 ft
Enter the roundabout and take the 3rd exit onto Broadmayne (A1321) - 408 ft
Exit the roundabout onto Broadmayne (A1321) - 1.4 mi
Enter the roundabout and take the 2nd exit onto Upper Mayne (A176) - 213 ft
Exit the roundabout onto Upper Mayne (A176) - 0.7 mi
Enter the roundabout and take the 2nd exit onto Upper Mayne (A176) - 164 ft
Exit the roundabout onto Upper Mayne (A176) - 0.3 mi
Enter the roundabout and take the 1st exit onto A127 - 16 ft
Exit the roundabout onto A127 - 1307 ft
Merge right onto Southend Arterial Road (A127) - 6.3 mi
Take the ramp on the left towards M25 - 0.3 mi
Enter Cranham Interchange and take the 3rd exit onto M25 - 1158 ft
Exit the roundabout onto M25 - 17.6 mi
Continue onto M25 - 11.8 mi
Take the ramp on the left towards A1081: St Albans - 1021 ft
Enter the roundabout and take the 3rd exit onto A1081 - 244 ft
Exit the roundabout onto A1081 - 896 ft
Enter The Bell Roundabout and take the 2nd exit onto A1081 - 404 ft
Exit the roundabout onto A1081 - 1.3 mi
Enter London Colney Roundabout and take the 2nd exit onto North Orbital Road (A414) - 313 ft
Exit the roundabout onto North Orbital Road (A414) - 1.9 mi
Enter Park Street Roundabout and take the 3rd exit onto A414 - 352 ft
Exit the roundabout onto A414 - 3.5 mi
Take the ramp towards M1: The NORTH - 53.6 mi
Keep right onto M1 - 5.1 mi
Take the ramp on the left towards M6: The N. WEST - 22.8 mi
Keep left onto M6 - 14.2 mi
Keep right onto M6 - 7.9 mi
Keep right onto M6 - 72 mi
Keep right onto M6 - 15.7 mi
Take the ramp on the left - 1198 ft
Keep right at the fork - 321 ft
Enter the roundabout and take the 3rd exit - 1018 ft
Exit the roundabout - 0.4 mi
Merge right onto M65 - 12.8 mi
Take the ramp on the left - 0.4 mi
Enter Hyndburn Interchange and take the 1st exit onto Dunkenhalgh Way (A6185) - 21 ft
Exit the roundabout onto Dunkenhalgh Way (A6185) - 827 ft
Turn right onto Blackburn Road (A678) - 0.5 mi
Turn right onto Whalley Road (A680) - 492 ft
Turn right onto Read Street - 323 ft
You have arrived at your destination, on the right - 0 ft

How much does it cost to drive from Basildon to Hyndburn with petrol?

Let's delve into the expenses of a road trip from Basildon to Hyndburn. The journey using petrol will set you back 43.8 pounds. Now, let's walk through how we arrived at this figure. Firstly, we consider the price of petrol, which is 143 pence per liter. Next, we take into account the car's fuel consumption rate, which stands at 37.5 miles per gallon. Now, suppose you're carpooling. If there are two passengers sharing the ride, each person will contribute 21.9 pounds (£43.8 divided by 2). With three passengers, the cost per person drops to around 14.6 pounds (£43.8 divided by 3). And if there are four passengers onboard, each person chips in 10.95 pounds (£43.8 divided by 4). Let's delve deeper into the calculations. To determine how much fuel is needed for the trip, we rely on the distance and the car's fuel efficiency. The distance between Basildon and Hyndburn is approximately 252.53 miles. So, to cover this distance, the car will require roughly 30.6 liters of petrol.
We obtained this value by dividing the distance by the car's fuel consumption rate of 37.5 miles per gallon and then converting it to liters. Once we have the amount of fuel needed, we calculate the total cost. Multiplying the fuel quantity (30.6 liters) by the price per liter (143 pence) yields 43.8 pounds. Therefore, the fuel cost for the journey from Basildon to Hyndburn is 43.8 pounds.

Diesel cost from Basildon to Hyndburn. The trip with diesel fuel costs 27.6 £ (151p/lt, 62.7 mpg). For 2 passengers: 13.8 £ (27.6/2) /p. For 3 passengers: 9.2 £ (27.6/3) /p. For 4 passengers: 6.9 £ (27.6/4) /p. The main reasons diesel cars use less fuel than petrol cars are: Diesel engines have higher compression ratios than petrol engines, so they extract more energy from each unit of fuel and achieve better thermal efficiency. Diesel cars therefore have better fuel economy than petrol ones. Diesel engines also produce more torque at lower RPM than petrol engines, so diesel vehicles perform better in steady-state driving, requiring less throttle input to maintain speed. Thus, diesel cars have reduced fuel consumption, especially on highways.

Alternative routes from Basildon to Hyndburn. An alternative route from Basildon to Hyndburn is 258.54 miles and takes 4 hours and 21 minutes.

Which is the cheapest route from Basildon to Hyndburn by car? The cheapest route from Basildon to Hyndburn is the first suggested route (253mi, 4h 11min) and costs 43.8 £ (petrol fuel cost, 143p/lt, 37.5mpg). The most expensive route from Basildon to Hyndburn is the second suggested route (259mi, 4h 21min) and costs 44.8 £. For greater safety, the driver should choose the route that passes through the highway (median barrier) and not through smaller roads (curves in the road, bad road conditions, no median barrier).

How to get from Basildon to Hyndburn? There are two suggested routes. The fastest route is 253mi (the distance from Basildon to Hyndburn) and its duration is 4h 11min. The slowest route is 259mi, and its duration is 4h 21min. RouteCalculator provides you with the information to prepare your trip from Basildon to Hyndburn by car or motorbike. It offers you alternative road routes that you can follow to go from Basildon to Hyndburn. It provides driving directions from Basildon to Hyndburn, i.e. where to turn and when, the distance of the Basildon - Hyndburn route, the travel time, a display of the route on the map, the fuel cost (petrol, diesel) and the toll cost (if tolls exist). In case you share the journey (carpooling) from Basildon to Hyndburn with other people or friends, RouteCalculator provides the cost of the journey and the amount that each passenger will have to pay.

Where is Basildon located? Basildon is located in Essex county, in England. It is situated at an altitude of 21 meters above sea level. Basildon has coordinates 51.5760840°, 0.4887360°. RouteCalculator provides a map of Basildon from which you can plan your trips to other UK cities.

What is the location of Hyndburn? Hyndburn is located in Lancashire county, in England. It is situated at an altitude of 136 meters above sea level. Hyndburn has coordinates 53.7675645°, -2.3816682°. RouteCalculator provides a map of Hyndburn from which you can plan your trips to other UK cities.
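To make the fuel-cost arithmetic described above easy to check, here is a small illustrative sketch in Python. It is not part of RouteCalculator; the conversion factor of 4.54609 litres per imperial gallon is an assumption implied by the 30.6-litre figure quoted above.

# Rough sketch of the petrol-cost arithmetic described above (illustrative only).
LITRES_PER_GALLON = 4.54609  # imperial gallon, assumed from the figures above

def fuel_cost(distance_miles, mpg, pence_per_litre, passengers=1):
    litres = distance_miles / mpg * LITRES_PER_GALLON
    total_pounds = litres * pence_per_litre / 100.0
    return total_pounds, total_pounds / passengers

total, per_person = fuel_cost(252.53, 37.5, 143, passengers=2)
print(f"Total: {total:.1f} GBP, per person: {per_person:.1f} GBP")
# Roughly 43.8 GBP in total and 21.9 GBP per person, matching the figures above.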
{"url":"https://routecalculator.co.uk/distance/Basildon/Hyndburn","timestamp":"2024-11-09T10:55:46Z","content_type":"text/html","content_length":"51124","record_id":"<urn:uuid:651179cb-e840-48f6-9b89-5f2c8ed9426b>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00513.warc.gz"}
=PERMUT formula | Calculate the number of permutations without repetitions. Formulas / PERMUT Calculate the number of permutations without repetitions. =PERMUT(number,number_chosen) • number - an integer, the total number of objects, required • number_chosen - an integer, the number of objects in each permutation, required

• =PERMUT(4,2) The PERMUT function calculates the number of permutations, i.e. ordered arrangements, of a chosen number of objects taken from a larger set. For example, to calculate the number of ways to choose and arrange 2 objects out of 4, you would use this formula. It returns 12, since there are 4 choices for the first position and 3 for the second.

• =PERMUT(5,5) The PERMUT function can also be used to count the arrangements of an entire set. For example, to count the orderings of 5 objects (say, the values stored in cells A2:A6), you would use this formula. It returns 120, which is 5 factorial.

• =PERMUTATIONA(6,2) If repetition is allowed, use the related PERMUTATIONA function instead. For example, to count the ordered pairs that can be formed from 6 objects when the same object may be chosen more than once, you would use this formula. It returns 36, which is 6 squared.

The PERMUT function is used to calculate the number of permutations for a given set of objects or events. It is often used for lottery-style probability calculations.
• The PERMUT function calculates permutations where repetitions are not allowed.
• The PERMUTATIONA function calculates permutations where repetitions are allowed.
• The COMBIN function calculates combinations (order does not matter) where repetitions are not allowed.
• The COMBINA function calculates combinations where repetitions are allowed.

Frequently Asked Questions
What is the PERMUT function? The PERMUT function is a calculation that determines the number of permutations for a given number of objects. A permutation is an ordered arrangement of a set or subset of objects or events.
What is the purpose of the PERMUT function? The PERMUT function is used to calculate probability when dealing with lottery-style games.
How is the PERMUT function used? The PERMUT function can be used to calculate the number of different outcomes in a lottery-style game. For example, if you have a lottery game with five numbers, the PERMUT function can be used to calculate the total number of possible outcomes.
What are some examples of how the PERMUT function can be used?
• Calculating the total number of possible outcomes in a lottery-style game.
• Determining the probability of certain outcomes in a lottery-style game.
• Calculating the number of different arrangements of a given set of objects.
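For readers who want to check these counts outside of a spreadsheet, the same numbers can be reproduced in Python; this is only an illustration of the underlying formula n! / (n - k)!, not part of the formula documentation above, and it assumes Python 3.8 or newer, where math.perm is available.

import math

# PERMUT(n, k): ordered arrangements of k items chosen from n, no repetition,
# equal to n! / (n - k)!
print(math.perm(4, 2))   # 12
print(math.perm(5, 5))   # 120

# PERMUTATIONA(n, k): ordered arrangements with repetition allowed, equal to n ** k
print(6 ** 2)            # 36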
{"url":"https://sourcetable.com/formula/permut","timestamp":"2024-11-11T05:09:14Z","content_type":"text/html","content_length":"59418","record_id":"<urn:uuid:f9bb3600-f06f-4325-9c55-1b24bd4e5036>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00713.warc.gz"}
Dr. Dan A. Simovici Professor of Computer Science University of Massachusetts Boston Department of Computer Science ● Ph.D., July 1974, University of Bucharest, Romania ● M.S. in Mathematics, June 1970, University of Iasi, Romania ● M.S. in E.E., June 1965, Polytechnical Institute of Iasi, Romania Professional Affiliations ● Senior Member of IEEE ● Vice Chair of the Technical Committee for Multiple-Valued Logic of the Computer Society ● Association for Computing Machinery ● AAAI Academic Career ● 1985 - present: Professor of Computer Science, UMB ● 2010 - present: Honorary Professor of Computer Science, University of Iasi, Romania ● January 2006: Visiting Professor of Computer Science, University of Science and Technology, Lille, France ● 1998: Visiting Professor of Computer Science, Tohoku University, Sendai, Japan ● 1984 - present: Director of the Computer Science Graduate Program, UMB ● 1982 - 1985: Associate Professor of Computer Science, UMB ● 1981 - 1982: Associate Professor of Computer Science, University of Miami, Florida Research Interests • Information-Theoretical and Linear Methods in Data Mining • Semantic Models in Databases • Algebraic Aspects of Multiple-Valued Logic Other Activities • Vice-Chair the Program Committees for ISPA'07 for Databases and Data Mining • Member of Program Committees for major datamining conferences: KDD, PKDD, DAWAK, EGC. • General Chairman of the 32nd International Symposium for Multiple-Valued Logic, Boston, Massachusetts, 15-18 May 2002. • Managing Editor of Journal for Multiple-Valued Logic and Soft Computing • Editor of International Journal for Parallel, Emergent, and Distributed Systems • Editor of International Journal for Software and Information Technologies • General Co-Chairman of the 26th International Symposium for Multiple-Valued Logic, Santiago de Compostela, Spain, May 1996. • General Chairman of the 24th International Symposium for Multiple-Valued Logic, Boston, Massachusetts, May 1994. • Chairman of the Technical Committee of the Computer Society/IEEE (elected at the 19th annual meeting, May 1988, Spain). • General Chairman of the 17th International Symposium for Multiple-Valued Logic, Boston, Massachusetts. • Reviewer for the Computer Science Accreditation Board of IEEE/ACM. Ph.D. Students ● Dana Cristofor (graduated 2002) ● Laurentiu Cristofor (graduated 2002) ● Szymon Jaroszewicz (graduated 2003) ● Richard Butterworth (graduated 2006) ● Selim Mimaroglu (graduated 2008) ● Saaid Baraty (graduated 2013) ● Dan Pletea (graduated 2013) ● Rosanne Vetro (graduated 2015) ● Paul Fomenky (graduated 2018) ● Roman Sizov (graduated 2019) ● Kaixun Hua (graduated 2019) ● Cristina Maier (graduated 2021) ● Logical Foundation of Computer Science: vol. 1: Propositional Logic and vol. 2: Predicate Logic World Scientific, 2024, 1336 pages by Peter A. Fejer and Dan A. Simovici ● Introduction to the Theory of Formal Languages, 2024, 464 pages by Dan A. Simovici ● Linear Algebra Tools for Data Mining, 2nd edition, 2023, 1004 pages by Dan A. Simovici ● Clustering - Theoretical and Practical Aspects World Scientific, 2021 ● Mathematical Analysis for Machine Learning and Data Mining , World Scientific, 2018 ● Linear Algebra Tools for Data Mining , World Scientific, 2012 ● Mathematical Tools for Data Mining , Springer-Verlag 2008 by Dan A. Simovici and C. 
Djeraba (second edition 2015) ● Theory of Formal Languages with Applications by Dan Simovici and Richard Tenney, World Scientific, 1999 ● Relational Database Systems by Dan Simovici and Richard Tenney, Academic Press, 1995 ● Mathematical Foundations of Computer Science. vol. I: Sets, Relations, Induction in Computer Science (with Peter Fejer), Springer Verlag, New York, 1990 ● Logical Foundations of Computer Science (with Peter Fejer), in preparation for Springer-Verlag, New York ● Introduction aux Structures Algébriques, (2 vols.), ERPI, Montreal, Canada, 1992 ● Formal Languages and Compiling Techniques, Editura Didactica si Pedagogica, Bucharest, Romania, 1978 Recent Publications □ Inertial entropy and external Validation of clusterings, Journal of Harbin Institute of Technology (New Series), with Joshua Yee, 2024 □ Monotonic Entropies, Scientific Annals of Computer Science vol. 34 (1), 2024, pp. 67 87 doi: 10.47743/SACS.2024.1.67 □ Dual Criteria Determination of the Number of Clusters in Data (Kaixun Hua), Proceedings of SYNASC 2018, Timisoara, 201-207, Computer Society. □ Ultrametricity of Dissimilarity Spaces and Its Significance for Data Mining (with R. Vetro and K. Hua), EGC 2015, Luxembourg, Revue des Nouvelles Technologies de l'Information, RNTI E. 28, 89-100 (pdf file) □ Several Remarks on Dissimilarities and Ultrametrics, Scientific Annals of Computer Science, "Al. I. Cuza" University of Iasi, Romania, vol. XXV, 1, 2015, pp. 155-170 (pdf file) □ Representative Training Sets for Classification and the Variablity of Empirical Distributions (with Saaid Baraty), Extraction et Gestion des Connaisances, February 2014, EGC'2014, Revue des Nouvelles Technologies, E. 26, pp. 299-304 (pdf file) □ Evaluating Data minability Through Compression -- An Experimental Study (with Saaid Baraty and Dan Pletea), International Journal on Advances in Software, vol. 6, no.3-4, 2013, pp 237--245 (pdf □ On Submodular and Supermodular Functions on Lattices and Related Stuctures, to appear in the Proceedings of the 44th International Symposium for Multiple-Valued Logic, Bremen, Germany, May 17-19, 2014 (pdf file) □ Data Mining of Medical Data: Opportunities and Challeges in Mining Association Rules, IALS, Cecilienhof, Potsdam, August 2012 (pdf file) □ Evaluating Data Minability through Compression - An Experimental Study (with D. Pletea and S. Baraty) - Proceedings of Data Analytics 2012, Barcelona, Spain, September 2012, pp. 97-102 (pdf file) □ Polarities, Axiallities, and Marketability, DaWaK 2012, Vienna, September 2012 (with P. Fomenky and W. Kurz), LNCS 7448, Springer-Verlag, pp.243-252 (pdf file) □ Information-Theoretical Mining of Determining Sets for Partially Defined Functions, to appear at the Journal for Multiple-Valued Logic and Soft Computing (with Dan Pletea and Rosanne Vetro) (pdf □ Evaluating Bayesian Networks by Sampling with Simplified Assumptions EGC 2012, Bordeaux, Revue des Nouvelles Technologies de l'Information, RNTI, E.23, pp. 11-16 (with Saaid Baraty) □ Several Remarks on the Metric Space of Genetic Codes, International Journal of Data Mining and Bioinformatics, vol. 6, 2012, pp. 17-26 (with D. Weisman) (pdf file) □ Entropic-Genetic Clustering, Revue des Nouvelles Technologies d'Information, Extraction et Gestion des Connaissances, 2011, Brest, France, pp. 71--76 (with M. Breaban and H. Luchian) (pdf file) □ Approximative distance computation by random hashing (with S. Mimaroglu and M. 
Yagci), Journal of Supercomputing, appeared in "On line first" (to appear in print this Fall, (pdf file) □ Entropy quad-trees for high complexity region detection (with R. Vetro and W. Ding), IJSSCI, vol. 3, pp. 16-33, 2011. □ The Impact of Triangular Inequality Violations on Medoid-Based Clustering, Proceedings of ISMIS 2011, (with S. Baraty and C. Zara) Warsaw, Poland, June 2011, Lecture Notes in Artificial Intelligence, LNAI 6804, pp. 280--289, (pdf file) □ Entropies on Bounded Lattices, Proceedings of the 41st International Symposium for Multiple-Valued Logic, Tuusula, Finland, May 2011, pp. 307--312 (pdf file) □ Singular value decomposition is a valid predictor of stroke importance in reading Chinese, (with Wang, H.C., Angele, B., Schotter, E., Yang, J., Pomplun, M. and Rayner, K.) Poster at the 16th European Conference on Eye Movements (ECEM2011), Marseille, France. August, 2011. □ Bernoulli Trials Based Feature Selection for Crater Detections, (with Liu, W. Ding, J. P. Cohen, T. Stepinski) the 19th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Chicago, IL, November, 2011 □ Mining Determining Sets for Partially Defined Functions (with D. Pletea and R. Vetro), Advances in Data Mining, Lecture Notes in Artificial Intelligence LNAI 5633, Springer-Verlag (pdf file) □ Scalable pattern mining with Bayesian networks, Data Mining and Knowledge Discovery, (Springer-Verlag), vol. 18, 2009, pp.56-100 (with S. Jaroszewicz and T. Scheffer) (pdf file) □ Mining Approximative Descriptions of Sets Using Rough Sets (with Selim Mimaroglu), Proceedings of the 39th International Symposium for Multiple-Valued Logic, Okinawa, Japan, May 2009 (pdf file) □ Binary Sequences and Association Graphs for Fast Detection of Sequential Patterns (with S. Mimaroglu), EGC 2009, Strassbourg, January 2009 (pdf file) □ Edge Evaluation in Bayesian Network Structures (with Saaid Baraty), Proceedings of the 8th Australian Data Mining Conference (AusDM 2009), Australian Computer Society and ACM, pp. 193-201 (pdf □ Structural Classification of XML Documents Using Multisets (with S. Iyer), International Journal on Artificial Intelligence Tools, vol. 17, no.5, pp.1003-1022 (pdf file) □ Approximate Computation of Object Distances by Locally-Sensitive Hashing (with S. Mimaroglu), to appear in Proceedings of DBMIN'08, Las Vegas, August 2008 (pdf file) □ Metric-Entropy Pairs on Lattices, Journal of Universal Computer Science (Springer-Verlag), vol. 13, no.11, 2007,pp. 1767-1778 (pdf file) □ Betweenness, Metrics and Entropies in Lattices, Proceedings of ISMVL 2008, Dallas, TX, May 2008; the posted version is a pre-print that will appear in the Journal for Multiple-Valued Logic and Soft Computing (pdf file) □ Detecting Eye Fixations by Projection Clustering (with T. Urruty, S. Lew, N. Ihadaddene) ACM Transactions on Multimedia Computing, Communications and Applications, vol. 3, no.4, December 2007, (pdf file) □ Metric Methods in Data Mining, a chapter in Data Mining Patterns - New Methods and Applications, P. Poncelet. M. Teisseire, F. Masseglia (eds.), Information Science Reference, Hershey, 2007, pp. □ Structure Inference of Bayesian Networks from Data: A New Approach Based on Generalized Conditional Entropy (with Saaid Baraty), Proceedings of ECG 2008, Sophia Antipolis, France, Revue des Nouvelles Technologies et de l'Information, RNTI-E-11, 2008, pp. 
337-342 (pdf file) □ Multisets and Clustering XML Documents (with Swami Iyer) Proceedings of ICTAI, October 2007, Patras, Greece, IEEE CS Press, pp.267-274 (pdf file) □ Clustering and Approximate Identification of Frequent Item Sets (with S. Mimaroglu) Proceedings of FLAIRS 2007, Key West, May 2007, pp. 502-506 (pdf file) □ Clustering by Random Projections (with T. Urruty and C. Djeraba), ICDM 2007, Leipzig (pdf file), Lecture Notes in Artificial Intelligence, no. 4597, pp. 107-119 □ On the Axiomatization of Generalized Entropic Distances, accepted at ISMVL 2007, Oslo, May 2007 (pdf file) An extended version in the Journal of Multivalued Logic and Soft Computing, v. 13, f.4-6,pp.295-320 is (pdf file) □ Model detection for User Behavior in Video Sessions (with Sylvain Mongy and Chabane Djeraba), DMIN, June 2007, Las Vegas (pdf file), Proceedings of DMIN 2007, CSREA Press, pp. 99-103 □ A New Metric Splitting Criterion for Decision Trees(with Szymon Jaroszewicz) (pdf file) Journal of Parallel, Emerging and Distributed Computing, vol.21, no.4, pp. 239-256, 2006. □ On Feature Extraction through Clustering (with Richard Butterworth and Gregory Piatetsky-Shapiro) (pdf file) Proceedings of ICDM 2005, pp. 581--584 Houston, Texas, November 2005. □ Biclustering of Gene Expression Data Based on Local Nearness (with J. Aguilar-Ruiz and Domingo Savio Rodriguez) (pdf file) Proceedings of EGC 2006, Lille, France, January 2006, pp. 681--692. □ On the Ranges of Algebraic Functions in Lattices (with S. Rudeanu) (pdf file) Studia Logica, vol. 84, no.3, pp. 451--483, December 2006. □ Semi-Supervised Incremental Clustering of Categorical Data (with N. Singla), Proceedings of EGC 2005, Paris, France, pp. 189-200. □ An Abstract Axiomatization of the Notion of Entropy (with Ivo Rosenberg), Proceedings of ISMVL, May 2005, Calgary, Canada (pdf file) . □ Metric Incremental Clustering of Nominal Data (with N. Singla and M. Kuperberg), Proceedings of ICDM 2004, Brigton, UK, pp. 523-527 (pdf file) □ Interestingness of Frequent Itemsets Using Bayesian Networks as Background Knowledge (with S. Jaroszewicz), Proceedings of KDD 2004, Seattle, pp. 178--186. (pdf file) □ A Greedy Algorithm for Supervised Discretization (with R. Butterworth, D. S. Santos and Lucila Ohno-Machado), Journal of Biomedical Informatics, vol. 37(4), pp. 285--292. (pdf file) □ Measures on Boolean polynomials and their applications in data mining (with S. Jaroszewicz and I. G. Rosenberg), Applied Discrete Mathematics, volume on Discrete Mathematics and Data Mining, vol. 144,1, pp. 123--139 (pdf file) □ A Metric Approach to Building Decision Trees Based on Goodman-Kruskal Association Index (with S. Jaroszewicz), PAKDD 2004, Sydney, Australia, May 2004, LNAI 3056, Springer-Verlag, pp. 181--190 (pdf file) □ A Graph-Theoretical Approach to Boolean Interpolation of Non-Boolean Functions (with S. Rudeanu), Proceedings of the 34th International Symposium for Multiple-Valued Logic, Toronto, May 2004, published by IEEE Computer Society, pp. 245--250 (pdf file) □ Evolutionary Strategy for Learning Multiple-Valued Logic Functions (with A. Ngom and I. Stojmenovic), Proceedings of the 34th International Symposium for Multiple-Valued Logic, Toronto, May 2004, published by IEEE Computer Society, pp. 154--160 □ A Metric Approach to Supervised Discretization (with R. Butterworth), EGC 2004, Clermont-Ferrand, France, January 2004, Revue des Nouvelles Technologies de l'Information, RNTI-E-2, vol. 1, pp. 
□ The Goodman-Kruskal Coefficient and Its Applications in the Genetic Diagnosis of Cancer (with S. Jaroszewicz, W. Kuo and L. Ohno-Machado), IEEE Transactions on Biomedical Engineering, vol. 51, no. 7, pp. 1095--1102, July 2004. (pdf file) □ Generating an Informative Cover for Association Rules (with L. Cristofor), Proceedings of the 2002 IEEE International Conference on Data Mining, pp. 597-600 (pdf file) □ Approximation of Non-Boolean Functions by Boolean Functions and Applications in Non-standard Computing, in Proceedings of the 2002 International Symposium on New Paradigm Computing, December 2002, Sendai, Japan, pp. 27--31 (invited talk) (pdf file) □ Several Remarks on Non-Boolean Functions over Boolean Algebras, Proc. of the International Symposium for Multiple-Valued Logic, Meiji University, Tokyo, May 2003, pp. 163--168 (pdf file) □ An Algebraic Approach to Entropy in Beyond Two: Theory and Applications of Multiple-Valued Logic, M. Fitting and E. Orlowska (editors), Springer-Verlag, Heidelberg, New York, 2003, pp. 101-115. □ Generalized Entropy and Decision Trees, EGC 2003 - Journees francophones d'Extraction et de Gestion de Connaissances, January 2003, Lyon, France (with S. Jaroszewicz), pp. 369-380 (ps file) (pdf □ Support Approximations using Bonferroni-Type Inequalities (with S. Jaroszewicz), Principles of Data Mining and Knowledge Discovery, PKDD 2002, Helsinki, August 2002, Lecture Notes in Artificial Intelligence, vol. 2431, pp. 212--224, Springer Verlag, Berlin, 2002. (ps file) (pdf file) □ Generating Informative Cover Rules (with Laurentiu Cristofor), International Conference on Data Mining, Maebashi, Japan, December 2002. (ps file) (pdf file) □ Finding Median Partitions Using Information-Theoretical Algorithms (with D. Cristofor), Journal of Universal Computer Science, vol 8, no.2, 153--172. (ps file) (pdf file) □ An Inclusion-Exclusion Result for Boolean Polynomials and Its Applications in Data Mining (with S. Jaroszewicz and I. Rosenberg), Proceedings of the Discrete Mathematics and Data Mining Workshop, Washington, April, 2002 (SIAM DM Meeting), pp. 165-173. (ps file) (pdf file) □ An Information-Theoretical Approach to Clustering Categorical Databases Using Genetic Algorithms (with Dana Cristofor), Proceedings of the Workshop on Clustering High-Dimensional Data and Its Applications, Washington, April, 2002 (SIAM DM Meeting), pp. 37-46. (ps file) (pdf file) □ On Functions Defined on Free Boolean Algebras (with I. Rosenberg and S. Jaroszewicz), Proceedings of the ISMVL 2002, Boston, Massachusetts, IEEE Computer Society, Los Alamitos, California, pp. 192--201. (ps file) (pdf file) □ Mining for Purity Dependencies in Relational Databases (with L. Cristofor and D. Cristofor), EGC 2000, Montpellier, January 19-23 (ps file) (pdf file) (best paper award received from AFIA (The French Association for Artificial Intelligence). □ An Axiomatization of Partition Entropy (with S. Jaroszewicz) Transactions on Information Theory, July 2002, vol. 48 (7), pp. 2138--2142 (a preliminary form appeared in the Proceedings of the 31st ISMVL,.Warsaw, Poland, May 2001, pp. 259-266). □ Impurity Measures in Databases (with L. Cristofor and D. Cristofor), Acta Informatica, 38 (2002), pp. 307-324. □ Prunning Redundant Association Rules Using Maximum Entropy Principle (with S. Jaroszewicz), Proceedings of PAKDD, Taipei, May 2002, Lecture Notes in Artificial Intelligence, vol. 2336, Springer Verlag, pp. 135--147. (ps file) (pdf file) □ A General Measure of Rule Interestingness (with S. 
Jaroszewicz) in Principles of Data Mining and Knowledge Discovery, the 5th European Conference, PKDD 2001, Freiburg, September 2001, Lecture Notes in Artificial Intelligence, vol. 2168, Springer-Verlag, pp. 253-266. □ Mining Association Rules in Entity-Relationship Modeled Databases (with Laurentiu Cristofor), Technical Report, UMB, TR 2001-1 (pdf file) □ An Information-Theoretical Approach to Genetic Algorithms for Clustering (with Dana Cristofor), Technical Report, UMB, TR 2001-2 □ Generalized Entropy and Projection Clustering of Categorial Data (with D. Cristofor, L. Cristofor) in Principles of Data Mining and Knowledge Discovery, the 4th European Conference, PKDD 2000, Lyon, Lecture Notes in Artificial Intelligence, 1910, Springer-Verlag, pp. 619--625 □ Impurity Measures and Applications to Classification and Clustering, (with Dana and Laurentiu Cristofor) presented at the Int. Conf. on Advances in Infrastructure for Electronic Bussiness, Science, and Education, Scuola Superiore G.R. Romoli (Telecom -- Italia), Aquila, Italy, August 2000 □ Data Mining of Weak Functional Decompositions (with S. Jarosiewicz) in the Proceedings of the 30th International Symposium for Multiple-Valued Logic, Portland, Oregon, pp. 77-82 □ On Information-Theoretical Aspects of Relational Databases (with S. Jarosewicz), in Finite vs. Infinite, Springer-Verlag, pp. 301--322 □ Galois Connections and Data Mining, (with L. Cristofor and D. Cristofor), Journal of Universal Computer Sciene, Springer Verlag, vol.6, no.1, pp. 60-74 □ Boolean Completeness in Two-valued Set Logic, (with I. Stojmenovic and R. Tosic) Multi. Val. Logic, 2000, vol. 5, pp. 267--280 □ On Axiomatization of Conditional Entropy of Functions between Finite Sets, (with S. Jarosiewicz) Proc.of the 29th ISMVL, Freiburg, Germany, pp. 24--31 □ Automatic Data Restructuring (with S. Ginsburg and Nan Shu) Journal of Universal Computer Sciene, vol. 5, no, 4, pp. 243-286 □ Learning with Permutably Homogeneous Perceptrons, (with A. Ngom, I. Stojenovic, C. Reischer) Proc.of the 28th ISMVL, Fukuoka, Japan, pp. 161--167 □ Functional Entropy and Decision Trees, (with V. Shmerko, V. Cheushev, S. Yanushkiewicz) Proc.of the 28th ISMVL, Fukuoka, Japan, pp. 257--264 □ Completeness Criteria in Set-Valued Logic Under Composition with Union and Intersection, Proceedings of the 27th International Symposium for Multiple-Valued Logic, May 1997, pp. 75--82. □ A Characterization of the Information Content of a Classification (with K. Baclawski), Information Processing Letters, vol 57 (1996), pp. 211--214. □ Several Remarks on the Complexity of Set-Valued Switching Functions, (with C. Reisher) Proceedings of the 26th International Symposium for Multiple-Valued Logic, Santiago de Compostela, Spain, □ A Categorial Approach to Database Semantics, (with K. Baclawski and W. White) Math. Struct. in Computer Science (1994), v. 4, pp. 
Recent Talks □ Data Mining of Medical Data: Opportunities and Challenges (Potsdam, August, 2012) (pdf file) □ The Vapnik-Chervonenkis Dimension and Learnability (full version), Siemens Doctoral Summer School at the University of Iasi, Romania, June, 2012 (pdf file) □ Linear Methods in Data Mining, Siemens Doctoral Summer School at the University of Iasi, Romania, June 20, 2009 (pdf file) □ Hereditary Families of Sets in Data Mining, University of Bucharest, Romania, June 25, 2009 (pdf file) □ Metric Methods in Data Mining, IDA 2006, Iasi, Romania, June 16, 2006, (pdf file) □ Metric Methods in Mining, Dana Farber Cancer Institute, Boston, February 27, 2004, (pdf file) □ Wavelets and Applications (MIT, April 27, 2004) (pdf file) □ Research Directions in Data Mining (in Romanian, October 2004, Universities of Bucharest and Iasi, Romania) (pdf file) □ Metrics on Partitions of Finite Sets and Data Mining Applications (UMB, May 11, 2005) (pdf file) □ An Abstract Axiomatization of the Notion of Entropy (Calgary, May 19, 2005) (pdf file) □ Efficient Computing Through Random Algorithms (Doctoral Summer School, June 2013, Iasi, Romania) (pdf file) □ Multivalued and Binary Ultrametrics and Clusterings (Doctoral Summer School, June 2014, Iasi, Romania) (pdf file) □ Clustering Axiomatization (Doctoral Summer School, June 2019, Iasi, Romania) (pdf file) CS620 - THEORY of COMPUTATION - SLIDES and HANDOUTS CS 620-2024 SLIDES CS 620 HOMEWORKS □ Homework 1 (pdf file) □ Homework 2 (pdf file) □ Homework 3 (pdf file) CS 620 HANDOUTS and PROBLEM SESSIONS □ Problem Session 1 (pdf file) □ Problem Session 2 (pdf file) □ Problem Session 3 (pdf file) □ Problem Session 4 (pdf file) □ Problem Session 5 (pdf file) □ Problem Session 6 (pdf file) □ Problem Session 7 (pdf file) □ Problem Session 8 (pdf file) □ Problem Session 9 (pdf file) His 4-Legged Friends: If you want a friend ... buy a dog (Harry Truman) Franz (RIP) and Max (RIP) Mrs. Barry
{"url":"https://www.cs.umb.edu/~dsim/","timestamp":"2024-11-05T20:08:16Z","content_type":"text/html","content_length":"36716","record_id":"<urn:uuid:ba203dcb-d1c4-406b-80ee-c9aab38c2331>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00538.warc.gz"}
Multivariate metamodelling is a way to make simplified models of mechanistic models that can be run faster and are more interpretable. This opens up a set of possibilities for how to use already existing mechanistic models to optimize processes and improve understanding. Our Multivariate Metamodelling methods allow our customers to:
• Understand and verify underlying relationships in the mechanistic model
• Speed up mechanistic models to enable online use
• Invert the model to estimate internal states and parameters
• Combine metamodel outputs with empirical measurements for a hybrid modelling approach that combines "the best of both worlds" and models the process as it really is, in light of prior knowledge.
• Find neutral parameter sets for optimization of secondary KPIs while keeping the main KPI constant

Example applications
• Simpler real-time use of mechanistic models in metallurgy: Speed-up of FEM of electrical conditions in a FeMn furnace
• Seeing new output patterns: The repertoire of a spatiotemporal PDE pattern-generating model of animal skin expanded and inspected
• Advanced imaging of cancer cells: Spectral modelling to separate light scattering from light absorbances in synchrotron FTIR microscopy
• Comparison of competing model alternatives: Overlap and uniqueness of two spatiotemporal models of heart muscle elasticity
• Flexible model selection and fast fitting to data: Summarizing 40 different models of growth curves into the «meta-modelome of line curvature»

Mechanistic models
Models describing expected behavior. Mechanistic, or theory-driven, models are widely used to describe the behavior of a system based on known theory. These mathematical models could be for furnaces in the metallurgy industry, wind turbines, the spread of infectious diseases, or the cardiovascular system. Examples: Finite element models of heat diffusion or electrical fields, differential equations of reaction kinetics, or computational fluid dynamics models of turbulence in gases or liquids. A good mechanistic model describes essential properties and behaviors, according to the laws of physics within the selected process design. Such a model is a valuable source of prior knowledge, especially about how the system will behave under conditions where you don't even WANT to have observational data, for example situations that cause harm to equipment or personnel.

Mechanistic models are not perfect, but they represent valuable knowledge. Mechanistic mathematical models often encapsulate deep prior knowledge of domain experts. However, mechanistic models may be slightly oversimplified – they often do not take all possible interactions between the parameters into account. Still, they do provide valuable insight about how the laws of physics determine the input-output relationships of the system, and how the system was intended to behave. One other issue is that many of the coefficients that are built into mechanistic/first-principle models are estimated from certain experiments that might not have general applicability, but represent a "best estimate". This might introduce some bias in the models, so that they cannot be extrapolated uncritically to other similar but not identical applications.

Old models put to rest. Mechanistic models are often used for the design of processes and assets. But today, they are seldom used in the operations phase. One reason is that they are often slow to compute and difficult to fit to streams of process measurements.
Another reason is that a model is no longer trusted – maybe there were discrepancies between the planned "design" in the model and what was actually "built", or perhaps the model has not been updated with later furnace modernizations. Raw materials and operating conditions also change over time, reducing the fit with the original mechanistic model. So, theory-driven mechanistic models might not be perfect. But they can be supplemented by data-driven models based on actual process measurements. This is called "hybrid modelling": it combines mechanistic and data-driven modelling, using the advantages of each and avoiding the disadvantages of each.

Multivariate Metamodelling
Metamodelling makes hybrid modelling faster and easier. A multivariate metamodel is a simplified description of the output behavior of a mechanistic model under different input conditions. When building a metamodel, we need to describe the mechanistic model's behavior. This means running the mechanistic model repeatedly under different conditions, making sure that the relevant range of conditions is included. The model inputs (various combinations of relevant system design descriptions, parameter values, initial conditions and computational controls) are chosen so that the model outputs will be representative of the use case of interest as well as of unwanted, but possible, deviations from it. This set of computer simulations with the mechanistic model needs to be done only once. In Idletechs we do this through our highly efficient experimental designs, to make the most cost-effective computer simulations of parameter combinations, spanning the whole relevant range of behavior with respect to the important statistical properties. Read more in Design of Experiments.

The input-output relationship in the simulated data from the mechanistic model is described with the same methods that are used for modelling the relationship between empirical measurements of input and output. These so-called subspace regression methods, extensions of methods originally developed in the field of chemometrics (ref. H. Martens & T. Næs (1989) Multivariate Calibration. J. Wiley & Sons Ltd., Chichester, UK), are fast to compute, give good input-output prediction models and are designed to give users a graphical overview and provide insight. The laws of physics as implemented in the mechanistic model still apply in the obtained metamodels of the mechanistic model's behavior. Only now the computations of outputs from inputs can run thousands of times faster, and without risk of local minima or lack of convergence.

Different ways to use multivariate metamodelling. Multivariate metamodelling can be used for a range of applications:
1. Understand and verify underlying relationships in the mechanistic model
2. Speed up mechanistic models to enable online use
3. Invert the model to estimate internal states
4. Combine metamodel outputs with empirical measurements for a hybrid modelling approach that combines "the best of both worlds" and models the process as it really is.
5. Find so-called "neutral parameter sets" for optimization of secondary KPIs while keeping the main KPI constant

Understand and verify underlying relationships
Most mechanistic models define how a system's outputs depend on its inputs according to theory: \(Outputs \approx F(Inputs)\) The metamodel of such a theoretical, mechanistic mathematical model is a simpler statistical approximation model: \(Outputs \approx f(Inputs)\) A metamodel in this causal direction is guaranteed to give good descriptions of the mechanistic model's outputs from its combinations of inputs. Such a metamodel gives quantitative predictions and graphical insight into the most critical elements and stages hidden inside the mechanistic model itself. The user gets information at three levels:
• THAT the Inputs to a system determine the Outputs: Fast theoretical predictions of critical properties that are either unobservable but essential, or particularly suited for actual measurement. Example: The known operating conditions of a furnace (e.g. electrode control, input current, raw material charge and cooling rate) predict the inner, hidden position of the electrode tip, or the outer surface temperature and electromagnetic field.
• HOW the Inputs to a system are combined to predict the Outputs under different operating conditions: Global and Local Sensitivity Analysis. Example: How certain combinations of operating conditions of the furnace should be able to predict its inner or outer properties.
• WHY different combinations of Inputs affect different combinations of Outputs the way they do: Revealing the most important patterns of covariation in the system. Example: Why a combination of electrode control, input current and raw material charge seems to affect a combination of electrode tip position and outer electromagnetic field, while a different combination of input current and cooling capacity seems to affect the surface temperature distribution.
In other words: The many input/output variations form distinct patterns of causalities that can be monitored in real time, in particular if the model is fitted fast enough to relevant measurements.

Speed up models
Many mechanistic mathematical models are highly informative, but too slow for real-time updating. Examples: Nonlinear spatiotemporal dynamics of e.g. metallurgical reactions, or heating and cooling processes. A metamodel of such a slow mechanistic model may be developed to mimic its input/output behavior and make the computations much faster and more understandable. Establishing a metamodel of a mechanistic model requires some computer simulation work (mostly automatic) and some multivariate data analysis (requires our expertise). But this work is done once and for all. Once established, our metamodels run very fast, due to their mathematical form (low-dimensional bilinear subspace regressions). Moreover, each time the original non-linear model is applied, it may give computational problems, such as local minima and lack of convergence. In contrast, its bilinear metamodel does not suffer in this way, for those problems were already dealt with during the metamodel development.
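As a rough illustration of this fit-once, predict-fast idea, the sketch below builds a surrogate of a toy simulator with ordinary PLS regression. It is only a minimal example under assumptions: run_mechanistic_model is a made-up placeholder for a slow simulator, and scikit-learn's PLSRegression merely stands in for the bilinear subspace-regression methods mentioned above; it is not Idletechs' actual implementation.

# Minimal sketch: simulate a designed set of inputs once, then fit a fast surrogate.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def run_mechanistic_model(x):
    # Placeholder for an expensive simulation mapping inputs to an output profile
    return np.array([np.sin(x[0]) + 0.5 * x[1], x[0] * x[1], np.cos(x[1])])

rng = np.random.default_rng(0)
X_design = rng.uniform(-1.0, 1.0, size=(200, 2))                 # designed input combinations
Y_sim = np.array([run_mechanistic_model(x) for x in X_design])   # one-off simulation campaign

metamodel = PLSRegression(n_components=2).fit(X_design, Y_sim)   # the metamodel itself

# After this single fitting step, predictions are essentially instantaneous:
y_fast = metamodel.predict(np.array([[0.3, -0.2]]))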
Estimate internal states
The Outputs predicted from a metamodel, e.g. the surface temperature distribution of an engine, may be compared to actually measured profiles of these Outputs, e.g. from continuous thermal video monitoring of that engine. This allows us to infer the unknown causal Inputs – inner states and parameter values like unwanted changes in the inner combustion, heat conductivity or cooling of the engine. One way to attain this is to search for already computed Output combinations that resemble the measured Output profile, starting with the previous simulation results. But an even faster approach is also possible. In many cases it may be even more fruitful to invert the modelling direction, compared to the causal direction \(Outputs \approx f(Inputs)\): \(Inputs \approx g(Outputs)\) That can give an exceptionally fast way to predict unknown input conditions and inner process states from real-time output measurements. In addition, the metamodel will automatically give a warning of abnormality if it detects discrepancies between the predicted and measured temperature distribution of the engine surface. This facility is very sensitive, since it ignores patterns of variation that have already been determined to be OK.

Adapt and expand to real world
Combine metamodel outputs with empirical measurements for a hybrid modelling approach that combines "the best of both worlds" and models the process as it really is. Improving the process knowledge and also the mechanistic model: it is generally a good idea to bring background knowledge into the interpretation of process measurements. Even though the boundary conditions may have changed, the laws of physics are the same.

Improving your old mechanistic model. The so-called subspace regression models also provide automatic outlier detection if and when new, unexpected process variation patterns are seen to develop in the measurements. These new pattern developments are used for generating warnings to the operators. Moreover, process properties that have changed are revealed and corrected in terms of model parameter settings that appear to change compared to what you expected. Thereby the combined meta-/data-modelling process helps you update the parameter values in your old mechanistic model. But in addition, your old mechanistic model may not only be updated, but also expanded in this process: unexpected, new variation patterns in the hybrid can be analyzed in more depth, in terms of mechanistic mathematical equations, e.g. differential equations relating the inner states, known or unknown, of your process. These model elements may be grafted onto the original mechanistic model, just like a new branch may be grafted onto the trunk of an old fruit tree. Thereby, your old mechanistic model becomes fully adapted to more modern times.
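The inverted direction \(Inputs \approx g(Outputs)\) and the abnormality warning described above can be illustrated with the same toy setup as the earlier sketch (reusing X_design, Y_sim and metamodel from it). Again, this is only an assumed, simplified stand-in, not the actual Idletechs method, and the warning threshold is purely illustrative.

# Sketch of inverting the modelling direction and flagging unexplained measurements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

inverse_model = PLSRegression(n_components=2).fit(Y_sim, X_design)  # fit g: Outputs -> Inputs once

y_measured = np.array([[0.8, -0.05, 0.95]])       # e.g. a measured output profile
x_estimated = inverse_model.predict(y_measured)   # fast estimate of hidden inputs / inner states

# Warn if the measurement is poorly explained by what the metamodel has seen before:
y_reconstructed = metamodel.predict(x_estimated)
if np.linalg.norm(y_measured - y_reconstructed) > 0.5:   # illustrative threshold
    print("Abnormality warning: measurement not well explained by the metamodel")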
Some mechanistic models display an apparent weakness: several different combinations of Input conditions can give more or less the same Output profile: \(F(Inputs_1) \approx F(Inputs_2) \approx \dots \approx F(Inputs_N) = Outputs\) This means that the mechanistic model is mathematically ambiguous, in the sense that different input combinations may cause the system to behave in the same way. Such a collection of parameter combinations with more or less equal effects on the system is called a "neutral parameter set". And a mechanistic model with neutral parameter sets is called "mathematically sloppy". However, in an industrial setting, this apparent weakness may be turned into a great advantage. Assume that this ambiguity is a property of the physical system itself (e.g. a furnace or an engine), and not just due to an error in its mechanistic model. Assume also that there are multiple key performance indicators (KPIs) for a given system. Might there be ways to optimize one key KPI, KPI[Key], without sacrificing the other KPIs, KPI[Other]? Then, for a process where a good mechanistic model shows such an ambiguity, its metamodel may list a range of different input combinations that may change KPI[Key] while not affecting KPI[Other]. Seeing this type of ambiguity may allow you to find ways to optimize the Input conditions with respect to KPI[Key] without affecting the other desired process Outputs significantly. For instance, by studying the sloppiness of the mechanistic model (its many-inputs-to-one-output behavior, e.g. discovered via its metamodel) you may find new ways to run an engine that reduce CO2 emissions or fuel cost without sacrificing engine power. Or new and less risky ways to control the electrode positioning in a furnace without affecting its productivity. Idletechs helps you to combine valuable information from both the mechanistic model (via its metamodel) and from modern measurements (e.g. thermal cameras, etc.), in a way that is understandable for operators and experts alike.

Design of Experiments
One of the essential tools in meta- and hybrid-modelling is Design of Experiments (DoE). Proper use of DoE ensures that the parameter space of the mechanistic and simulation-based models is described with a minimum number of combinations of the parameters of interest, i.e. the best subset of experimental runs. The traditional analysis of results from DoE, ANalysis Of VAriance (ANOVA), yields statistical inference w.r.t. the importance of the parameters and helps to distinguish real effects from noise. The multivariate subspace models give detailed insight into the relationship between samples and variables from experimental designs, specifically in situations with multiple outputs. Yet another important aspect in this context is that one can a priori estimate the danger of overlooking real effects by means of power estimation. No experiments should be performed before there exists an estimate of the uncertainty in the outputs of the model. Furthermore, DoE will also clarify possible interaction effects which initially may not have been considered in the mechanistic models. Derived input and output parameters might be added by so-called feature engineering: adding transforms of the initial parameters based on domain-specific knowledge and theory. Temperature, for example, rarely affects a process in a linear way. The modern optimal designs also offer the definition of constraints known in the system under observation, e.g. that some combinations of parameters will give a fatal outcome for the process. After the initial model has been established, the model can be numerically and graphically optimized, which is the basis for one or more confirmation runs for verification.
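As a concrete but purely illustrative example of a space-filling design for the simulation runs, the sketch below draws a Latin hypercube sample with SciPy and scales it to physically meaningful parameter ranges. The parameter names and ranges are invented for illustration, and Latin hypercube sampling is just one common choice; it is not necessarily the design method referred to above.

# Illustrative space-filling design for choosing simulation runs (assumed example only).
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=42)   # three input parameters
unit_design = sampler.random(n=50)           # 50 runs in the unit cube [0, 1]^3

# Scale to hypothetical physical ranges, e.g. current, charge rate, cooling capacity
lower = np.array([10.0, 0.5, 1.0])
upper = np.array([50.0, 2.0, 5.0])
design = qmc.scale(unit_design, lower, upper)

# Each row of `design` is one input combination to feed to the mechanistic model.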
{"url":"https://idletechs.com/metamodelling/","timestamp":"2024-11-07T04:02:28Z","content_type":"text/html","content_length":"47695","record_id":"<urn:uuid:70eab393-076f-40f5-8a55-88661ca099d4>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00800.warc.gz"}
Math Forum :: View topic – Cauchy's Rigidity Theorem - www.mathdb.org

Does anyone have a proof of Cauchy's rigidity theorem? If possible, I want to have an electronic version of it. Besides, I am interested in knowing if it has any generalization to higher dimensions. Thanks for attention!
{"url":"https://www.mathdb.org/phpbb2/viewtopicphpt490amp/","timestamp":"2024-11-01T22:38:17Z","content_type":"text/html","content_length":"27324","record_id":"<urn:uuid:d46f412c-361a-4d3d-9692-1986322f49a7>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00061.warc.gz"}
passgen Configuration File
Section: File Formats (5)

passgen.conf - The configuration file for the CryptNET password candidate generator.

The passgen.conf configuration file is read when the passgen program is executed, if it exists. Usually, it is stored in the /etc directory, but it could be stored elsewhere. If the file is stored in a location other than the default, that location must be explicitly defined at compile time. All of the options that passgen accepts on the command line may have default values set for them in the configuration file.

In the configuration file, lines beginning with a '#' character are ignored as comments. Blank lines are ignored. The configuration options are space delimited in a "key value" format, where the key is the variable name in the source code and the value is what the program will set that variable to. Restrictions on length or format are specified as a comment directly above the variable definition line. The configuration file is distributed with all values set to the program defaults and commented out. To change a value, the line defining the variable in the configuration file should be uncommented and an acceptable alternative value should be set.

The following variables can be set in the configuration file:
Specifies the length of the passwords to be generated.
Specifies the number of passwords to be generated.
Use the non-blocking /dev/urandom device to speed password candidate generation. (0 = No, 1 = Yes)
Exclude the space character from generated password candidates. (0 = No, 1 = Yes)
Generate password candidates consisting of the following: 0 - All printable characters. 1 - Alphanumeric characters only. 2 - Alphabetic characters only. 3 - Numeric characters only.
For alphabetic characters, force the characters to a specific case. 0 - Do not force case (mixed case). 1 - Force all alphabetic characters to lowercase. 2 - Force all alphabetic characters to uppercase.
The level to which homoglyphs (similar-looking characters) should be suppressed. 0 - No homoglyph suppression. 1 - Suppress font homoglyphs such as zero, capital 'o' (Oscar), one, and lowercase 'l' (Lima). 2 - Suppress font homoglyphs and potential character homoglyphs such as backtick, apostrophe, double quote, space, and underbar.

passgen(1) passwd(1)

There are no known bugs.
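A short sketch of what a passgen.conf file might look like, following the layout described above, is shown below. Note that the extracted man page text above does not preserve the actual variable names, so the key names here are hypothetical placeholders rather than passgen's real option names; only the '#' comment convention, the "key value" layout, and the commented-out defaults follow the description.

# Hypothetical passgen.conf sketch -- key names are placeholders, not passgen's real variables.
# Values are commented out, as in the distributed default file.
#password_length 16     # length of each generated password candidate
#password_count 10      # number of password candidates to generate
#use_urandom 1          # 1 = use the non-blocking /dev/urandom device
#exclude_space 1        # 1 = exclude the space character
#charset 1              # 0 printable, 1 alphanumeric, 2 alphabetic, 3 numeric
#force_case 0           # 0 mixed case, 1 lowercase, 2 uppercase
#homoglyph_level 2      # 0 none, 1 font homoglyphs, 2 font + character homoglyphs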
{"url":"https://cryptnet.net/fsp/passgen/passgen.conf.5.html","timestamp":"2024-11-03T05:46:15Z","content_type":"text/html","content_length":"4373","record_id":"<urn:uuid:f0d059a0-34de-4f88-900e-291ffe4255f3>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00266.warc.gz"}
NumPy Tutorial for Beginners In this post, we go over the basics of using NumPy, a powerful library for Python that allows for more advanced data manipulation and mathematics. What Is NumPy? NumPy is a powerful Python library that is primarily used for performing computations on multidimensional arrays. The word NumPy has been derived from two words — Numerical Python. NumPy provides a large set of library functions and operations that help programmers in easily performing numerical computations. These kinds of numerical computations are widely used in tasks like: • Machine Learning Models: while writing Machine Learning algorithms, one is supposed to perform various numerical computations on matrices. For instance, matrix multiplication, transposition, addition, etc. NumPy provides an excellent library for easy (in terms of writing code) and fast (in terms of speed) computations. NumPy arrays are used to store both the training data as well as the parameters of the Machine Learning models. • Image Processing and Computer Graphics: Images in the computer are represented as multidimensional arrays of numbers. NumPy becomes the most natural choice for the same. NumPy, in fact, provides some excellent library functions for fast manipulation of images. Some examples are mirroring an image, rotating an image by a certain angle, etc. • Mathematical tasks: NumPy is quite useful to perform various mathematical tasks like numerical integration, differentiation, interpolation, extrapolation, and many others. As such, it forms a quick Python-based replacement of MATLAB when it comes to Mathematical tasks. You might also enjoy: Tutorial How to write MatLab functions in Python NumPy Installation The fastest and the easiest way to install NumPy on your machine is to use the following command on the shell: pip install numpy. This will install the latest/most stable version of NumPy on your machine. Installing through pip is the simplest way to install any Python package. Let us now talk about the most important concept in NumPy, a NumPy array. Arrays in NumPy The most important data structure that NumPy provides is a powerful object called a NumPy array. A NumPy array is an extension of a usual Python array. NumPy arrays are equipped with a large number of functions and operators that help in quickly writing high-performance code for various types of computations that we discussed above. Let us see how we can quickly define a one-dimensional NumPy import numpy as np my_array = np.array([1, 2, 3, 4, 5]) print my_array In the above simple example, we first imported the NumPy library using import numpy as np. Then, we created a simple NumPy array of 5 integers and then we printed it. Go ahead and try it on your own machine. Use the steps under the “NumPy Installation” section to make sure that you have installed NumPy in your machine. Let us now see what all we can do with this particular NumPy array. print my_array.shape This will print the shape of the array that we created - (5, ). This indicates that my_array is an array with 5 elements. We can print the individual elements as well. Just like a normal Python array, NumPy arrays are indexed from 0. print my_array[0] print my_array[1] The above commands will print 1 and 2 respectively on the terminal. We can also modify the elements of a NumPy array. For instance, suppose we write the following 2 commands: my_array[0] = -1 print my_array We will get [-1, 2, 3, 4, 5] on the screen as output. 
Now suppose, we want to create a NumPy array of length 5 but with all elements as 0, can we do it? Yes. NumPy provides an easy way to do the same. my_new_array = np.zeros((5)) print my_new_array We will get [0., 0., 0., 0., 0.] as the output. Similar to np.zeros, we also have np.ones. What if we want to create an array of random values? my_random_array = np.random.random((5)) print my_random_array The output we will get will look something like [0.22051844 0.35278286 0.11342404 0.79671772 0.62263151]. The output that you got may vary since we are using a random function which assigns each element a random value between 0 and 1. Let us now see how we can create 2-dimensional arrays using NumPy. my_2d_array = np.zeros((2, 3)) print my_2d_array This will print the following on the screen: [[0. 0. 0.] [0. 0. 0.]] Guess what the output would be for the following code: my_2d_array_new = np.ones((2, 4)) print my_2d_array_new Here it is: [[1. 1. 1. 1.] [1. 1. 1. 1.]] Basically, when you use the function np.zeros() or np.ones(), you can specify the tuple that talks about the size of the array. In the above two examples, we used the following tuples, (2, 3) and (2, 4) to indicate 2 rows with 3 and 4 columns respectively. Multidimensional arrays like the ones above can be indexed using my_array[i][j] notation where i indicates the row number and j indicates the column number. i and j both start from 0. my_array = np.array([[4, 5], [6, 1]]) print my_array[0][1] The output of the above code snippet is 5, since it is the element present in the index 0 row and index 1 column. You can also print the shape of my_array as follows: print my_array.shape The output is (2, 2), indicating that there are 2 rows and 2 columns in the array. NumPy provides a powerful way to extract rows/columns of a multidimensional array. For instance, consider the example of my_array that we defined above. [[4 5] [6 1]] Suppose, we want to extract all elements of the second column (index 1) from it. Here, as can be seen, the second column is comprised of two elements: 5 and 1. To do so, we can do the following: my_array_column_2 = my_array[:, 1] print my_array_column_2 Observe that, instead of a row number, we have provided a colon (:) and for the column number we have used the value 1. The output will be: [5 1]. We can similarly extract a row from a multidimensional NumPy array. Now, let us see the power that NumPy provides when it comes to performing computations on several arrays. Array Manipulations in NumPy Using NumPy, you can easily perform mathematics on arrays. For instance, you can add NumPy arrays, you can subtract them, you can multiply them, and even divide them. Here are a few examples of this: import numpy as np a = np.array([[1.0, 2.0], [3.0, 4.0]]) b = np.array([[5.0, 6.0], [7.0, 8.0]]) sum = a + b difference = a - b product = a * b quotient = a / b print "Sum = \n", sum print "Difference = \n", difference print "Product = \n", product print "Quotient = \n", quotient The output will be as follows: Sum = [[ 6. 8.] [10. 12.]] Difference = [[-4. -4.] [-4. -4.]] Product = [[ 5. 12.] [21. 32.]] Quotient = [[0.2 0.33333333] [0.42857143 0.5 ]] As you can see, the multiplication operator performs element-wise multiplication instead of matrix multiplication. To perform matrix multiplication, you can do the following: matrix_product = a.dot(b) print "Matrix Product = ", matrix_product The output would be: [[19. 22.] [43. 50.]] As you can see, NumPy is really powerful in terms of the library functions that it provides.
You can perform large calculations in a single line of code with the excellent interface that NumPy exposes. This is what makes it an elegant tool for various numerical computations. You should definitely consider mastering it if you wish to develop a career as a mathematician or as a data scientist. You need to know Python before getting proficient in NumPy.

Further Reading
Neural Network Using Python and Numpy
Learn NumPy Arrays With Examples
Build up a Neural Network with python
Deep Learning Prerequisites: The Numpy Stack in Python

Originally published by Vijay Singh at dzone.com
{"url":"https://morioh.com/a/7a908fe46fb8/numpy-tutorial-for-beginners","timestamp":"2024-11-04T02:19:42Z","content_type":"text/html","content_length":"96167","record_id":"<urn:uuid:f8f31374-e177-488d-86a8-250b5ce6c70d>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00283.warc.gz"}
SABC: Simulated Annealing approach to Approximate Bayesian Computation, in EasyABC: Efficient Approximate Bayesian Computation Sampling Schemes

Algorithms for the Simulated Annealing approach to Approximate Bayesian Computation (SABC).

Usage:

SABC(r.model, r.prior, d.prior, n.sample, eps.init, iter.max,
     v=ifelse(method=="informative",0.4,1.2), beta=0.8, delta=0.1,
     resample=5*n.sample, verbose=n.sample, method="noninformative",
     adaptjump=TRUE, summarystats=FALSE, y=NULL, f.summarystats=NULL, ...)

Arguments:

r.model: Function that returns either a random sample from the likelihood or a scalar distance between such a sample and the data. The first argument must be the parameter vector.
r.prior: Function that returns a random sample from the prior.
d.prior: Function that returns the density of the prior distribution.
n.sample: Size of the ensemble.
eps.init: Initial tolerance or temperature.
iter.max: Total number of simulations from the likelihood.
v: Tuning parameter that governs the annealing speed. Defaults to 1.2 for the noninformative algorithm and 0.4 for the informative algorithm.
beta: Tuning parameter that governs the mixing in parameter space. Defaults to 0.8.
delta: Tuning parameter for the resampling steps. Defaults to 0.1.
resample: Number of accepted particle updates after which a resampling step is performed. Defaults to 5*n.sample.
verbose: Shows the iteration progress each verbose simulations from the likelihood. NULL for no output. Defaults to verbose = n.sample.
adaptjump: Whether to adapt the covariance of the jump distribution. Default is TRUE.
method: Argument to select the algorithm. Accepts "noninformative" or "informative".
summarystats: Whether summary statistics shall be calculated (semi-)automatically. Defaults to FALSE.
y: Data vector. Needs to be provided if either summarystats = TRUE or if r.model returns a sample from the likelihood.
f.summarystats: If summarystats = TRUE, this function is needed for the calculation of the summary statistics. Defaults to f.summarystats(x) = (x, x^2, x^3), where the powers are to be understood element-wise.
...: Further arguments passed to r.model.

Details:

SABC defines a class of algorithms for particle ABC that are inspired by Simulated Annealing. Unlike other algorithms, this class is not based on importance sampling, and hence does not suffer from a loss of effective sample size due to re-sampling. The approach is presented in detail in Albert, Kuensch, and Scheidegger (2014; see references).
This package implements two versions of the SABC algorithm, for the cases of a non-informative or an informative prior. These are described in detail in the paper. The algorithm can be selected using the method argument: method="noninformative" or method="informative". In the informative case, the algorithm corrects for the bias caused by an over- or under-representation of the prior. The argument adaptjump allows a choice of whether to adapt the covariance of the jump distribution; the default is TRUE.

Furthermore, the package allows for three different ways of using the data. If y is not provided, the algorithm expects r.model to return a scalar measuring the distance between a random sample from the likelihood and the data. If y is provided and summarystats = FALSE, the algorithm expects r.model to return a random sample from the likelihood and uses the relative sum of squares to measure the distances between y and random likelihood samples. If summarystats = TRUE, the algorithm calculates summary statistics semi-automatically, as described in detail in the paper by Fearnhead et al. (2012; see references). The summary statistics are calculated by means of a linear regression applied to a sample from the prior and the image under f.summarystats of an associated sample from the likelihood.

Value:

E: Matrix with ensemble of samples.
P: Matrix with prior ensemble of samples.
eps: Value of tolerance (temperature) at final iteration.
ESS: Effective sample size, due to final bias correction (informative algorithm only).

Author(s):

Carlo Albert <carlo.albert@eawag.ch>, Andreas Scheidegger, Tobia Fasciati. Package initially compiled by Lukas M. Weber.

References:

C. Albert, H. R. Kuensch and A. Scheidegger, Statistics and Computing 0960-3174 (2014), arXiv:1208.2157, A Simulated Annealing Approach to Approximate Bayes Computations.
P. Fearnhead and D. Prangle, J. R. Statist. Soc. B 74 (2012), Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation.

Examples:

## Not run:
## Example for the "noninformative" case

# Prior is uniform on [-10, 10]
d.prior <- function(par) dunif(par, -10, 10)
r.prior <- function() runif(1, -10, 10)

# Model is a 50/50 mixture of two normal distributions; return the distance to the observation 0:
f.dist <- function(par) return(abs(rnorm(1, par, ifelse(runif(1) < 0.5, 1, 0.1))))

# Run the algorithm ("noninformative" case)
res <- SABC(f.dist, r.prior, d.prior, n.sample=500, eps.init=2, iter.max=50000)
## End(Not run)

## Not run:
# Histogram of results
hist(res$E[,1], breaks=200)
## End(Not run)
Source: https://rdrr.io/cran/EasyABC/man/SABC.html
Minimal $(D,D)$ conformal matter and generalizations of the van Diejen model Belal Nazzal, Anton Nedelin, Shlomo S. Razamat SciPost Phys. 12, 140 (2022) · published 26 April 2022 • doi: 10.21468/SciPostPhys.12.4.140 We consider supersymmetric surface defects in compactifications of the $6d$ minimal $(D_{N+3},D_{N+3})$ conformal matter theories on a punctured Riemann surface. For the case of $N=1$ such defects are introduced into the supersymmetric index computations by an action of the $BC_1\,(\sim A_1\sim C_1)$ van Diejen model. We (re)derive this fact using three different field theoretic descriptions of the four dimensional models. The three field theoretic descriptions are naturally associated with algebras $A_{N=1}$, $C_{N=1}$, and $(A_1)^{N=1}$. The indices of these $4d$ theories give rise to three different Kernel functions for the $BC_1$ van Diejen model. We then consider the generalizations with $N>1$. The operators introducing defects into the index computations are certain $A_{N}$, $C_N$, and $(A_1)^{N}$ generalizations of the van Diejen model. The three different generalizations are directly related to three different effective gauge theory descriptions one can obtain by compactifying the minimal $(D_{N+3},D_{N+3})$ conformal matter theories on a circle to five dimensions. We explicitly compute the operators for the $A_N$ case, and derive various properties these operators have to satisfy as a consequence of $4d$ dualities following from the geometric setup. In some cases we are able to verify these properties which in turn serve as checks of said dualities. As a by-product of our constructions we also discuss a simple Lagrangian description of a theory corresponding to compactification on a sphere with three maximal punctures of the minimal $(D_5,D_5)$ conformal matter and as consequence give explicit Lagrangian constructions of compactifications of this 6d SCFT on arbitrary Riemann surfaces. Authors / Affiliations: mappings to Contributors and Organizations See all Organizations. Funders for the research work leading to this publication
Source: https://scipost.org/SciPostPhys.12.4.140
How to find the surface area of a cylinder - Intermediate Geometry All Intermediate Geometry Resources Example Questions Example Question #1 : How To Find The Surface Area Of A Cylinder A cylinder has a volume of 16 and a radius of 4. What is its height? Correct answer: Since the radius is 4, the area of the base is Example Question #2 : How To Find The Surface Area Of A Cylinder What is the surface area of a cylinder with diameter 4 and height 6? The equation to calculate the surface area of a cylinder is: Correct answer: If the diameter of the cylinder is 4, the radius is equal to 2. Therefore: Example Question #1 : How To Find The Surface Area Of A Cylinder Find the surface area of the following right cylinder: Correct answer: The answer is Plug in 4 for Then to get the area of the round side, you would take the circumference times the height. Thus with the formula you would get To get your final answer, add Also you could remember the formula and plug in Example Question #1 : How To Find The Surface Area Of A Cylinder Given a cylinder with radius of 5cm and a height of 10cm, what is the surface area of the entire cylinder? Correct answer: The surface area of the whole cylinder = (2 * area of circle) + lateral area Think of the lateral area as the paper label on a can; It wraps around the outside of the can while leaving the top and bottom untouched. The area of the circle, times 2, is to account for the top and the bottom of the cylinder. Area of a circle = So the area of the circle = Now for the lateral area. Notice how if we have a can with a paper label, we can take the label, cut it, and unroll it from the can. In this way, our label now looks like a rectangle with a height = height and the width = circumference of the circle. Circumference = So our rectangle is going to have a height of 10 and a width of 10 So the total surface area = Example Question #4 : Cylinders The circumference of the base of a cylinder is Correct answer: The formula to find the surface area of a cylinder is In this kind of equation-based problem, it's helpful to ask "What information do I have?" and "What information is missing that I need?" The problem provides information for the Therefore, the first step for this problem is to solve for Because the diameter is Now the surface area can be solved for after the Example Question #6 : How To Find The Surface Area Of A Cylinder Find the surface area of the cylinder below. Correct answer: To find the surface area of the cylinder, first find the areas of the bases: Next, find the lateral surface area, which is a rectangle: Add the two together to get the equation to find the surface area of a cylinder: Plug in the given height and radius to find the surface area. Make sure to round to Example Question #7 : How To Find The Surface Area Of A Cylinder Find the surface area of the given cylinder. Correct answer: To find the surface area of the cylinder, first find the areas of the bases: Next, find the lateral surface area, which is a rectangle: Add the two together to get the equation to find the surface area of a cylinder: Plug in the given height and radius to find the surface area. Make sure to round to Example Question #8 : How To Find The Surface Area Of A Cylinder Find the surface area of the given cylinder. 
Correct answer: To find the surface area of the cylinder, first find the areas of the bases: Next, find the lateral surface area, which is a rectangle: Add the two together to get the equation to find the surface area of a cylinder: Plug in the given height and radius to find the surface area. Make sure to round to Example Question #9 : How To Find The Surface Area Of A Cylinder Find the surface area of the given cylinder. Correct answer: To find the surface area of the cylinder, first find the areas of the bases: Next, find the lateral surface area, which is a rectangle: Add the two together to get the equation to find the surface area of a cylinder: Plug in the given height and radius to find the surface area. Make sure to round to Example Question #10 : How To Find The Surface Area Of A Cylinder Find the surface area of the given cylinder. Correct answer: To find the surface area of the cylinder, first find the areas of the bases: Next, find the lateral surface area, which is a rectangle: Add the two together to get the equation to find the surface area of a cylinder: Plug in the given height and radius to find the surface area. Make sure to round to
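To summarise the method these examples rely on, the surface area of a cylinder is the area of the two circular bases plus the lateral (rectangular) area. The worked numbers below reuse the diameter-4, height-6 example from earlier on this page; the formula itself is standard.

\[
SA = 2\pi r^{2} + 2\pi r h
\]

With \(d = 4\) (so \(r = 2\)) and \(h = 6\):

\[
SA = 2\pi (2)^{2} + 2\pi (2)(6) = 8\pi + 24\pi = 32\pi .
\]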
Source: https://www.varsitytutors.com/intermediate_geometry-help/how-to-find-the-surface-area-of-a-cylinder
ISTC-CC Abstract Greedy Sequential Maximal Independent Set and Matching are Parallel on Average Proceedings of the 24th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA'12), June 2012. Guy E. Blelloch, Jeremy Fineman*, and Julian Shun Carnegie Mellon University * Georgetown University The greedy sequential algorithm for maximal independent set (MIS) loops over the vertices in an arbitrary order adding a vertex to the resulting set if and only if no previous neighboring vertex has been added. In this loop, as in many sequential loops, each iterate will only depend on a subset of the previous iterates (i.e. knowing that any one of a vertex's previous neighbors is in the MIS, or knowing that it has no previous neighbors, is sufficient to decide its fate one way or the other). This leads to a dependence structure among the iterates. If this structure is shallow then running the iterates in parallel while respecting the dependencies can lead to an efficient parallel implementation mimicking the sequential algorithm. In this paper, we show that for any graph, and for a random ordering of the vertices, the dependence length of the sequential greedy MIS algorithm is polylogarithmic (O(log^2 n) with high probability). Our results extend previous results that show polylogarithmic bounds only for random graphs. We show similar results for greedy maximal matching (MM). For both problems we describe simple linear-work parallel algorithms based on the approach. The algorithms allow for a smooth tradeoff between more parallelism and reduced work, but always return the same result as the sequential greedy algorithms. We present experimental results that demonstrate efficiency and the tradeoff between work and parallelism. FULL PAPER: pdf
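For readers who want to see the sequential baseline concretely, here is a short Python sketch of the greedy MIS loop described in the abstract. It is an illustration only, not code from the paper; the adjacency-dict representation and function name are mine.

import random

def greedy_mis(adj, order=None):
    # adj maps each vertex to the set of its neighbours.
    # A random permutation of the vertices corresponds to the random
    # ordering analysed in the paper.
    if order is None:
        order = list(adj)
        random.shuffle(order)
    in_mis = set()
    for v in order:
        # Add v if and only if no previously processed neighbour was added.
        if not any(u in in_mis for u in adj[v]):
            in_mis.add(v)
    return in_mis

# Tiny example: the path a - b - c - d.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(greedy_mis(adj, order=["a", "b", "c", "d"]))   # {'a', 'c'}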
Source: https://istc-cc.cmu.edu/publications/papers/2012/mis_abs.shtml
Handshaking lemma / Degree sum formula

Behold, the degree sum formula:

\[
\sum_{v \in V} \deg(v) = 2|E|
\]

The degree sum formula states that, given a graph G = (V, E), the sum of the degrees is twice the number of edges. Let's look at K[3], a complete graph (with all possible edges) with 3 vertices. First, recall that degree means the number of edges that are incident to a vertex. A vertex is incident to an edge if the vertex is one of the two vertices the edge connects. In the case of K[3], each vertex has two edges incident to it. Actually, for all K graphs (complete graphs), each vertex has degree n-1, n being the number of vertices. Dope.

So, for each vertex in the set V, we increment our sum by the number of edges incident to that vertex. Or, in another way, construct a degree sequence for the graph and sum it: sum([2, 2, 2]) # 6. This sum is twice the number of edges, so our graph should have 6 / 2 = 3 edges.

The "twice the number of edges" bit may seem arbitrary. But each edge has two vertices incident to it. In the degree sum formula, we are summing the degree, the number of edges incident to each vertex. A degree is a property involving edges. Edges are connections between two vertices. Summing the degrees of each vertex will inevitably count every edge twice.

Properties we can derive from this formula

Anything multiplied by 2 is even. Since the sum of degrees is two times the number of edges, the sum of degrees must be even; equivalently, the number of vertices with odd degree must be even.

With the above knowledge, we can know if the description of a graph is possible. This is useful in a puzzle such as the one I found in this book:

At a recent math seminar, 9 mathematicians greeted each other by shaking hands. Is it possible that each mathematician shook hands with exactly 7 people at the seminar?

Each mathematician would shake the hand of 7 others, which amounts to shaking hands with every mathematician minus yourself and one other person. A graph may not have jumped out at you, but this puzzle can be solved nicely with one. Think of each mathematician as a vertex and a handshake as an edge. Can we have a graph with 9 vertices, each of degree 7? Applying the degree sum formula, we can say no. When we sum the degrees of all 9 vertices we get 63, since 9 * 7 = 63. Since the sum of degrees is twice the number of edges, we know that there would have to be 63 ÷ 2 edges, or 31.5 edges. Since half a handshake is merely an awkward moment, we know this graph is impossible. I hate telling mathematicians that they can't shake hands.

Can we have 9 mathematicians shake hands with 8 other mathematicians instead? Can we have a graph with 9 vertices, each of degree 8? Summing 8 degrees 9 times results in 72, meaning there are 36 edges.
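Continuing in the spirit of the sum([2, 2, 2]) snippet above, the parity check behind the puzzle can be written as a tiny helper. This is my own illustration (the function name is made up), and it only checks the necessary even-sum condition, not whether such a graph can actually be built.

def handshake_possible(people, handshakes_each):
    # Degree sum formula: the sum of all degrees equals twice the number
    # of edges, so it must be even for the graph to exist.
    degree_sum = people * handshakes_each
    return degree_sum % 2 == 0

print(handshake_possible(9, 7))   # False -> 63 handshake-ends, i.e. 31.5 edges
print(handshake_possible(9, 8))   # True  -> 72 handshake-ends, i.e. 36 edges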
Source: https://practicaldev-herokuapp-com.global.ssl.fastly.net/adnauseum/handshaking-lemma-degree-sum-formula-419a
Mathf.Sqrt Accuracy / Decimal Places

I just noticed that a square root calculation I'm doing is not accurate enough. It seems that Unity's Mathf.Sqrt() function only calculates to within 5 decimal places, which is nowhere near as accurate as I need it to be. This is different from JavaScript, which I just tested and got over 10 decimal places. Does anyone know how to force it to have more decimal places? I should point out that I'm sending the return value to a System.Decimal type, not a float type.

Mathf.Sqrt(3681.011926874569421266247056) = 60.67134 <- only 5 decimal places
Math.sqrt(3681.011926874569421266247056) = 60.671343539389085
sqrt(3681.011926874569421266247056) = 60.671343539389083000236736115795

Thanks for any help.

When working with doubles, using Math.Sqrt from the System namespace will provide significantly greater precision than Unity's built-in Mathf.Sqrt function, which operates on single-precision floats.
Source: https://discussions.unity.com/t/mathf-sqrt-accuracy-decimal-places/133275
ThmDex – An index of mathematical definitions, results, and conjectures. R1022: Partition of basic function into positive and negative parts provides the decomposition $f = f^+ - f^-$. Since $f$ is almost everywhere zero, one has the almost everywhere equality $f^+ = f^-$. Both $f^+$ and $f^-$ are unsigned functions $X \to [0, \infty]$, so result R1901: Unsigned integral of almost everywhere equal functions states that their integrals coincide \begin{equation} \int_X f^+ \, d \mu = \int_X f^- \, d \mu \end{equation} Now, by definition of a signed integral, $\int_X f \, d\mu := \int_X f^+ \, d\mu - \int_X f^- \, d\mu = 0$, which proves the claim. $\square$
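For completeness, the positive and negative parts used in this proof are the standard ones; the reminder below is textbook material rather than part of the ThmDex entry.

\[
f^{+}(x) := \max\{f(x), 0\}, \qquad f^{-}(x) := \max\{-f(x), 0\},
\]

so that \(f = f^{+} - f^{-}\), \(|f| = f^{+} + f^{-}\), and both parts are unsigned functions \(X \to [0, \infty]\).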
Source: https://theoremdex.org/r/1903
Dr. Daniel Tataru, Berkeley - Long time solutions for one dimensional dispersive flows - Department of Mathematics
November 10, 2022 @ 3:30 pm - 4:30 pm
Mode: In-person
Title: Long time solutions for one dimensional dispersive flows
Abstract: The question of long time or global existence of solutions is one of the fundamental ones in the study of partial differential equations. For this talk I will try to present an overview of the ideas that have been used in the study of such problems, from classical to the most recent ones.
Source: https://math.unc.edu/event/dr-daniel-tataru-berkeley-tba/
Phylogenies from dynamic networks The relationship between the underlying contact network over which a pathogen spreads and the pathogen phylogenetic trees that are obtained presents an opportunity to use sequence data to learn about contact networks that are difficult to study empirically. However, this relationship is not explicitly known and is usually studied in simulations, often with the simplifying assumption that the contact network is static in time, though human contact networks are dynamic. We simulate pathogen phylogenetic trees on dynamic Erdős-Renyi random networks and on two dynamic networks with skewed degree distribution, of which one is additionally clustered. We use tree shape features to explore how adding dynamics changes the relationships between the overall network structure and phylogenies. Our tree features include the number of small substructures (cherries, pitchforks) in the trees, measures of tree imbalance (Sackin index, Colless index), features derived from network science (diameter, closeness), as well as features using the internal branch lengths from the tip to the root. Using principal component analysis we find that the network dynamics influence the shapes of phylogenies, as does the network type. We also compare dynamic and time-integrated static networks. We find, in particular, that static network models like the widely used Barabasi-Albert model can be poor approximations for dynamic networks. We explore the effects of mis-specifying the network on the performance of classifiers trained identify the transmission rate (using supervised learning methods). We find that both mis-specification of the underlying network and its parameters (mean degree, turnover rate) have a strong adverse effect on the ability to estimate the transmission parameter. We illustrate these results by classifying HIV trees with a classifier that we trained on simulated trees from different networks, infection rates and turnover rates. Our results point to the importance of correctly estimating and modelling contact networks with dynamics when using phylodynamic tools to estimate epidemiological parameters. Author summary Understanding whether and how transmission patterns are revealed by branching patterns in phylogenetic trees for pathogens remains a challenging research question. Besides the diversification of the pathogen, branching patterns depend strongly on the host contact structure as it shapes opportunities for the pathogen to reproduce. However, the host contact network is often difficult to study, in particular as it evolves in time. In this paper we perform a simulation study on three different dynamic networks, on which we simulate transmission trees. We use a simple Erdős-Renyi random network and two more realistic networks with skewed degree distribution, where one is also clustered. We convert transmission trees into phylogenetic trees and analyze them with different tree statistics like imbalance measures, counts of small substructures, and measures containing the branch lengths. We study the tree features with principal component analysis and with supervised learning methods, and find that network dynamics and network type can strongly influence the shape of phylogenetic trees. This implies that using phylogenetic trees from a mis-specified network type and dynamic can lead to poor phylodynamic estimation of transmission parameters. 
We illustrate this with HIV phylogenetic trees constructed from viral sequences of patients in the Dutch ATHENA cohort, and from sequences of the Los Alamos Sequence database. Citation: Metzig C, Ratmann O, Bezemer D, Colijn C (2019) Phylogenies from dynamic networks. PLoS Comput Biol 15(2): e1006761. https://doi.org/10.1371/journal.pcbi.1006761 Editor: Natalia L. Komarova, University of California Irvine, UNITED STATES Received: January 15, 2018; Accepted: January 7, 2019; Published: February 26, 2019 Copyright: © 2019 Metzig et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Data Availability: Data are available from institutional data access / ethics committee for researchers who meet the criteria for access to confidential data. contact email: Funding: CC has received funding from https://www.epsrc.ac.uk/ grant number EP/K026003/1. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing interests: The authors have declared that no competing interests exist. Understanding whether and how the transmission patterns of a pathogen are revealed by branching patterns in pathogen phylogenetic trees remains a challenging research question. Alongside the stochastic diversification of the pathogen on the short time scales of an infectious disease outbreak, branching patterns in the pathogen’s phylogenetic tree also depend strongly on the underlying transmission pattern [1] and the host contact structure, as these shape the pathogen’s reproductive opportunities. The role of networks in epidemic spreading has been studied extensively in past decades [2–12]. The topology of the host contact network plays a crucial role in setting the epidemic threshold, the epidemic size and the most effective interventions. Network properties also play a role in determining which individuals are at high risk of infection. Naturally, modellers seek to inform simulated networks with individual-level data from real populations. Respondent-driven sampling [13, 14], snowball sampling or questionnaires [15] are several approaches to gathering these data, but all are challenging: people do not always remember how many people they have been in contact with, and in some contexts (such as injection drug use or sexual behaviour), contact is stigmatized or even illegal. As a result, individuals may not wish to report contacts to public health practitioners. Recently there has been interest in using genetic data from pathogens, together with phylogenetic and phylodynamic tools, to estimate the parameters of human contact networks [16–19]. This is appealing, in that data now accessible with high-throughput sequencing technologies (pathogen sequences, at a level of resolution that makes detecting even small amounts of genetic variation feasible) can reveal information about a fundamental population-level structure (the network). Sequences can show patterns of descent, and pathogens transmitted directly from human to human need human contact networks to have descendants. Since networks are difficult to observe directly and phylogenetic trees in principle contain some information about them, researchers have used a variety of tools to relate pathogen phylogenetic trees to the underlying contact network’s degree distribution, connectivity and clustering [17, 20]. 
This method has been of particular interest for HIV phylogenies [21–24]. Studies have reported varying strengths of the effect of the contact network on the phylogeny. For example, [25] found a very weak influence of the network’s clustering coefficient when the degree distribution is held constant, [26] studied the shapes of phylogenies from simulated genetic data and found a moderate influence of the underlying network degree distribution, though “clustering” in phylogenetic trees did not parallel the heterogeneity in the degree distribution, and network dynamics shape phylogenies as well. [21] found a relatively stong effect of the variance in degree distribution and of the average pathlength of the network on the shapes of phylogenies. Also, within-host viral diversity affects the link between network structures and phylogenies [23], as do the basic reproduction number and other details of the process [27, 28]. It is therefore reasonable to assume that details of timing of infection, in-host selection, selection at the population level and other factors may also affect the relationship between contact networks and phylogies. Human contact networks are self-organizing systems with certain general characteristics; one approach to modelling human host networks is to perform simulations that are able to reproduce those characteristics. Key characteristics include a short average pathlength (small-world property) [29], clustering [30] and a scale-free (or at least highly skewed) degree distribution [31, 32]. In particular, networks with a skewed degree distribution have received much attention for epidemic spreading, as they yield significantly different transmission patterns from a homogeneously mixed population. Depending on the transmission pathway, there is evidence that networks can have an exponential degree distribution [13, 33] or a scale-free degree distribution, found in various social networks [34–36], and in human contact networks [37–39]. The Barabasi-Albert model [40] in particular is a much-studied process by which scale-free degree distributions may emerge. It is based on the idea of preferential attachment: nodes attach preferentially to existing nodes that already have many links. Preferential attachment is a plausible rationale for many applications (fame, publicity). It describes a constantly growing network, or a static network if the growth is halted. In contrast, human host contact networks are often dynamic, but may not be growing in size over time. Instead, they have population turnover [5, 41], with individuals entering and leaving a network as time goes on. Especially for chronic infections like TB, HCV or HIV [42], people may enter and exit the network over shorter timescales than the length of the infectious period. The number of contacts that individuals accumulate over time is significantly larger than the number of contacts at one point in time. Furthermore, many of the observations underlying reports of scale-free degree distributions in human contact networks are derived from reports of the cumulative numbers of contacts that individuals have over a long period (for example over one year [32, 43], or accumulated to date). Accordingly, it may not be appropriate to compare simulated transmission dynamics in models where individuals’ degrees are modelled from observed accumulated numbers of contacts to transmission where degrees are taken as the instantaneous (or even shorter-term) numbers of contacts. 
The static network (with degrees modelled on data for the number of contacts accumulated over long time periods) can be a very poor approximation of the true dynamic network; outbreaks can spread faster in such a static network due to the potentially very high numbers of simultaneous contacts. In using phylodynamic tools to estimate network parameters from pathogen phylogenies, it is typically assumed that the contact network is static in time; one seeks network parameters that produce pathogen phylogenetic trees that are similar to observed trees, conditional on the static network assumption (and perhaps also on assumptions about the degree distribution, clustering patterns and other network attributes). Whatever the details, inferred quantities such as degree distribution, the average number of partners and the infection rate are influenced by assumptions about the network, including the static assumption. The duration of infectiousness and the time scale of the network dynamics must affect the relationship between pathogen phylogenies and network parameters. Clearly, no individual has thousands of contacts over a week; reports of degrees that are orders of magnitude higher than the average are from data aggregated over long time periods; where an infectious duration is of the order of weeks or a few months, the scale-free property is unlikely to hold. These issues are presented briefly in [26] and [44, 45]. In this paper, we investigate the effect of human host network dynamics on pathogen phylogenies. Our study focuses on simulations, and on the relationship between network assumptions and estimates of transmission parameters. We compare simulated phylogenies from outbreaks on static and dynamic networks, and we explore the effect of the turnover rate at which individuals enter and leave the system. We also study the effect of the network characteristics on the phylogenies. For this, we use networks with binomial degree distribution and skewed degree distribution, as well as clustered and unclustered networks. We explore the effect of the infection rate and the mean number of contacts. We study how the features of the underlying networks affect phylogenetic trees with various tree statistics. Finally, we turn to phylodynamic inference of HIV transmission parameters and illustrate our main results using HIV sequence data from the Dutch ATHENA cohort and Los Alamos. In particular, we characterise the impact of alternative assumptions on human contact network dynamics on estimation of key transmission parameters including R0. We simulate the human contact network with the algorithms described in section. First, we allow the networks to converge to a stationary state in terms of degree distribution; in this stationary state, networks are still dynamic in the sense that people enter and exit. Then, an outbreak is simulated on the networks while they continue to evolve. One person is infected and, with a constant infection rate per contact, the infection can spread. The resulting infection trees are converted into a phylogenetic trees (see section). Unlike the Barabasi-Albert (BA) model, our approach allows a skewed degree distribution to emerge while keeping the size and total degree fixed. Throughout, individuals enter and leave the network and links are formed and dissolved. In contrast, in the BA model, nodes and links are continuously added and remain in the network. We set a constant number of tips in our trees. 
We use tree shape and length statistics, detailed in section, to compare phylogenetic trees. Network algorithms We use an algorithm for a “skewed-clustered” network which generates a network with a skewed degree distribution and positive transitivity [38]. To understand what these features add, we also use skewed (but not particularly clustered) networks, and an Erdős-Renyi random network. These all have a stationary average number of contacts and stationary degree distribution, while people are entering and exiting the network. This entry and exit happens with a turnover rate δ, which is the ratio between the number of people entering per time step to the number of people in the network. Networks are simulated in discrete time. In each time step the following steps happen: Random network. A person enters the network and gets connected to a person chosen at random. Further links are added between randomly chosen people in the network to keep the average degree constant. People exit the system at the given rate. When a person leaves the network, their links are broken. The degree distribution in this algorithm converges to a binomial degree distribution. Skewed network. A person enters the network and forms a partnership (link) with one other person j, where the probability to select someone as partner is proportional to that person’s current number of partnerships (degree). To maintain a constant number of links despite the fact that individuals leave the system, additional links are introduced. For this, a first person i is picked with probability proportional to its number of contacts. Node i is then linked to a second person j, who is again picked with probability proportional to person j’s number of contacts. People exit the system at a given rate, and their links are all broken. If a person is left without any links because their partners have left the network, they are connected to existing nodes, again with probability proportional to a node’s degree. It can be shown theoretically [46] that the degree distribution in this process converges to a power law degree distribution with an exponential cutoff; the cutoff strength increases with decreasing number of people (nodes), and also is increased when the mean degree is decreased. For mean degree ≈ 3 and a network size of 1000 nodes, the cutoff is so strong that the degree distribution can always be approximated by an exponential distribution. Skewed-clustered network. This is a variant of the algorithm described above. Again a person i enters the network, and another person j to receive an additional link is picked, with probability proportional to its current degree. Additional links are added, where the first neighbour is again picked with probability proportional to its number of contacts. The second neighbour is picked 1. (i). among neighbours of second degree (neighbours’ neighbours) (at random) 2. (ii). if that is not possible, among neighbours of third degree (at random) 3. (iii). if that is not possible, in the whole network, with probability proportional to a node’s degree. After people exit at a given rate, those left without neighbours are connected to existing nodes, with probability proportional to a person’s degree, and other links are broken. The stationary state of the degree distribution is again a power law with exponential cutoff, with a higher decay constant as in the skewed network (for low mean degree d < 3). 
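As an illustration of the kind of update rule described above, here is a minimal Python sketch of one time step of the "skewed" dynamic network (degree-proportional attachment plus turnover). It is a simplified reading of the verbal description, not the authors' simulation code: it omits the clustered variant, treats the re-linking of isolated nodes loosely, and all names are mine.

import random

def degree_weighted_choice(adj, exclude=None):
    # Pick a node with probability proportional to its current degree.
    nodes = [v for v in adj if v != exclude]
    weights = [max(len(adj[v]), 1) for v in nodes]   # avoid an all-zero weight vector
    return random.choices(nodes, weights=weights, k=1)[0]

def step_skewed_network(adj, turnover, target_links, next_id):
    # 1. A new person enters and attaches to a partner chosen preferentially by degree.
    new = next_id
    partner = degree_weighted_choice(adj)
    adj[new] = {partner}
    adj[partner].add(new)

    # 2. Existing people leave with probability `turnover`; their links are broken.
    leavers = [v for v in adj if v != new and random.random() < turnover]
    for v in leavers:
        for u in adj.pop(v):
            if u in adj:
                adj[u].discard(v)

    # 3. Add links, both endpoints chosen degree-proportionally, to push the total
    #    link count back towards target_links (duplicate picks simply leave it
    #    slightly below the target).
    deficit = target_links - sum(len(nbrs) for nbrs in adj.values()) // 2
    for _ in range(max(deficit, 0)):
        if len(adj) < 2:
            break
        i = degree_weighted_choice(adj)
        j = degree_weighted_choice(adj, exclude=i)
        adj[i].add(j)
        adj[j].add(i)
    return next_id + 1

# Seed a tiny network and run a few steps.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
next_id = 3
for _ in range(5):
    next_id = step_skewed_network(adj, turnover=0.1, target_links=4, next_id=next_id)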
At a given point in time, not all nodes in the network are necessarily connected to one component (see Fig 1). The clustering coefficient, or transitivity, is defined as the ratio of the number of triangles to the number of connected triplets [29]. Rules (i) and (ii) cause the transitivity to be higher than it is in the the skewed network (for all system sizes, here the transitivity is ≈ 0.15). Turnover rate (probability to leave the network in one timestep) δ = 0.1. The inlays on the left show the pathlength distribution. The skewed network has a much shorter pathlength than the random network (for same mean degree), and skewed-clustered network has slightly longer average pathlength, but still shorter than the random network. This relationship holds for a wide range of mean degrees. The inlay on the right shows the counter-cumulative degree distribution in loglinear scale, which are power laws with exponential cutoff for the skewed-clustered network and skewed network (blue and red), and binomial for the random network (black). Time-integrated networks. We also compute time-integrated networks, i.e we let the networks evolve with entry and exit, and create (unweighted) networks of all nodes that have ever been in the network, where two nodes are connected if there has ever been a link between them. As a consequence, the time-integrated network has many more links than a dynamic network at a specific point in time. The degree distribution of the time-integrated networks has a higher mean and more than the degree distribution of the instantaneous networks. Both dynamic and time-integrated static networks have the same answer to the question “How many contacts have you had in a given time?”, so they can be modelled using the same source of input data (e.g. questionnaires [15]). Outbreak simulation In our simulations, we begin with one infected individual who then infects neighbours at a constant infection rate per contact, after which the neighbours can infect their respective neighbours in the next time step, and so forth. Infected individuals stay infected throughout the simulation, modelling a long-term infection. This simulates an outbreak on these dynamic networks. There is at least one time step between an individual becoming infected and infecting a neighbour, and we model a positive time between any two infection events by adding a small positive time to the infection events of one iteration, such that they occur with equal time lapses. We extract what would be the “true timed phylogeny” of the pathogen given the transmission tree in our network, under the assumption that hosts carry a single pathogen lineage. To do this we form a binary branching tree in which each host corresponds to a tip in the phylogeny and branch lengths correspond to time. Since we know the true transmission tree and its timing, this can be done by tracking the infectors, infectees and the time between infection events. This is available in the getLabGenealogy function in the R package PhyloTop [47]. The simulation of the outbreak is stopped after a time such that the phylogenetic trees all have the same number of tips. Topological summary measures of trees We compute features of the phylogenies with software sources listed in Table 1. 1. Number of substructures 1. Cherries: Substructure consisting of two tip descendants 2. Pitchforks: Substructure consisting of three tips 2. Imbalance measures 1. 
Sackin Index: Average number of internal nodes N[i] between each tip i and the root of the phylogenetic tree , [48, 49] 2. Colless Index: It compares the number of tips that descend on the left and right (L and R) from each internal node, and averages over these differences |L − R| [49, 50]. 3. Other tree measures 1. Maximum Height: Maximum height of tips in the tree. 2. Average Size of Ladder: Ladder structures [1] consisting of a connected set of internal nodes with a single tip descendant 3. IL numbers: Number of internal nodes with a single tip child. 4. Centrality measures and general network measures 1. Maximum Betweenness: Maximum number of shortest paths that pass through a particular node. 2. Wiener Index: Sum of the lengths of the shortest paths between all pairs of nodes. 3. Maximum Closeness: Sum of lengths of the shortest paths between one node and all other nodes (maximum thereof). 4. Average Pathlength: Average distance between two nodes. 5. Diameter: Longest possible path between two nodes in the tree. 5. Tree measures that use the edge length 1. Branching next index (BNI): We compare the extent to which a node that branches at time t is chronologically next to branch; in other words, does branching now make it more or less likely that a node will branch next? If a node’s child is chronologically next to branch following the node itself, we say the node has the ‘branching next’ property (s[i] = 1). We add and rescale the sum of s[i] over all internal nodes i in the tree (except the root and the last node to branch). s[i] is a Bernoulli random variable whose expected value is p[i] = 2/k[i], where k[i] is the number of lineages in the tree that exist at time t[i] + ϵ, in the limit as ϵ → 0, where t[i] is the time of node i and ϵ > 0. We define the BNI as 2. Generalised branching next (MNI): Extending the BNI concept, we ask whether one of the next m branching events (chronologically) in the tree descends from the current node, in which case we set d[i] = 1 for node i. We sum and rescale d[i], as with s[i], over the tree to create this summary statistic. We let k[ij], j = 1, …, m be the numbers of lineages immediately after the j′th branching event following node i (in the entire tree). We define q[i] = ∏[j](1 − 1/k[ij]) and normalise by setting MNI to . Since now they are not independent we use every m′th node i rather than every node. 3. Length statistics We use the mean of the path length from the internal nodes of the tree to its root, as well as the median, variance, skewness and kurtosis of this set of path lengths. Analysis approach We use two approaches to understand how the underlying contact network affects the tree features. The first is to visualise the results using principal components analysis (PCA) on the matrix of features described above. The matrix values are scaled such that the mean is zero, and normalized such that variance is 1, as is standard in PCA. This visually illustrates the extent to which these features discriminate between phylogenetic trees derived from different contact networks. However, visual separation on a 2-dimensional PCA plot is a limited measure of how informative the features are of the contact network. Thus, we also explore this quantitatively using both K-nearest neighbours and random forest classification. We attempt to classify the network (random, skewed or skewed-clustered) based on the features. 
We assess accuracy in these binary and categorical classifications when the underlying network model correct, and when it is mis-specified. We also attempt to classify the transmission rate. For this goal we use trees from simulated outbreaks where we distributed the transmission rate β uniformly. We grouped these trees into bins depending on the underlying β and train classifiers on the tree features with the aim of predicting the bin of β for a test set. We study a scenario where turnover rate δ and mean degree are distributed uniformly, and a scenario where they are kept constant. Application to HIV Partial nucleotide HIV-1 polymerase sequences were obtained as described previously from patients in the ATHENA national observational HIV cohort in the Netherlands (by June 2015) [52]. We used the first sequence per patient, with a minimum of 750 nucleotides length. No patient information was included in the analysis. Sequences were aligned with Clustal Omega 1.1.0 [53] and manually checked and adjusted. HIV-1 subtyping was performed with COMET v1.3 [54] and 6912 subtype B sequences were considered for further analysis. In addition we retrieved 19,459 HIV-1 subtype B sequences from the Los Alamos database (by September 2017) [55], with a minimum length of 1000 nucleotides overlap to the ATHENA alignment. Excluding sites with less than 75% coverage, and with IAS resistant mutations 2015 removed This resulted in a sequence alignment of 1,128 nucleotides length [56]). Viral phylogenies were reconstructed with FastTree version 2 [57]. From this tree we identify 90 non-intersecting clades in the specified size range 100-151, using a depth first search approach. The mean number of tips in the clades was 127. 86 out of 90 clades contained samples from the ATHENA cohort, with a fraction between 0.01 and 0.97. Overall, the clades we extracted contained 8326 sequences from the Los Alamos data and 3186 from the Dutch HIV-1 ATHENA cohort. We compared the HIV clades with simulated trees from different networks and to trees simulated on the same network, but with varying infection rates. We trained random forest and K-nearest neighbour classifiers on tree features from the simulated networks, and used the features from the HIV clades as a test set. The simulated trees (the training set) had 100 tips. We then used the classifiers to predict the network type or infection rate for the HIV clades. Overview of scenarios We used principal component analysis to study different types of networks, different mean degree and infection rate for a given network, as well as different turnover rates and a time-integrated static network (see all scenarios in Table 2). We also trained classifiers on the networks in order to predict infection rate, turnover rate and network type (see all scenarios in Table 3). The network structure and dynamics both affect features of phylogenetic trees of pathogens spreading on the networks. However, the effects are modulated by the transmission rate and the turnover rate. These relationships are sufficiently strong as to disrupt the signal of the network type in the pathogen phylogeny. A summary of results for the different network structures is given in in the discussion and the trees are given in supporting information. Phylogenetic tree features can reveal network structure Fig 2 shows a principal component analysis based on phylogenetic trees simulated on dynamic networks with three different topologies. Phylogenies from the Erdős-Renyi network differ strongly from the two others. 
This holds even for relatively small trees (100 tips), whereas for clustered and unclustered networks, the discrimination improves with the size of tree (up to 250 tips). The same results hold for a wide range of infection rates (β = 0.025 to β = 0.2) and higher turnover rates (δ = 0.1). Overall, the discrimination between networks improves with tree size. The distinction between trees from different underlying networks improves if additional features are used that take into account the lengths of edges. Skewed and skewed-clustered network have a lower number of small substructures (cherries and pitchforks), and a higher value for all imbalance measures. Most network measures (except betweenness) are also positively correlated with imbalance measures. Left: PCA plot of tree features from phylogenetic trees simulated on different networks: random (Erdős-Renyi), skewed and skewed-clustered. Each contact network has mean degree n = 5, and all simulated trees have 500 tips. Parameters: infection rate β = 0.05, population turnover = 0.03. Right: correlations between tree features, here most features are clearly correlated (blue) or anti-correlated (red). Simulated trees to figure (a) are found in supporting information. The network structures become more distinct with a higher rate of infection per contact and with a higher rate of turnover (eg β = 0.2, δ = 0.1), and in particular the numbers of cherries and the path lengths become more distinct as these parameters increase. Differences in the path lengths and the imbalance between the networks are also more pronounced with higher β and δ. In contrast, however, there are a few features for which differences are more pronounced at low infection rates (including the ‘ILnumbers’ and the Wiener index for clustered vs unclustered networks). In other words, given fixed values of the transmission and turnover rates, it is possible to separate, and estimate, the underlying network structure based on phylogenetic tree features, for example by discriminant analysis, classification methods, or by Approximate Bayesian Computation. However, the details—which phylogenetic features point to which kinds of networks—are specific to the transmission and turnover rates, and mis-estimation seems likely if these are mis-specified. Furthermore, for some choices of parameters, the networks are no longer well-separated in the PCA analysis; for example, if β = 0.05 and δ = 0.1 (so β < δ), the clustered network overlaps with the random network, whereas if β > δ, they do not overlap, but the two skewed networks (clustered and unclustered) begin to overlap. Features of phylogenies depend on transmission rate and average degree When infection rate per contact β increases, so does the variance of tree features, and the following tree features increase on average: Colless index, Sackin index, IL numbers (nodes with single tip child), average ladder size, maximum height, average pathlength, Wiener index and diameter. The number of cherries, pitchforks and maximal closeness decrease with increasing infection rate, as shown in Fig 3 for the skewed-clustered network. We compare trees from outbreaks on networks with mean degrees and for infection rates β = 0.05 and β = 0.1. All trees have 500 tips. Simulated trees to this figure are found in S2 File. The same features increase as the mean degree increases (red and green vs. 
turquoise and purple in Fig 3), which is expected, as both increasing β (infection rate per contact) and increasing the number of contacts increase the basic reproduction number (τ being the duration of infection and the median degree) of an outbreak. The phylogenies from the four outbreak hypotheses in Fig 3 may therefore correspond to different pathogens or to a pathogen in rather different epidemiological settings, as in these scenarios R[0] values may differ substantially. However, the tree features that discriminate these scenarios are also affected by the nature of the contact network (Fig 1) and by the turnover rate (Fig 4). This comparison highlights that the network type and turnover are likely to affect estimation of the mean degree and the infection rate from phylogenetic trees. Left: PCA plot of tree features for trees from time-integrated and dynamic skewed-clustered networks (β = 0.1), mean degree 〈n〉 = 5, number of nodes N = 1000. red: time-integrated network. Right: counter-cumulative degree distribution on log-log scale of time-integrated and dynamic network. Simulated trees to figure 4 are found in S3 File. Network dynamics affect phylogenetic tree features Fig 4 shows a PCA of phylogeny features derived from skewed-clustered networks with same mean degree but different turnover rates (i.e. rates at which people enter and exit the system), and from a time-integrated static network of same mean degree . Higher population turnover of the network increases the following features of the simulated phylogenetic trees: Sackin index, Colless index, average ladder sizes, IL number, maximum height, average pathlengths, diameter, Wiener index, and betweenness, and decreases the number of cherries and pitchforks as well as maximum closeness. Higher turnover gives similar results to a higher mean degree or a higher infection rate (see Fig 3). The static time-integrated network has no turnover, but contacts have a longer duration, presenting the opportunity to transmit comparably to a dynamic network with much higher turnover than the one used for the time integration. In dynamic networks, links get rewired often and therefore many opportunities for transmission exist. The static network has higher mean degree as the temporally existing links are accumulated (see Fig 4). Instead of resembling those from very low turnover, the phylogenies from static networks have therefore features similar to those from networks with very high turnover. This effect holds for different infection rates β, but the higher the infection rate, the more the phylogenies from a time-integrated network differ from those from networks with low turnover. Results for varying infection rate, mean degree, turnover and time-integration are qualitatively the same for the skewed-clustered and skewed-unclustered network, but since the unclustered network has shorter average pathlength than the clustered network of same mean degree, the effects are more pronounced. Imbalance measures are always anticorrelated with the counts of small substructures (pitchforks and cherries). The fact that network skewness increases tree imbalance (and decreases substructures) could be due to the fact that high heterogeneity in the network degree is passed on to high heterogeneity in the number of secondary infection, resulting in an imbalanced tree (measured e.g. by Sackin and Colless index). 
On the other hand, increased network clustering may have the opposite effect, as it results in fewer nodes being connected to hubs in the network, which may cause the infection tree and resulting phylogenetic tree to be more balanced and to exhibit more pitchforks and cherries. However, an imbalanced phylogenetic tree could in principle also result from long chains of person-to-person transmission, in which each individual infects exactly one other: imbalanced trees do not necessarily require heterogeneous contact numbers or heterogeneous numbers of secondary infections.

Classification of networks and parameters from phylogeny features

For simulations with distributed values for β, δ and the mean degree of the network, we calculated all of our features of phylogenetic trees and used these to train classifiers, which we then tested. We used K nearest neighbours (KNN) [58], which classifies an object based on the class of the majority of its nearest neighbours, and random forests [59], which use decision trees to classify the test data. We simulated 1549 phylogenetic trees on the three types of networks, with random uniformly distributed values of the turnover and transmission rate parameters (both in [0.05, 0.15]) and mean degrees (in [4, 9]). We trained classifiers on 1040 instances to classify from which type of network a phylogeny was derived. We computed the mean and standard deviation of the accuracy using 10-fold cross-validation. The classification is successful in the sense that it is possible to classify the dynamic network type based on the phylogenetic features, given a range of transmission parameters and turnover rates in the training data. Table 4 lists the results when we chose the key parameters β (transmission rate), mean degree and turnover δ uniformly at random over the specified ranges. Both classifiers predict the network type with high accuracy, using the phylogenetic features. This means that even with the additional complications of dynamic networks and unknown underlying parameters, phylogenetic trees encode information about the nature of the network. We also asked how varying the underlying (and in general unknown) dynamic contact network would affect estimation of the transmission parameter β (also in Tables 4 and 5). Estimation of β is much worse than estimation of the network, and strongly depends on the assumed network. The performance is best for random forests with either all three networks present in the data (accuracy 0.47) or with a single, correctly specified, skewed or random network used to train the model (accuracy 0.55 and 0.44, respectively). Mis-specification of the network worsens predictions. Discrimination between skewed and skewed-clustered networks remains difficult, as these networks are quite similar. The difference between skewed and random networks is more pronounced (as also seen in the PCA analysis in Fig 2). In that sense our results are similar to the results in [60–62], who successfully predicted contact rates with Approximate Bayesian Computation (ABC) on static networks, where the phylogenetic trees separate well in a PCA plot of extracted tree measures. Given the poor ability to predict β when the mean degree and turnover are randomly sampled, we explored whether keeping these parameters fixed would improve the estimation: if we knew these parameters and had pathogen phylogenies, would we then be able to estimate the transmission rate in the context of dynamic networks?
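As an illustration of the classification pipeline just described, the following Python sketch shows how tree features could be fed to KNN and random forest classifiers with 10-fold cross-validation. It is not the code used in the study: the feature matrix below is a random placeholder standing in for the per-tree summary statistics (Sackin index, cherries, pathlengths, and so on), and the labels stand in for the three network types.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trees, n_features = 1549, 20                 # 1549 simulated trees, as in the text
X = rng.normal(size=(n_trees, n_features))     # placeholder for real tree features
y = rng.integers(0, 3, size=n_trees)           # 0: random, 1: skewed, 2: skewed-clustered

classifiers = [
    ("KNN", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
    ("Random forest", RandomForestClassifier(n_estimators=500, random_state=0)),
]
for name, clf in classifiers:
    scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.2f}, sd {scores.std():.2f}")

Scaling matters for KNN, which is distance-based, but not for random forests, which is why only the KNN pipeline includes a StandardScaler; with real tree features in place of the random placeholder, the same loop would produce the kind of accuracy comparison reported in Table 4.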
With the mean degree and turnover held fixed, the accuracy is only good in the case of the random network (0.7 and 0.82 for KNN and random forests, respectively). Random forests give consistently slightly higher accuracy, with an accuracy over 0.5 where (1) all three networks (skewed, skewed-clustered and random) were present in the training data, or (2) the model was trained on the skewed or random networks. If the network is mis-specified or skewed, neither approach is able to predict β. We suggest that this may have adverse consequences for analyses using static or other assumed network models in phylodynamics; these may draw erroneous conclusions about the rate of transmission or other parameters due to mis-specification of the underlying network.

Classification of HIV data

We trained classifiers on phylogenetic trees simulated with different network hypotheses, in order to predict the network type for HIV clades from sequences of patients in the Dutch ATHENA cohort and from sequences of the Los Alamos Sequence database [55]. The Dutch sequences predominantly capture the Dutch national HIV epidemic (Bezemer et al., PLoS Medicine), whereas the sequences in the Los Alamos database are from cases worldwide and capture many diverse HIV epidemics. Our network predictions are consistent with this: the higher the fraction of tips from the Netherlands, the more HIV trees are predicted to arise from skewed or skewed-clustered networks, rather than random (see Table 6); this signal is consistent in the K-nearest neighbour and random forest classification. We also trained the classifiers on simulated trees from a skewed-clustered network with two different infection rates (β = 0.05 and β = 0.2), in order to predict the infection rate for the HIV trees (see Table 7). We did the latter both with trees from static networks and with trees from dynamic networks with turnover rate δ = 0.1. For the static network, roughly two thirds of the HIV trees are predicted to have infection rate β = 0.05 and one third β = 0.2. In contrast, all of the HIV trees are predicted to have the higher infection rate of β = 0.2 on the dynamic network. It is not surprising that more HIV trees were predicted to have the higher infection rate β = 0.2 when the classifiers were trained on the dynamic network. On dynamic networks, not all links are present at any moment, which slows down the outbreak. A higher infection rate could compensate to attain the same R0. This result was very robust even when fewer tree features were used to train the classifier. However, if only imbalance measures were used, a low fraction of HIV trees was predicted to have β = 0.05 by dynamic-network-based classifiers. This suggests that using a variety of tree features is important for specification of network parameters from phylogenies. We have also listed separate predictions for clades in which more than 50% or 70% of the tips are from the ATHENA dataset; these are geographically linked, may include more recent transmission and are likely to have a higher sampling density than background clades from the Los Alamos database. Compared to the whole set of 90 HIV clades, these clades are more likely to be classified as having come from a skewed (clustered) network and to have a high transmission rate (β = 0.2). However, the certainty of this prediction depends on the underlying network assumption, with classifiers trained on dynamic models showing a completely consistent set of predictions while those trained on static models leave considerable variation (Table 7).
In contrast, clades with fewer Dutch sequences were predominantly classified as having a lower transmission rate if classifiers were trained using static networks, but a higher transmission rate using dynamic networks. The fact that the results differ considerably depending on the underlying network assumption indicates that a mis-specified network, via an incorrect turnover rate or indeed the assumption of a static network, can have a strong effect on predicted transmission rates.

Discussion

We used models of different human host contact networks to simulate outbreaks of pathogens, and converted the infection trees into phylogenetic trees. We showed that it is possible to discriminate with tree statistics between different contact network hypotheses, different turnover rates, different mean degrees and different infection rates. Table 8 summarizes the network effect on tree statistics. The underlying contact network hypothesis (random, skewed or skewed-clustered) is clearly identifiable in statistics of the simulated phylogenetic trees, if β and δ are the same. This indicates that simple networks such as the Erdős–Rényi model are likely to be unsuitable models for human host contact networks where there is evidence for a skewed degree distribution and clustering. Nevertheless, in our simulations, phylogenies from skewed-clustered networks are slightly more similar to those from random networks than those from unclustered networks of the same degree distribution. Phylogenetic trees from outbreaks on the same static network, but with different infection rates or different mean degrees, can be distinguished clearly in PCA plots. This result also holds on dynamic networks, and suggests, in keeping with previous work, that phylogenetic tree features can be used to estimate epidemiological parameters. However, the relationships between the epidemiological parameters, networks and phylogenetic trees are complex. We tested the strength of some of these relationships using supervised learning methods, and found that both network mis-specification and variability in other parameters (modelling uncertainty about the values of these parameters) have a strong impact on the ability to estimate the transmission parameter. Our results indicate that consistent network mis-specification and parameter uncertainty may have an adverse impact on phylodynamic studies estimating parameters from data. Population turnover in dynamic networks has a measurable effect on pathogen phylogenies; phylogenetic tree features can discriminate between different turnover rates at which the underlying network is evolving. Overall, the higher the turnover, the higher the imbalance measures and the lower the counts of small substructures. No single feature captures the differences between contact network hypotheses entirely, and a combination of many different features yields the best visual separation between the groups in a PCA plot. Features that take into account the branch lengths of the phylogenetic trees improve the separation slightly. Very different patterns are obtained from a static time-integrated network as compared to dynamic networks, on which transmission happens more slowly. This suggests that in the phylodynamic setting, static networks are a poor approximation for dynamic networks, highlighting the need for dynamic network models. This also highlights the need for investigating turnover and dynamics in empirical networks to obtain the data necessary to develop dynamic models.
We illustrated this result by predicting the infection rate β of HIV trees, and showed that the predictions strongly underestimate β if a static network is used instead of a dynamic one. Comparison to HIV data also showed that clades with tips predominantly from the Dutch sequence dataset, which has a high sampling fraction of infected individuals, are more likely to be predicted to have come from a skewed or skewed-clustered network than those with tips mainly from the more sparsely sampled Los Alamos database. Although the dynamic skewed-clustered network is likely to be a more realistic approximation to real networks than static or unclustered networks, it still might not be as clustered as a given real contact network. The details of the relevant network for a study of real data will depend on the pathogen and also on the nature of the community in which that pathogen is being studied. The dynamic models we have used here are still relatively simple and tractable, and real networks are likely to be even more heterogeneous.

The ATHENA cohort is managed by Stichting HIV Monitoring and supported by a grant from the Dutch Ministry of Health, Welfare and Sport through the Centre for Infectious Disease Control of the National Institute for Public Health and the Environment.
{"url":"https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006761","timestamp":"2024-11-01T21:12:43Z","content_type":"text/html","content_length":"249950","record_id":"<urn:uuid:284dfc3e-1605-4295-829c-445b4a5076bb>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00572.warc.gz"}
Long Division - Steps, Examples

Long division is an essential mathematical concept which has multiple real-life utilizations in different fields. One of its primary uses is in finance, where it is utilized to calculate interest rates and determine loan payments. It is also applied to work out investments, budgeting, and taxes, making it an essential skill for anyone engaged in finance.

In engineering, long division is applied to solve complex challenges in connection to construction, design, and development. Engineers employ long division to calculate the loads which structures can endure, assess the strength of materials, and plan mechanical systems. It is also used in electrical engineering to calculate circuit parameters and design complicated circuits.

Long division is also essential in science, where it is used to figure out dimensions and perform scientific calculations. For instance, astronomers apply long division to determine the distance between stars, and physicists utilize it to determine the velocity of objects.

In algebra, long division is utilized to factor polynomials and solve equations. It is an essential tool for figuring out complicated challenges which involve large numbers and require precise calculations. It is further utilized in calculus to calculate derivatives and integrals.

Overall, long division is a crucial math concept that has multiple practical utilizations in various domains. It is a fundamental math operation which is applied to work out complex challenges and is a crucial skill for anyone interested in finance, engineering, science, or math.

Why is Long Division Important?

Long division is a crucial math idea which has many utilizations in various fields, including science, finance, and engineering. It is an important mathematical operation that is utilized to work out a broad array of challenges, for instance, figuring out interest rates, determining the length of time required to complete a project, and determining the distance traveled by an object.

Long division is also utilized in algebra to factor polynomials and figure out equations. It is an important tool for working out complex problems which involve huge values and require precise calculations.

Procedures Involved in Long Division

Here are the steps involved in long division:

Step 1: Note the divisor (the number dividing the dividend) on the left and the dividend (the number being divided) to its right, inside the division bracket.

Step 2: Figure out how many times the divisor can be divided into the first digit or set of digits of the dividend. Write the quotient (the outcome of the division) above that digit or set of digits.

Step 3: Multiply the quotient digit by the divisor and note down the result below the digit or set of digits.

Step 4: Subtract the outcome obtained in step 3 from the digit or set of digits in the dividend. Write the remainder (the amount remaining after the division) below the subtraction.

Step 5: Bring down the following digit or set of digits from the dividend and append it to the remainder.

Step 6: Repeat steps 2 to 5 until all the digits in the dividend have been processed.

Examples of Long Division

Here are a few examples of long division:

Example 1: Divide 562 by 4.
4 | 562
5 ÷ 4 gives quotient digit 1, remainder 1; bring down 6 to make 16. 16 ÷ 4 gives quotient digit 4, remainder 0; bring down 2. 2 ÷ 4 gives quotient digit 0, remainder 2.
So, 562 divided by 4 is 140 with a remainder of 2.

Example 2: Divide 1789 by 21.
21 | 1789
17 is smaller than 21, so take 178. 178 ÷ 21 gives quotient digit 8 (21 × 8 = 168), remainder 10; bring down 9 to make 109. 109 ÷ 21 gives quotient digit 5 (21 × 5 = 105), remainder 4.
Thus, 1789 divided by 21 is 85 with a remainder of 4.

Example 3: Divide 3475 by 83.
83 | 3475
34 is smaller than 83, so take 347. 347 ÷ 83 gives quotient digit 4 (83 × 4 = 332), remainder 15; bring down 5 to make 155. 155 ÷ 83 gives quotient digit 1 (83 × 1 = 83), remainder 72.
As a result, 3475 divided by 83 is 41 with a remainder of 72.
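The procedure in Steps 1-6 can also be written as a short program. The Python sketch below is an illustration added here (it is not from the original article); it performs long division digit by digit and reproduces the quotients and remainders of the three examples above.

def long_division(dividend: int, divisor: int):
    # Digit-by-digit long division, following Steps 1-6 above.
    if divisor == 0:
        raise ValueError("divisor must not be zero")
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):           # Step 5: bring down the next digit
        remainder = remainder * 10 + int(digit)
        q = remainder // divisor          # Step 2: how many times does the divisor fit?
        quotient_digits.append(str(q))    # write the quotient digit above
        remainder -= q * divisor          # Steps 3-4: multiply and subtract
    return int("".join(quotient_digits)), remainder

for dividend, divisor in [(562, 4), (1789, 21), (3475, 83)]:
    q, r = long_division(dividend, divisor)
    print(dividend, "divided by", divisor, "is", q, "with a remainder of", r)
# 562 divided by 4 is 140 with a remainder of 2
# 1789 divided by 21 is 85 with a remainder of 4
# 3475 divided by 83 is 41 with a remainder of 72

Each pass through the loop corresponds to one bring down, divide, multiply, subtract cycle of the written method.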
Common Mistakes in Long Division

Long division can be a challenging skill to master, and there are several common errors that students make when working with long division. One general mistake is to forget to write down the remainder when dividing. Another common mistake is to incorrectly place the decimal point while dividing decimal numbers. Learners may also forget to carry over numbers when subtracting the product from the dividend.

To avoid making these errors, it is essential to practice long division regularly and to carefully review each step of the process. It can further be helpful to check your work using a calculator or by reversing the division (multiplying the quotient by the divisor and adding the remainder) to make sure that your solution is correct. Furthermore, it is important to understand the fundamental principles of long division, for example, the connection among the quotient, dividend, divisor, and remainder. By mastering the basics of long division and avoiding the usual errors, everyone can improve their abilities and gain confidence in their ability to figure out complicated problems.

Finally, long division is a crucial mathematical skill which is important for working out complex problems in several fields. It is used in finance, engineering, science, and mathematics, making it a crucial skill for professionals and learners alike. By mastering the steps involved in long division and comprehending how to apply them to real-life problems, anyone can obtain a detailed comprehension of the complex workings of the world surrounding us.

If you require guidance comprehending long division or any other arithmetic idea, Grade Potential Tutoring is here to help. Our expert tutors are accessible remotely or face-to-face to give customized and productive tutoring services to support your success. Our tutors can guide you through the steps of long division and other arithmetic concepts, support you in working out complicated challenges, and give you the tools you need to excel in your studies. Reach us right now to schedule a tutoring session and take your math skills to the next level.
{"url":"https://www.orlandoinhometutors.com/blog/long-division-steps-examples","timestamp":"2024-11-03T15:24:59Z","content_type":"text/html","content_length":"75863","record_id":"<urn:uuid:ce53bd50-b018-4890-8f6a-0b9d4a9136c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00630.warc.gz"}
cons_lop.h File Reference

Detailed Description

constraint handler for linear ordering constraints

Author: Marc Pfetsch

This constraint ensures that a given square matrix of binary variables corresponds to a tournament, i.e., it is an acyclic orientation of the complete graph. This encodes a linear order as follows. The rows and columns correspond to the elements of the set to be ordered. A variable x[i][j] is 1 if and only if element i appears before j in the order. In this constraint handler we only add the symmetry equations and separate the triangle inequalities, yielding a correct IP model. The variables on the diagonal are ignored.

Definition in file cons_lop.h.

Functions

SCIP_RETCODE SCIPincludeConshdlrLOP (SCIP *scip)

SCIP_RETCODE SCIPcreateConsLOP (SCIP *scip, SCIP_CONS **cons, const char *name, int n, SCIP_VAR ***vars, SCIP_Bool initial, SCIP_Bool separate, SCIP_Bool enforce, SCIP_Bool check, SCIP_Bool propagate, SCIP_Bool local, SCIP_Bool modifiable, SCIP_Bool dynamic, SCIP_Bool removable, SCIP_Bool stickingatnode)

Function Documentation

◆ SCIPincludeConshdlrLOP()

SCIP_RETCODE SCIPincludeConshdlrLOP (SCIP *scip)

creates the handler for linear ordering constraints and includes it in SCIP

Definition at line 1176 of file cons_lop.c.

References CONSHDLR_PROPFREQ, CONSHDLR_SEPAFREQ, CONSHDLR_SEPAPRIORITY, NULL, SCIP_CALL, SCIP_OKAY, SCIPincludeConshdlrBasic(), SCIPsetConshdlrCopy(), SCIPsetConshdlrDelete(), SCIPsetConshdlrExit(), SCIPsetConshdlrInitlp(), SCIPsetConshdlrPrint(), SCIPsetConshdlrProp(), SCIPsetConshdlrResprop(), SCIPsetConshdlrSepa(), and SCIPsetConshdlrTrans().

Referenced by main(), and SCIP_DECL_CONSHDLRCOPY().

◆ SCIPcreateConsLOP()

SCIP_RETCODE SCIPcreateConsLOP (SCIP *scip, SCIP_CONS **cons, const char *name, int n, SCIP_VAR ***vars, SCIP_Bool initial, SCIP_Bool separate, SCIP_Bool enforce, SCIP_Bool check, SCIP_Bool propagate, SCIP_Bool local, SCIP_Bool modifiable, SCIP_Bool dynamic, SCIP_Bool removable, SCIP_Bool stickingatnode)

creates and captures a linear ordering constraint

Parameters:
- scip: SCIP data structure
- cons: pointer to hold the created constraint
- name: name of constraint
- n: number of elements
- vars: n x n matrix of binary variables
- initial: should the LP relaxation of constraint be in the initial LP?
- separate: should the constraint be separated during LP processing?
- enforce: should the constraint be enforced during node processing?
- check: should the constraint be checked for feasibility?
- propagate: should the constraint be propagated during node processing?
- local: is constraint only valid locally?
- modifiable: is constraint modifiable (subject to column generation)?
- dynamic: is constraint subject to aging?
- removable: should the relaxation be removed from the LP due to aging or cleanup?
- stickingatnode: should the constraint always be kept at the node where it was added, even if it may be moved to a more global node?

Definition at line 1204 of file cons_lop.c.

References CONSHDLR_NAME, NULL, SCIP_CALL, SCIP_OKAY, SCIP_PLUGINNOTFOUND, SCIPallocBlockMemory, SCIPallocBlockMemoryArray, SCIPcreateCons(), SCIPerrorMessage, and SCIPfindConshdlr().

Referenced by SCIP_DECL_CONSCOPY(), and SCIP_DECL_READERREAD().
{"url":"https://scipopt.org/doc/html/cons__lop_8h.php","timestamp":"2024-11-10T11:01:24Z","content_type":"text/html","content_length":"23377","record_id":"<urn:uuid:0efce85b-fab6-4549-8741-7e66ac366bf5>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00653.warc.gz"}
What Are The Components Of Real Number System?

1 Answer

The real number system expands the concept of what we mean by number. You have the definition of 'number' which means something you can count, such as how many items are in one basket. These are natural numbers or counting numbers. They are one component of the real number system. The information below describes further components of the real number system:

• Whole numbers: These are the natural numbers along with zero added to the sequence.
• Integers: These are the whole numbers together with their negatives, so rather than starting at 0, the list also runs backwards through negatives such as -4, -3, -2, -1, then 0, and then the natural numbers.
• Rational numbers: These include fractions A/B, where A and B are integers; the only restriction is that B cannot be 0.
• Irrational numbers: These cannot be expressed as a ratio of integers; written as decimals, they never repeat or terminate.

In math, the real number system is often shown in a circular pattern with the natural numbers in the middle and the circles becoming larger as it goes out. Irrational numbers are in a circle on their own. This nesting shows how the different kinds of real numbers fit inside one another, and a number line is often drawn alongside to show the full ordered pattern. The real number system also involves absolute value, which is the distance of a number from 0, the origin. Many references discuss the real number system and its components in further detail, explaining related topics such as positive integer exponents, order of operations and much more.
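As a compact summary (an illustration added to this answer, not part of the original), the components nest inside one another, which can be written as a chain of inclusions:

$ \mathbb{N} \subset \mathbb{W} \subset \mathbb{Z} \subset \mathbb{Q} \subset \mathbb{R}, \qquad \mathbb{R} = \mathbb{Q} \cup \{\text{irrational numbers}\} $

where N stands for the natural (counting) numbers, W for the whole numbers, Z for the integers, Q for the rationals, and R for the reals. For example, 7 is natural, 0 is whole but not natural, -4 is an integer but not a whole number, 3/4 is rational but not an integer, and the square root of 2 is real but not rational.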
{"url":"https://education.blurtit.com/2977877/what-are-the-components-of-real-number-system","timestamp":"2024-11-11T09:52:32Z","content_type":"text/html","content_length":"56769","record_id":"<urn:uuid:81c8ca50-210e-4266-9f5c-1fe459d9dbd6>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00533.warc.gz"}
Compare & Order Numbers 2nd Grade Texas Essential Knowledge and Skills (TEKS): 2.2.C generate a number that is greater than or less than a given whole number up to 1,200; Texas Essential Knowledge and Skills (TEKS): 2.2.D use place value to compare and order whole numbers up to 1,200 using comparative language, numbers, and symbols (>, <, or =); Florida - Benchmarks for Excellent Student Thinking: MA.2.NSO.1.3 Plot, order and compare whole numbers up to 1,000.
{"url":"https://www.learningfarm.com/web/practicePassThrough.cfm?TopicID=2196","timestamp":"2024-11-13T22:37:13Z","content_type":"application/xhtml+xml","content_length":"27231","record_id":"<urn:uuid:95325d35-dc42-4d58-beb0-3eb66620d09c>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00089.warc.gz"}
Support Vector Regression Tutorial for Machine Learning Support Vector Machines (SVM) are widely used in machine learning for classification problems, but they can also be applied to regression problems through Support Vector Regression (SVR). SVR uses the same principles as SVM but focuses on predicting continuous outputs rather than classifying data points. This tutorial will explore SVR’s work, emphasizing key concepts such as quadratic, radial basis function, and sigmoid kernels. By leveraging these kernels, SVR can effectively handle complex, non-linear relationships in data. We will also demonstrate how to implement SVR in Python using training samples, showcasing its practical applications in artificial intelligence. In this article you will get understanding about the Support Vector Regression Mdoel. So, Support vector regression (SVR) is a robust machine learning method utilized for forecasting continuous results. The SVR model, unlike typical regression models, employs support vector machines (SVMs) principles to transform input features into high-dimensional spaces to locate the ideal hyperplane that accurately represents the data. This method enables support vector regression (SVR) to effectively manage both linear and non-linear relationships, rendering it a versatile tool across different fields, such as financial forecasting and scientific research. Utilizing the distinctive features of support vector machine regression allows SVR models to attain high accuracy and robustness, even when dealing with intricate datasets. Learning Outcomes • Grasp the fundamental concepts of Support Vector Machine Regression, including hyperplanes, margins, and how SVM separates data into different classes. • Recognize the key differences between Support Vector Machines for classification and Support Vector Regression for regression problems. • Learn about important SVR hyperparameters, such as kernel types (quadratic, radial basis function, and sigmoid), and how they influence the model’s performance. • Gain practical experience in implementing Support Vector Regression using Python, including data preprocessing, feature scaling, and model training. • Use SVR to predict continuous outputs in various contexts, demonstrating its application in fields like finance, engineering, and healthcare. • Develop skills to visualize the results of SVM for Regression, understand how to interpret the best-fit line, and understand the impact of different kernels on the model’s predictions. • Learn how to assess the performance of SVR models using appropriate metrics and techniques, ensuring accurate and reliable predictions. What is a Support Vector Machine (SVM)? A Support Vector Machine (SVM) is a supervised machine learning algorithm used for classification and regression tasks. SVM works by finding a hyperplane in a high-dimensional space that best separates data into different classes. It aims to maximize the margin (the distance between the hyperplane and the nearest data points of each class) while minimizing classification errors. SVM can handle both linear and non-linear classification problems by using various kernel functions. It’s widely used in tasks such as image classification, text categorization, and more. So what exactly is Support Vector Machine (SVM)? We’ll start by understanding SVM in simple terms. Let’s say we have a plot of two label classes as shown in the figure below: Can you decide what the separating line will be? 
You might have come up with this: The line fairly separates the classes. This is what SVM essentially does – simple class separation. Now, what is the data was like this: Here, we don’t have a simple line separating these two classes. So we’ll extend our dimension and introduce a new dimension along the z-axis. We can now separate these two classes: When we transform this line back to the original plane, it maps to the circular boundary as I’ve shown here: This is exactly what Support Vector Machine Regression does! It tries to find a line/hyperplane (in multidimensional space) that separates these two classes. Then it classifies the new point depending on whether it lies on the positive or negative side of the hyperplane depending on the classes to predict. Hyperparameters of the Support Vector Machine (SVM) Algorithm There are a few important parameters of SVM that you should be aware of before proceeding further: • Kernel: A kernel helps us find a hyperplane in the higher dimensional space without increasing the computational cost. Usually, the computational cost will increase if the dimension of the data increases. This increase in dimension is required when we are unable to find a separating hyperplane in a given dimension and are required to move in a higher dimension: • Hyperplane: This is basically a separating line between two data classes in SVM. But in Support Vector Regression, this is the line that will be used to predict the continuous output • Decision Boundary: A decision boundary can be thought of as a demarcation line (for simplification) on one side of which lie positive examples and on the other side lie the negative examples. On this very line, the examples may be classified as either positive or negative. This same concept of SVM will be applied in Support Vector Regression as well To understand SVM from scratch, I recommend this tutorial: Understanding Support Vector Machine(SVM) algorithm from examples. Introduction to Support Vector Regression (SVR) Support Vector Regression (SVR) is a machine learning algorithm used for regression analysis. SVR Model in Machine Learning aims to find a function that approximates the relationship between the input variables and a continuous target variable while minimizing the prediction error. Unlike Support Vector Machines (SVMs) used for classification tasks, SVR Model seeks a hyperplane that best fits the data points in a continuous space. This is achieved by mapping the input variables to a high-dimensional feature space and finding the hyperplane that maximizes the margin (distance) between the hyperplane and the closest data points, while also minimizing the prediction error. SVR Model can handle non-linear relationships between the input and target variables by using a kernel function to map the data to a higher-dimensional space. This makes it a powerful tool for regression tasks where complex relationships may exist. Support Vector Regression (SVR) uses the same principle as SVM but for regression problems. Let’s spend a few minutes understanding the idea behind SVR in Machine Learning. The Idea Behind Support Vector Regression The problem of regression is to find a function that approximates mapping from an input domain to real numbers based on a training sample. So, let’s dive deep and understand how SVR actually works. Consider these two red lines as the decision boundary and the green line as the hyperplane. 
When we move on with SVR in Machine Learning, our objective is to consider the points within the decision boundary lines. Our best-fit line is the hyperplane that has the maximum number of points within them. The first thing that we'll understand is what the decision boundary is (the red lines above!). Consider these lines as being at some distance, say 'a', from the hyperplane. So, these are the lines that we draw at distances '+a' and '-a' from the hyperplane. This 'a' in the text is basically referred to as epsilon.

Assuming that the equation of the hyperplane is as follows:

Y = wx + b (equation of the hyperplane)

then the equations of the decision boundaries become:

wx + b = +a
wx + b = -a

Thus, the data points that our SVR model tries to fit should satisfy:

-a < Y - (wx + b) < +a

Our main aim here is to choose a decision boundary at a distance 'a' from the original hyperplane such that the data points closest to the hyperplane, the support vectors, are within that boundary. Hence, we will take only those points within the decision boundary that have the least error rate, or are within the margin of tolerance. This will give us a better-fitting model.

Implementing Support Vector Regression (SVR) in Python

Time to put on our coding hats! In this section, we'll understand the use of Support Vector Regression with the help of a dataset. Here, we have to predict the salary of an employee, given a few independent variables. A classic HR analytics project!

Step 1: Importing the libraries

Step 2: Reading the dataset

Step 3: Feature Scaling

A real-world dataset contains features that vary in magnitudes, units, and range. I would suggest performing normalization when the scale of a feature is irrelevant or misleading. Feature scaling basically helps to normalize the data within a particular range. Many commonly used model classes perform feature scaling automatically; however, the SVR class does not, so we should perform feature scaling ourselves in Python.

Step 4: Fitting SVR to the dataset

The kernel is the most important parameter here. There are many types of kernels – linear, Gaussian, etc. Each is used depending on the dataset. To learn more about this, read this: Support Vector Machine (SVM) in Python and R

Step 5: Predicting a New Result

So, the prediction for y_pred(6, 5) will be 170,370.

Step 6: Visualizing the SVR results (for higher resolution and smoother curve)

This is what we get as output: the best-fit line that has the maximum number of points. Quite accurate! (A sketch of the code for these steps is given below.)

What is the difference between SVM and SVR?

Support Vector Machines (SVM) and Support Vector Regression (SVR) are supervised learning techniques employed in machine learning with unique functions and features.

Key Differences: Support Vector Machine (SVM) is mostly utilized for tasks involving classification. The goal is to locate the best hyperplane that divides distinct classes within the feature space. The objective is to maximize the distance between the hyperplane and the nearest points of the distinct classes, which are referred to as support vectors. SVR is utilized for tasks involving regression. It forecasts values that are continuous instead of discrete category labels. SVR aims to maximize the number of data points fitting within a given margin of tolerance (epsilon) while reducing errors outside this range.

Support Vector Regression (SVR) extends the principles of Support Vector Machines (SVM) to regression problems, offering a powerful tool for predicting continuous outputs.
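The code behind Steps 1-6 can be sketched as follows. This is a hedged reconstruction rather than the tutorial's original listing: the file name Position_Salaries.csv, the column layout (position level in the second column, salary in the third) and the queried level 6.5 are assumptions made only so that the example is self-contained.

# Step 1: importing the libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Step 2: reading the dataset (assumed layout: level in column 1, salary in column 2)
dataset = pd.read_csv("Position_Salaries.csv")
X = dataset.iloc[:, 1:2].values.astype(float)
y = dataset.iloc[:, 2:3].values.astype(float)

# Step 3: feature scaling (the SVR class does not scale data for us)
sc_X, sc_y = StandardScaler(), StandardScaler()
X_scaled = sc_X.fit_transform(X)
y_scaled = sc_y.fit_transform(y).ravel()

# Step 4: fitting SVR to the dataset with an RBF (Gaussian) kernel
regressor = SVR(kernel="rbf")
regressor.fit(X_scaled, y_scaled)

# Step 5: predicting a new result, e.g. for position level 6.5
pred_scaled = regressor.predict(sc_X.transform(np.array([[6.5]])))
y_pred = sc_y.inverse_transform(pred_scaled.reshape(-1, 1))
print(y_pred)   # roughly 170,370 on the assumed dataset

# Step 6: visualising the SVR results on a fine grid for a smoother curve
X_grid = np.arange(X.min(), X.max(), 0.01).reshape(-1, 1)
y_grid = sc_y.inverse_transform(
    regressor.predict(sc_X.transform(X_grid)).reshape(-1, 1))
plt.scatter(X, y, color="red")
plt.plot(X_grid, y_grid, color="blue")
plt.title("SVR fit")
plt.xlabel("Position level")
plt.ylabel("Salary")
plt.show()

Note the inverse transforms in Steps 5 and 6: because both X and y are scaled before fitting, predictions come out on the scaled axis and have to be mapped back to the original salary units before they can be read off.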
By leveraging various kernels such as quadratic, radial basis function, and sigmoid, SVR Model can handle complex and non-linear relationships in the data. Through this tutorial, we’ve explored the essential hyperparameters, implemented SVR in Python, and applied it to real-world datasets, demonstrating its versatility in artificial intelligence applications. Whether dealing with training samples in finance, engineering, or healthcare, SVR Model provides a robust approach to model continuous data effectively, enhancing the accuracy and reliability of predictive analytics. Hope you like the article! Support vector regression (SVR) uses support vector machines to forecast continuous results, effectively managing linear and non-linear correlations. The SVR model shows robustness, versatility, and accuracy across different applications. If you found this information helpful, feel free to Share it. Key Takeaways • SVR extends Support Vector Machines (SVM) into regression problems, allowing for the prediction of continuous outcomes rather than classifying data into discrete categories as with a classifier. • SVR utilizes various kernel functions, such as quadratic, radial basis function, and sigmoid, to handle non-linear relationships in data, akin to how neural networks manage complex patterns. • Effective hyperparameter tuning, including choosing the right kernel and setting the epsilon parameter, is vital for maximizing SVR performance, similar to the role of gradient optimization in neural networks. • The SVR Model offers greater flexibility and robustness compared to traditional linear regression. It finds a hyperplane that best fits the data within a specified margin, making it suitable for more complex datasets. • Unlike logistic regression, primarily used for binary classification problems, Support Vector Regression (SVR) focuses on predicting continuous outcomes. SVR in Machine Learning leverages kernel functions to handle non-linear relationships in data, offering a more versatile approach for regression tasks. Frequently Asked Questions Q1. What are the applications of SVM regression? A. Support Vector Regression (SVM) is a versatile algorithm used in finance, engineering, bioinformatics, natural language processing, image processing, and healthcare for accurate predictions. It commonly predicts stock prices, machine performance, protein structures, text classifications, sentiment analysis, object recognition, and medical outcomes. Q2. How does the regularization parameter in SVM affect the regression model? A. Regularization is a technique that avoids overfitting by penalizing large coefficients in the model. In SVM for Regression, the regularization parameter determines the trade-off between achieving a low error on the training data and minimizing the complexity of the regression model. A higher value of the regularization parameter increases the penalty for large coefficients, which helps to prevent the model from fitting the noise in the training data. Q3. What are the benefits of using a polynomial kernel in SVM for regression? A. A polynomial kernel helps in fitting a regression model that can capture more complex relationships in the input data. It transforms the original features into polynomial features of a given degree, thus allowing the model to learn non-linear relationships. This is especially beneficial in scenarios where the relationship between the dependent and independent variables is not linear, providing a more flexible and powerful model. Q4. 
What is Simple View of Reading (SVR) Model? Reading comprehension depends on two main skills: Word recognition: Identifying words quickly and accurately. Language comprehension: Understanding the meaning of words and sentences. Both skills are equally important for strong reading. Think of reading as a multiplication problem: good word recognition * good language comprehension = good reading comprehension. Responses From Readers Thanks for the article,it gave an intuitive understanding about SVR It would be really helpful if you could also include the dataset,used for the demonstration. The code is completely irrelevant to the dataset shown in the picture. Also this code is from Udemy course by Kiril Ermenko. Atleast give them the credit when you have plagiarized the code and content of the tutorial from elsewhere. Thank you for this article, is very clear and helpful. However, I have one question on the example you gave. And My question concern characteristics variables (X) and target variables (Y). How to use SVR if we have more then one (1) characteristic variables. Like if we want to consider Salary against position level and age? Why we used inverse transform in step 5 line 2
{"url":"https://www.analyticsvidhya.com/blog/2020/03/support-vector-regression-tutorial-for-machine-learning/?utm_source=reading_list&utm_medium=https://www.analyticsvidhya.com/blog/2021/04/intuition-behind-correlation-definition-and-its-types/","timestamp":"2024-11-01T19:59:38Z","content_type":"text/html","content_length":"459054","record_id":"<urn:uuid:aedd1152-3d17-4666-9b83-a0ec84dfd24d>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00004.warc.gz"}
Counting at scale How engaged are users for a certain segment of the population? How many users are actively using a new feature? One way to answer that question is to compute the engagement ratio (ER) for that segment, which is defined as daily active users (DAU) over monthly active users (MAU), i.e. $$ ER_{segment} = \frac{DAU_{segment}}{MAU_{segment}} $$ Intuitively the closer the ratio is to 1, the higher the number of returning users. A segment can be an arbitrary combination of values across a set of dimensions, like operating system and activity Ideally, given a set of $ D $ dimensions, pre-computing the ER for all possible combinations of values of said dimensions would ensure that user queries run as fast as possible. Clearly that’s not very efficient though; if we assume for simplicity that every dimension has $ k $ possible values, then there are $ \sum_{d=1}^{D}\binom{D}{d} k^d $ ratios. Is there a way around computing all of those ratios while still having an acceptable query latency? One could build a set of users for each value of each dimension and then at query time simply use set union and set intersection to compute any desired segment. Unfortunately that doesn’t work when you have millions of users as the storage complexity is proportional to the number of items stored in the sets. This is where probabilistic counting and the HyperLogLog sketch comes into play. The HyperLogLog sketch can estimate cardinalities well beyond $ 10^9 $ with a standard error of 2% while only using a couple of KB of memory. The intuition behind is simple. Imagine a hash function that maps user identifiers to a string of N bits. A good hash function ensures that each bit has the same probability of being flipped. If that’s the case then the following is true: • 50 % of hashes have the prefix 1 • 25 % of hashes have the prefix 01 • 12.5% of hashes the prefix 001 • … Intuitively, if 4 distinct values are hashed we would expect to see on average one single hash with a prefix of 01 while for 8 distinct values we would expect to see one hash with a prefix of 001 and so on. In other words, the cardinality of the original set can be estimated from the longest prefix of zeros of the hashed values. To reduce the variability of this single estimator, the average of K estimators can be used as the approximated cardinality and it can be shown that the standard error of a HLL sketch is $ \frac{1.04}{sqrt(K)} $. The detailed algorithm is documented in the original paper, and its practical implementation and variants are covered in depth by a 2013 paper from Google. Set operations One of the nice properties of HLL is that the union of two HLL sketches is very simple to compute as the union of a single estimator with another estimator is just the maximum of the two estimators, i.e. the longest prefix of zeros. This property makes the algorithm trivially parallelizable which makes it well suited for map-reduce style computations. What about set intersection? There are two ways to compute that for e.g. two sets: • Using the inclusion-exclusion principle: $ |A \cap B| = |A| + |B| - |A \cup B| $. • Using the MinHash (MH) sketch, which estimates the Jaccard index that measures how similar two sets are: $ \frac{|A \cap B|}{|A \cup B|} $. Given the MH sketch one could estimate the intersection with $ |A \cap B| \approx MH(A, B) \times |A \cup B| $. It turns out that both approaches yield bad approximations when the overlap is small, which makes set intersection not particularly attractive for many use-cases. 
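Returning to the estimator itself, the leading-zero intuition can be made concrete in a few lines of Python. The sketch below is an illustration only, not the full HyperLogLog algorithm: it keeps a single estimator instead of averaging K of them, so its variance is far higher than the 2% standard error quoted above.

import hashlib

def crude_cardinality_estimate(items, bits=32):
    # Hash each item, track the longest run of leading zeros seen,
    # and estimate the cardinality from that single observation.
    max_zeros = 0
    for item in items:
        h = int.from_bytes(hashlib.sha1(item.encode()).digest()[:4], "big")
        zeros = bits - h.bit_length()   # leading zeros in a 32-bit hash
        max_zeros = max(max_zeros, zeros)
    return 2 ** (max_zeros + 1)

users = [f"user-{i}" for i in range(100_000)]
print(crude_cardinality_estimate(users))  # right order of magnitude, but noisy

HyperLogLog improves on this by splitting the hash space into K buckets, keeping one such observation per bucket, and combining them with a bias-corrected harmonic mean, which is what brings the error down to roughly 1.04/sqrt(K).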
At Mozilla we use both Spark and Presto for analytics and even though both support HLL their implementation is not compatible, i.e. Presto can’t read HLL sketches serialized from Spark. To that end we created a Spark package, spark-hyperloglog, and a Presto plugin, presto-hyperloglog, to extend both systems with the same HLL implementation. #Observability #Statistics
{"url":"https://robertovitillo.com/counting-at-scale/","timestamp":"2024-11-03T05:48:31Z","content_type":"text/html","content_length":"12478","record_id":"<urn:uuid:a1f335bf-c9cf-4574-a425-fd29f34c4f59>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00523.warc.gz"}
Sopher's Index: Measures of Inequality

Measuring inequality is key to the formation of policy for equitable distribution of the benefits of economic growth to people. Sopher's Index is a method which measures inter-personal inequality between groups of people and between regions.

What is Sopher's Index?

• It is used to calculate inter-personal disparity between two groups of people or regions.
• Need for the log scale: This index uses a log scale to normalize the effect of the base value. For example, suppose male and female literacy are 40% and 30% respectively in District A, and 80% and 70% respectively in District B. The difference is 10 percentage points in both districts, so without a log scale the two districts would appear equally unequal, even though the relative gap is much larger in District A. Hence, we use a log scale.
• A Sopher's Index value of zero means perfect equality.
• There is no upper limit. As the number moves higher than zero, inequality/disparity keeps on increasing.
• This is why the numbers in themselves do not mean anything when read alone. These numbers should be interpreted in relation to different time periods; e.g., the disparity of 1991 will only mean something when compared to the disparity of 2001 and 2011 (Table 1).
• It does not calculate the variation and deviation within a single series of data but between two groups of series. For instance, the disparity between male and female literacy in the districts of Delhi between 1991 and 2011 can be calculated by this index (Table 1).

The formula is as follows.

Equation 1: Sopher's Index

D = log(X2 / X1) + log((100 - X1) / (100 - X2))

where D = disparity, X2 = higher value, X1 = lower value (both expressed as percentages), and log is taken to base 10.

Let's elaborate through a practical example.

Sopher's Disparity Index for Literacy Rate in Delhi

Census Year | Male Literacy (X2) | Female Literacy (X1) | Sopher's Index
1991 | 64.13 | 39.29 | 0.441
2001 | 75.26 | 53.67 | 0.419
2011 | 82.14 | 65.46 | 0.385

• In this example the male literacy is higher than the female literacy; therefore, male literacy has been denoted as X2 whereas female literacy is denoted as X1.
• X1 and X2 must be percentage values.
• After applying the formula in Excel, i.e. =log(X2/X1)+log((100-X1)/(100-X2)), we arrive at Sopher's Index for the years 1991, 2001 and 2011.
• It can be observed that this value has declined from 0.441 to 0.385, which means that the inter-personal disparity in literacy between males and females has also declined.

Modified Sopher's Index

• Sopher's Index was modified by Kundu and Rao in 1983.
• Need for modification: The original index fails to satisfy the principle of additive monotonicity. This means that if we add a constant number to all the figures in a positive data series, the inequality declines. Further, if one of the variables is 100%, the term (100 - X2) becomes 0 and the formula breaks down, making the original formula unusable. Hence, Kundu and Rao modified it.

The modified formula is as follows.

Equation 2: Modified Sopher's Index (Kundu and Rao, 1983)

D = log(X2 / X1) + log((200 - X1) / (200 - X2))

where D = disparity, X2 = higher value, and X1 = lower value.

Kulwinder Singh is an alumnus of Jawaharlal Nehru University, New Delhi, and works as Assistant Professor of Geography at Pt. C.L.S. Government College, Kurukshetra University. He is a passionate teacher and avid learner.
{"url":"https://pangeography.com/sophers-index-measures-of-inequality/","timestamp":"2024-11-02T17:04:46Z","content_type":"text/html","content_length":"84083","record_id":"<urn:uuid:1a8483df-362a-40b7-8b27-585084acfd82>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00139.warc.gz"}
Linear regression: Loss | Machine Learning | Google for Developers

Loss is a numerical metric that describes how wrong a model's predictions are. Loss measures the distance between the model's predictions and the actual labels. The goal of training a model is to minimize the loss, reducing it to its lowest possible value.

In the following image, you can visualize loss as arrows drawn from the data points to the model. The arrows show how far the model's predictions are from the actual values.

Figure 9. Loss is measured from the actual value to the predicted value.

Distance of loss

In statistics and machine learning, loss measures the difference between the predicted and actual values. Loss focuses on the distance between the values, not the direction. For example, if a model predicts 2, but the actual value is 5, we don't care that the loss is negative $ -3 $ ($ 2-5=-3 $). Instead, we care that the distance between the values is $ 3 $. Thus, all methods for calculating loss remove the sign. The two most common methods to remove the sign are the following:

• Take the absolute value of the difference between the actual value and the prediction.
• Square the difference between the actual value and the prediction.

Types of loss

In linear regression, there are four main types of loss, which are outlined in the following table.

• L1 loss. Definition: the sum of the absolute values of the difference between the predicted values and the actual values. Equation: $ ∑ | actual\ value - predicted\ value | $
• Mean absolute error (MAE). Definition: the average of L1 losses across a set of examples. Equation: $ \frac{1}{N} ∑ | actual\ value - predicted\ value | $
• L2 loss. Definition: the sum of the squared difference between the predicted values and the actual values. Equation: $ ∑ (actual\ value - predicted\ value)^2 $
• Mean squared error (MSE). Definition: the average of L2 losses across a set of examples. Equation: $ \frac{1}{N} ∑ (actual\ value - predicted\ value)^2 $

The functional difference between L1 loss and L2 loss (or between MAE and MSE) is squaring. When the difference between the prediction and label is large, squaring makes the loss even larger. When the difference is small (less than 1), squaring makes the loss even smaller.

When processing multiple examples at once, we recommend averaging the losses across all the examples, whether using MAE or MSE.

Calculating loss example

Using the previous best fit line, we'll calculate L2 loss for a single example. From the best fit line, we had the following values for weight and bias:

• $ \small{Weight: -3.6} $
• $ \small{Bias: 30} $

If the model predicts that a 2,370-pound car gets 21.5 miles per gallon, but it actually gets 24 miles per gallon, we would calculate the L2 loss as follows:

• Prediction. Equation: $\small{bias + (weight * feature\ value)}$, i.e. $\small{30 + (-3.6*2.37)}$. Result: $\small{21.5}$
• Actual value. Equation: $ \small{ label } $. Result: $ \small{ 24 } $
• L2 loss. Equation: $ \small{ (prediction - actual\ value)^2} $, i.e. $\small{ (21.5 - 24)^2 }$. Result: $\small{6.25}$

In this example, the L2 loss for that single data point is 6.25.

Choosing a loss

Deciding whether to use MAE or MSE can depend on the dataset and the way you want to handle certain predictions. Most feature values in a dataset typically fall within a distinct range. For example, cars are normally between 2000 and 5000 pounds and get between 8 to 50 miles per gallon. An 8,000-pound car, or a car that gets 100 miles per gallon, is outside the typical range and would be considered an outlier.
An outlier can also refer to how far off a model's predictions are from the real values. For instance, a 3,000-pound car or a car that gets 40 miles per gallon are within the typical ranges. However, a 3,000-pound car that gets 40 miles per gallon would be an outlier in terms of the model's prediction because the model would predict that a 3,000-pound car would get between 18 and 20 miles per When choosing the best loss function, consider how you want the model to treat outliers. For instance, MSE moves the model more toward the outliers, while MAE doesn't. L[2] loss incurs a much higher penalty for an outlier than L[1] loss. For example, the following images show a model trained using MAE and a model trained using MSE. The red line represents a fully trained model that will be used to make predictions. The outliers are closer to the model trained with MSE than to the model trained with MAE. Figure 10. A model trained with MSE moves the model closer to the outliers. Figure 11. A model trained with MAE is farther from the outliers. Note the relationship between the model and the data: • MSE. The model is closer to the outliers but further away from most of the other data points. • MAE. The model is further away from the outliers but closer to most of the other data points. Check Your Understanding Consider the following two plots: Which of the two data sets shown in the preceding plots has the higher Mean Squared Error (MSE)? The dataset on the left. The six examples on the line incur a total loss of 0. The four examples not on the line are not very far off the line, so even squaring their offset still yields a low value: $MSE = \frac{0^2 + 1^2 + 0^2 + 1^2 + 0^2 + 1^2 + 0^2 + 1^2 + 0^2 + 0^2} {10} = 0.4$ The dataset on the right. The eight examples on the line incur a total loss of 0. However, although only two points lay off the line, both of those points are twice as far off the line as the outlier points in the left figure. Squared loss amplifies those differences, so an offset of two incurs a loss four times as great as an offset of one: $MSE = \frac{0^2 + 0^2 + 0^2 + 2^2 + 0^2 + 0^2 + 0^2 + 2^2 + 0^2 + 0^2} {10} = 0.8$
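To make the formulas and the single-example calculation above concrete, here is a short Python sketch (added for illustration) that reproduces the prediction, the L2 loss for the single car example, and the MAE/MSE averages over a small set of made-up examples.

import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

# Single-example calculation from the text: weight -3.6, bias 30,
# a 2,370-pound car (feature value 2.37) that actually gets 24 mpg.
weight, bias = -3.6, 30.0
feature_value, label = 2.37, 24.0
prediction = bias + weight * feature_value     # 21.468, rounded to 21.5 in the text
l2_loss = (prediction - label) ** 2            # about 6.4 unrounded; 6.25 if the
print(prediction, l2_loss)                     # prediction is rounded to 21.5 first

# MAE and MSE simply average the per-example L1 and L2 losses.
y_true = np.array([24.0, 18.0, 33.0])
y_pred = np.array([21.5, 19.0, 30.0])
print(mae(y_true, y_pred), mse(y_true, y_pred))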
{"url":"https://developers.google.cn/machine-learning/crash-course/linear-regression/loss?authuser=2","timestamp":"2024-11-02T18:28:06Z","content_type":"text/html","content_length":"177823","record_id":"<urn:uuid:2910c119-11e1-4b0d-8d39-5a06710ae47c>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00163.warc.gz"}
A string with variable density

Orthogonal series and boundary value problems
Calculations of some Fourier series

This is an evaluated Mathematica notebook. If you have Mathematica or MathReader (which is available free from WRI), you may download the notebook file. This notebook contains some Mathematica calculations for chapter VI, Notes on a vibrating string, of Harrell's WWW textbook.

(c) Copyright 1994,1995 by Evans M. Harrell, II. All rights reserved

A string with variable density and Airy's equation

A realistic string or optical fiber may not be uniform, so some of the simplifying assumptions in our derivation of the wave equation are not valid. For example, the constant c^2 = (spring constant)/(mass density) for the string may be different at different positions, leading to an equation of the form

\partial^2 u/\partial t^2 = (c(x))^2 \partial^2 u/\partial x^2

or, as a thorough examination of the derivation of the wave equation reveals, we could more generally have:

\partial^2 u/\partial t^2 = p(x) (\partial/\partial x)(s(x) \partial u/\partial x)

for some potentially complicated positive functions p(x) and s(x). How well does the method of separation of variables do for problems like this? Rather well, actually, although we may have to encounter some new functions.

Model Problem VI.3. Suppose that the wave speed depends on position, so that c^2 = 1/(1 + x), 0 < x < 1, DBC at 0 and 1. Find the normal modes of vibration.

When we attempt to solve the equation with the ansatz u[t,x] = T[t] X[x], we find the following eigenvalue problem:

- X''[x] = (1 + x) mu X[x]

There are actually some special functions, called Airy functions, which solve the ODE y''[x] = x y[x]. Two independent solutions are called Ai(x) and Bi(x), or, according to Mathematica, AiryAi[x] and AiryBi[x].

Plot[AiryAi[x], {x, -10,10}]
Plot[AiryAi[x], {x, -5,3}];
Plot[AiryBi[x], {x, -5,3}]

The function Bi explodes exponentially to the right, while Ai decays exponentially. They both oscillate to the left (why?). By changing variables we can get these functions to solve our eigenvalue equation:

D[AiryAi[-mu^(1/3) (x+1)],{x,2}]
-(mu (1 + x) AiryAi[-(mu^(1/3) (1 + x))])

We need a linear combination Ai(- mu^(1/3) (1+x)) + C Bi(- mu^(1/3) (1+x)) which is 0 at x=0 and 0 at x=1. I avoid the cube root at this stage by letting p = mu^(1/3):

FindRoot[{AiryAi[- p] + C AiryBi[- p] == 0, \
AiryAi[- 2 p] + C AiryBi[- 2 p] == 0}, \
{p -> 1.87088, C -> 0.819688}

Plot[AiryAi[-1.87088 (1+x)] + .819688 AiryBi[-1.87088 (1+x)], {x, 0, 1}]

The fact that this function has no nodes between 0 and 1, and therefore resembles sin(π x), indicates that this is the spatial part of the fundamental (lowest-frequency) mode. The eigenvalue and normal mode are:

mu0 = p /. %%;
Mode0[t_,x_] = (A0 Cos[mu0 t] + B0 Sin[mu0 t]) *\
(AiryAi[-1.87088 (1+x)] + .819688 AiryBi[-1.87088 (1+x)])

(AiryAi[-1.87088 (1 + x)] + 0.819688 AiryBi[-1.87088 (1 + x)]) (A0 Cos[1.87088 t] + B0 Sin[1.87088 t])
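As a cross-check in a different system (an illustration added here, not part of the original notebook), the same eigenvalue condition can be solved with SciPy. The boundary conditions X(0) = X(1) = 0 require the determinant Ai(-p) Bi(-2p) - Bi(-p) Ai(-2p) to vanish, and the root bracket below is simply chosen around the value the notebook reports.

import numpy as np
from scipy.optimize import brentq
from scipy.special import airy

def Ai(z):
    return airy(z)[0]

def Bi(z):
    return airy(z)[2]

def boundary_det(p):
    # Determinant of the 2x2 system Ai(-p) + C Bi(-p) = 0, Ai(-2p) + C Bi(-2p) = 0
    return Ai(-p) * Bi(-2 * p) - Bi(-p) * Ai(-2 * p)

p0 = brentq(boundary_det, 1.7, 2.0)     # bracket chosen around the notebook's 1.87088
C0 = -Ai(-p0) / Bi(-p0)
print(p0, C0)                           # should be close to 1.87088 and 0.819688
print(p0 ** 3)                          # the eigenvalue mu = p^3, since p = mu^(1/3)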
{"url":"http://mathphysics.com/pde/ch6bnb.html","timestamp":"2024-11-08T06:22:57Z","content_type":"text/html","content_length":"4851","record_id":"<urn:uuid:ff0108a0-932b-4e6a-9573-a09bbf1b404a>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00772.warc.gz"}
Looking at Matrix Groups
In an earlier blog we examined the fascinating mathematical structures called groups. We recall here the properties that make a group, from the definition given: A GROUP is a set G with one binary operation defined on it, and G satisfies the axioms:
a) Associative law: a · (b · c) = (a · b) · c
b) Identity element (e): there exists an element e in G such that: e · a = a for all a in G and a · e = a
c) Inverse element: For any a in G there exists an element a^-1 in G such that: a · a^-1 = a^-1 · a = e
There was also the additional property that defined an Abelian Group:
d) If (a · b) = (b · a) for all elements a, b in G, then G is said to be an Abelian Group. (Commutative property).
We now want to see how these apply to groups that are posed in matrix form. A matrix is an assembly of numerical quantities in the form:
(a11 a12)
(a21 a22)
Matrix multiplication, the main operation we will need to know, is easily obtained. I.e. say the above matrix is denoted A, and another B =
(b11 b12)
(b21 b22)
then A x B =
(a11 a12) (b11 b12)
(a21 a22) (b21 b22)
=
(a11b11 + a12b21   a11b12 + a12b22)
(a21b11 + a22b21   a21b12 + a22b22)
For example, let A =
(1 2)
(1 2)
and B =
(1 3)
(2 2)
then A x B =
(5 7)
(5 7)
The reader is invited to verify this for himself. We are now in a position to look at some simple 2 x 2 matrix groups, and also assess whether they're Abelian or not. One example is the special unitary group, SU2. The elements of SU2 are the unitary 2 x 2 matrices with Det = 1 (determinant). [Note: the determinant is taken as follows, using the elements of matrix A: Det[A] = (a11 x a22) - (a12 x a21); for the example matrix A above, Det[A] = 2 - 2 = 0.] The elements are shown in their standard matrix form in Fig. 1, and the reader should easily be able to verify that the elements form a group. It can be seen, for example, that: II*II = J*J = J^2 = K*K = K^2 = -I, and II*J = -J*II = K.
Another interesting group is PSL(2, Z) which has generators:
s = (0 1)
    (-1 0)
t = (0 -1)
    (1 -1)
Yet another interesting structure built from 2 x 2 matrices is the Lie algebra sl(2), which has generators h, e and f such that:
h = (1 0)
    (0 -1)
e = (0 1)
    (0 0)
f = (0 0)
    (1 0)
These elements can easily be shown to obey the relations: [h, e] = h*e - e*h = 2e, [h, f] = h*f - f*h = -2f, and [e, f] = e*f - f*e = h (understanding that matrix subtraction simply follows the rule, e.g.: [A] - [B] =
{(a11 - b11) (a12 - b12)}
{(a21 - b21) (a22 - b22)}
using the designated elements assigned earlier for the generic matrices A, B). For the already identified matrices A and B, [A] - [B] =
(0 -1)
(-1 0)
Then there is the famous Klein Viergruppe with members: e (identity), a, b and c. The 2 x 2 matrix members are shown in Fig. 2 along with 4 different operations. The ambitious reader can gain further insights via the following exercises!
Practice Problems:
1. For the group PSL(2, Z) show that the identity element (e) = s^4 = t^3.
2. For the Lie algebra sl(2) show that: (a) [h, f] = h*f - f*h = -2f (b) The "Casimir element", C, of sl(2) is defined according to: C = h^2/2 + h + 2f*e. Find this element explicitly as a 2 x 2 matrix.
3. Show that the Klein Viergruppe, V4, is Abelian.
Solutions to be given in a future blog!
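In the meantime, a quick numerical check of the statements quoted above (this snippet is mine, not part of the original post) can be run with NumPy:

import numpy as np

# Check the worked multiplication example A x B from above
A = np.array([[1, 2], [1, 2]])
B = np.array([[1, 3], [2, 2]])
print(A @ B)            # [[5 7] [5 7]], as claimed

# Check the quoted sl(2) relations [h,e] = 2e, [h,f] = -2f, [e,f] = h
h = np.array([[1, 0], [0, -1]])
e = np.array([[0, 1], [0, 0]])
f = np.array([[0, 0], [1, 0]])
comm = lambda x, y: x @ y - y @ x
print(np.array_equal(comm(h, e), 2 * e),
      np.array_equal(comm(h, f), -2 * f),
      np.array_equal(comm(e, f), h))   # True True True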
{"url":"http://brane-space.blogspot.com/2010/05/looking-at-matrix-groups.html","timestamp":"2024-11-05T15:31:08Z","content_type":"text/html","content_length":"112791","record_id":"<urn:uuid:c1fe2f53-2116-4029-9779-0c0d8e571578>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00883.warc.gz"}
Distance Between Two Points on Earth Calculator
An online calculator to calculate the distance between two points on earth, given their latitudes and longitudes, is presented. The distance in question is the shortest distance, along an arc of a great circle, on the surface of the earth. You may first need to know how to find the latitude and longitude of a position on earth using a laptop or desktop.
Mathematical Formulas
Let \( \theta_1 \) and \( \phi_1 \) be the latitude and longitude of an initial point (origin) on earth and \( \theta_2 \) and \( \phi_2 \) be the latitude and longitude of a final point (destination) on earth.
Let \( \Delta \theta = \theta_2 - \theta_1 \) and \( \Delta \phi = \phi_2 - \phi_1 \)
In radians, the central angle \( c \) between the two points on the surface of the earth is given by \[ c = 2 \, \operatorname{atan2}\left(\sqrt{a}, \sqrt{1-a}\right) \] where \( a = \sin^2(\Delta \theta/2) + \cos(\theta_1) \cos(\theta_2) \sin^2(\Delta \phi/2) \)
The distance \( D \) between the two points is given by the haversine formula \[ D = R c \] where \( R \) is the radius of the earth and is approximated by \( R \approx 6378 \) km.
Use of the Calculator
Enter the latitude and longitude in decimal format (with the plus or minus sign) as shown below and press "calculate distance". Note that if
1) the latitude is given (or known) as North or South, the sign must be taken into account: for example, 23.7 North must be entered as 23.7, whereas 23.7 South must be entered as -23.7.
2) the longitude is given (or known) as East or West, the sign must be taken into account: for example, 56.7 East must be entered as 56.7, whereas 56.7 West must be entered as -56.7.
More References and Links
Latitude and Longitude Coordinate System
Find the GPS Latitude and Longitude Using Google Map.
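The formulas above translate directly into a short program. Here is a sketch in Python (an illustration of mine, not the site's actual calculator code); inputs are decimal degrees, signed as described above, and the 6378 km radius follows this page:

import math

EARTH_RADIUS_KM = 6378.0  # value used on this page; 6371 km is also common

def haversine_km(lat1, lon1, lat2, lon2):
    th1, th2 = math.radians(lat1), math.radians(lat2)
    dth = math.radians(lat2 - lat1)
    dph = math.radians(lon2 - lon1)
    a = math.sin(dth / 2) ** 2 + math.cos(th1) * math.cos(th2) * math.sin(dph / 2) ** 2
    c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
    return EARTH_RADIUS_KM * c

# Example: Paris (48.8566, 2.3522) to New York (40.7128, -74.0060)
print(round(haversine_km(48.8566, 2.3522, 40.7128, -74.0060)))  # roughly 5840 km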
{"url":"https://www.analyzemath.com/Geometry_calculators/distance-between-two-points-on-earth.html","timestamp":"2024-11-01T23:49:23Z","content_type":"text/html","content_length":"112103","record_id":"<urn:uuid:9d95d2f2-4f23-4e8b-a886-cf2d010bbfa7>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00481.warc.gz"}
percentage: Explore its Definition & Usage | RedKiwi Words percentage Definition • 1a rate or proportion per hundred • 2a share of the profits of an enterprise, usually calculated as a percentage of the revenue Using percentage: Examples Take a moment to familiarize yourself with how "percentage" can be used in various situations through the following examples! • The percentage of students who passed the exam is 85%. • The company pays its employees a percentage of the profits. • The percentage of women in the workforce has increased over the years. percentage Synonyms and Antonyms Phrases with percentage • the difference between two percentages, expressed in points rather than as a percentage of one of them The unemployment rate decreased by 2 percentage points. • the amount by which a quantity increases or decreases expressed as a percentage of the original amount There was a 10% increase in sales last quarter. • the difference between an estimated value and the actual value expressed as a percentage of the actual value The percentage error in the measurement was less than 5%. Origins of percentage from Latin 'per centum', meaning 'by the hundred' Summary: percentage in Brief The term 'percentage' [pəˈsentɪdʒ] refers to a rate or proportion per hundred, often used to describe the share of profits or the success rate of an event. It can be expressed as a percentage point, percentage increase/decrease, or percentage error. 'Percentage' is a formal term that is commonly abbreviated as 'percent.'
{"url":"https://redkiwiapp.com/en/english-guide/words/percentage","timestamp":"2024-11-06T13:57:43Z","content_type":"text/html","content_length":"91872","record_id":"<urn:uuid:f397347d-828b-4393-9050-50383df47fc7>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00678.warc.gz"}
Where to put the data in the first place ? | Details | Hackaday.io One BIG conceptual challenge with sorting algorithms is to put each new datum as close to the right place as possible on the first try, so the actual running time gets closer to O(n). The n×log(n) part comes from recursively handling more and larger buffers so the smaller and the fewer, the better and faster. If one datum is placed at the other side of the range, then it will usually perform log(n) operations to reach the final position, by leaps and bounds of more refined lengths. That's why hash-tables and radix/bucket sorts are so efficient (on top of having no comparison with the neighbours) : Data are placed as close as possible to their definitive location, so there are fewer comparisons and moves to perform. But in a hybrid sort algo, this is less obvious. I was looking at the "shared stack" system and beyond its allure, I found that there is a deeper structure and organisation that needed to be addressed, as it seems even more promising. Do you remember 2s complement signed numbers, and how they map to unsigned integers ? Also, look at ring buffers and how they can wrap around integer indices, using modulo addressing. Look also at how we can take one problem from both ends, as we did just before with the basic merge sort turned into a 2-way merge sort. Shake it well and serve with an ice cube. So this gives a "working area" that wraps around, with modulo addressing. Start by putting the first data run in the middle of the working area but since this is modulo, there is no middle anyway. So we start with a point, an origin, and we can work with data on both ends of the dataset mapped to the working area. And this brilliantly solves a BIG problem of fragmentation I had with the merge sort, which is not usually "in place". If I want to merge sort 2 runs that sit on the "free end" of the data set. Usually, the "other end" sits at address 0 and is stuck there. One needs to allocate room "after"/"beyond" the "free end" of the data set, perform the merge there and move the merged-sorted block back to the origin. Forget that now ! Since the "modulo addressing" doesn't care where the start is, there are two ends, and we'll use the FIFO/ring buffer terminology : tail and head. These numbers/pointers respectively point to the lower and higher limits of the dataset in the working area. Memory allocation becomes trivial and there is no need to move an allocated block back in place : just allocate it at the "other end" ! Let's say we have 2 runs at the head of the data set: we "allocate" enough room at the tail and merge from the head to the tail. And head and tail could be swapped. The dataset always remains contiguous and there is no fragmentation to care about ! When the whole dataset must be flushed to disk for example, it requires 2 successive operations to rebuild the proper data ordering, but this is not a critical issue. Which end to merge first ? This will depend on the properties of the 2 runs at both ends. Simply merge the smallest ones, or the pair of runs that have the smallest aggregated size. This changes something though : the "stack" must be moved away from the main working area, and since it will be addressed from both ends (base and top), it too must be a circular buffer. What is the worst case memory overhead ? It's when the sort is almost complete/full, and a tiny run is added and merged, thus requiring 2x the size of the largest run due to the temporary area. 
In a pathological case, if memory is really short, a smarter insertion sort would work, but it would still be slow... This can work because the main data set is a collection of monotonic runs. They are longer "in the middle" but are not in a particular order. Thus there are in fact 4 combinations possible : • 2 heads to the tail • 2 tails to the head • 1 head added to the tail • 1 tail added to the head This is much more flexible, thus efficient than the "single-ended" traditional algorithms, because we can look at 4 runs at a time, with simpler heuristics than Timsort, thus fewer chances to mess the rules. Here is how the previous iteration of the algo worked/looked: New runs are added to the head when an ascending run was detected, or to the stack when a descending run is found. When the run ends, if it's a descending one, the stack is copied/moved to the head to reverse its direction. This last operation becomes pointless if we organise the runs differently. Let's simply remove the stack. Instead we add the runs in the middle of "nowhere" so there is no base, instead a void the left and to the right of the collection of concatenated runs. We put the first run there. The first/lowest occupied index is the tail, the higher/last is the head. The "horizontal" parts (in green) are first written after the head. But if the slope is found to be downwards, this part is moved to the tail after the remaining data are reverse-accumulated from there (because we can't know in advance the length of the run). Usually, there are rarely more than a few identical data so the move is negligible. It could be avoided but the game is to get the longest run possible so this is fair game, as this block would be moved later anyway. We could start to merge here but we can select a more favourable set of candidates if we have more data, like at least 4 runs, so let's continue accumulating. The next run goes up so it's accumulated on the head side. OK the stream ends here and we only have 3 runs, but we can already select the first pair to be merged: run1 and run3 are the smallest so let's merge them. Since they are on the head side, let's "allocate" room on the tail side: And now we can merge the final run to build the result, and since there are only 2 runs to merge, the result can go either on the tail or the head: Here we see that the merge sort has an inherent space constraint, so the final merge requires at least 2× the total space overall. This example used a 2-way merge operation, but ideally more ways (4?) would be better. The great aspect of using wrap-around addressing is that it can handle any combination and worst-case situation, like when there are only up-runs, down-runs and even alternating runs. The origin does not matter so the whole contiguous block could move around at will with no consequence. OK there might be a small limitation to the scheme I presented above. Ideally we would want to merge more than 2 runs at a time to save time, using 2 ends of each run to parallelise the process, but this is possible only when all runs are on one side, putting the result to the other side. Otherwise, that is : if 2 blocks are at opposite sides, then one of the blocks will already take the place of the merged result, for an in-place merge. This means that the merge sort can't use 2 sides of the runs, or it would overwrite the overlapping original run. It's a bit less efficient but not critical. 
This means I have to program several variations of the merge algorithms : one for sorting from the lower end first, another to sort from the higher end first, another to sort from both ends at the same time, and an extension to 3 or 4 runs at once. Selecting which one to use will require some heuristics, for sure... For example : if I merge 2 runs on opposite ends, which end should I favour ?
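Whatever heuristic is chosen, the mechanics of a wrap-around merge stay the same. Here is a minimal Python sketch (not the project's actual code; the names, the fixed capacity and the example data are mine) of merging two sorted runs that sit at the head of the circular working area into space "allocated" at the tail, with every index reduced modulo the capacity:

CAP = 16
buf = [None] * CAP

def write_run(start, values):
    # store a run into the circular buffer starting at index `start`
    for i, v in enumerate(values):
        buf[(start + i) % CAP] = v

def merge_to_tail(run1, run2, tail):
    # merge runs given as (start, length) into the slots just below `tail`; return the new tail
    total = run1[1] + run2[1]
    new_tail = (tail - total) % CAP          # "allocate" room at the tail side
    s1, n1 = run1
    s2, n2 = run2
    i = j = k = 0
    while i < n1 or j < n2:
        take1 = j >= n2 or (i < n1 and buf[(s1 + i) % CAP] <= buf[(s2 + j) % CAP])
        if take1:
            buf[(new_tail + k) % CAP] = buf[(s1 + i) % CAP]; i += 1
        else:
            buf[(new_tail + k) % CAP] = buf[(s2 + j) % CAP]; j += 1
        k += 1
    return new_tail

# Two ascending runs near the "head", wrapping past the end of the buffer:
write_run(13, [2, 5, 9])   # run 1 occupies slots 13, 14, 15
write_run(0, [1, 6])       # run 2 occupies slots 0, 1
tail = 13                  # everything below slot 13 is free
tail = merge_to_tail((13, 3), (0, 2), tail)
print([buf[(tail + i) % CAP] for i in range(5)])   # [1, 2, 5, 6, 9]

Because every index wraps, the merged result can land anywhere in the free region; the only real constraint is the one noted above, that the temporary area must not overlap the runs still being read.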
{"url":"https://hackaday.io/project/186859-yams-yet-another-merge-sort/log/210086-where-to-put-the-data-in-the-first-place","timestamp":"2024-11-09T14:26:57Z","content_type":"text/html","content_length":"36658","record_id":"<urn:uuid:43834a48-9ec5-4c30-9fd0-dad4e7bb79c4>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00587.warc.gz"}
Non-locality and "quantum non-contextuality"
Petros Wallden
14:00 13th February 2015 (week 4, Hilary Term 2015)
Quantum theory permits non-local correlations. The violation of Bell's inequalities is equivalent to proving the non-existence of a joint probability distribution, defined on the space of counterfactual outcomes, that returns the experimental probabilities as marginals. On the other hand, the existence of such a joint probability distribution would signify (classical) non-contextuality. Inspired by path integral formulations of quantum theory, we define a quantum analogue of the joint probability distribution which we term "quantum non-contextuality". We require the existence of a joint strongly-positive quantum measure, which is the most natural generalisation of a probability measure from a "histories" perspective. This condition, while including all the correlations allowed by quantum theory, restricts the non-local correlations, giving (up to some nuances) the "almost quantum correlations" of Navascues et al (2014). This set of correlations appears as a good candidate for generalisations of quantum theory.
- Dowker, Henson, Wallden, New J. Phys. 16 (2014) 033033
{"url":"http://www.cs.ox.ac.uk/seminars/1322.html","timestamp":"2024-11-05T06:26:01Z","content_type":"text/html","content_length":"31838","record_id":"<urn:uuid:6006e34d-09df-4bb6-8190-23429a43b26c>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00250.warc.gz"}
Kruskal's Algorithm | CS61B Guide
Conceptual Overview
Kruskal's algorithm is another optimal way to construct a minimum spanning tree. Its benefits are that it is conceptually very simple and easy to implement. The idea is that first we sort all the edges of the graph in order of increasing weight. Then, add the smallest edge to the MST we are constructing unless this creates a cycle in the MST. Repeat until V - 1 edges total.
Detailed Breakdown
In order to optimally check if adding an edge to our MST creates a cycle, we will use a WeightedQuickUnion object. (See Union Find (Disjoint Sets) for a recap on what this is.) This is used because checking if a cycle exists using a WeightedQuickUnion object boils down to one isConnected() call, which we know takes $\Theta(\log(N))$.
To run the algorithm, we start by adding all the edges into a PriorityQueue. This gives us our edges in sorted order. Now, we iterate through the PriorityQueue by removing the edge with highest priority (smallest weight), checking if adding it forms a cycle, and adding it to our MST if it doesn't form a cycle.
Let's see an example of Kruskal's Algorithm in action! Here, we start with a simple graph and have sorted all of its edges into a priority queue. Since the edge DE is the shortest, we'll add that to our UnionFind first. In the process, we'll remove DE from the priority queue. We'll do the same thing with the next shortest edge, DC. Now, let's move on to AB. Notice that this time, connecting A and B creates another disjoint set! Unlike Prim's Algorithm, Kruskal's Algorithm does not guarantee that a solution will form a tree structure until the very end. Now, let's connect BC. Since CE and BD would both form cycles if connected, we are done 😄 Here's the final tree:
public class Kruskals {
    private PriorityQueue<Edge> edges = new PriorityQueue<>();
    private ArrayList<Edge> mst = new ArrayList<>();

    public void doKruskals(Graph G) {
        // load every edge into the priority queue so we can pull them out in weight order
        for (Edge e : G.edges()) {
            edges.add(e);
        }
        WeightedQU uf = new WeightedQU(G.V());
        while (!edges.isEmpty() && mst.size() < G.V() - 1) {
            Edge e = edges.removeSmallest();   // lightest remaining edge
            int v = e.from();
            int w = e.to();
            if (!uf.isConnected(v, w)) {       // adding e would not create a cycle
                uf.union(v, w);
                mst.add(e);
            }
        }
    }
}
Runtime Analysis
Left as an exercise to the reader 😉 Someone's been reading too much LADR... (The answer is $\Theta(E\log(E))$ by the way. Try to convince yourself why!)
{"url":"https://cs61b.bencuan.me/algorithms/minimum-spanning-trees/kruskals-algorithm","timestamp":"2024-11-13T11:28:03Z","content_type":"text/html","content_length":"364228","record_id":"<urn:uuid:ba9aff9d-5d0a-48e0-8da8-7c029e4fa10e>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00055.warc.gz"}
Optimization and generation of cellular structures for additive manufacturing with mechanical and thermomechanical applications Vu BN (2024) Publication Type: Thesis Publication year: 2024 DOI: 10.25593/open-fau-916 Weight reduction plays an important role in the aerospace industry because total aircraft weight has a significant impact on kerosene consumption. Ideally, the amount of material can be reduced while the properties of an aircraft component, such as stiffness or temperature distribution, remain constant. Components are often only produced in small quantities, making additive manufacturing (3D printing) technologies ideal for such applications. In particular, these types of processes enable the fabrication of three-dimensional components with different levels of detail, which are barely achievable using conventional material processing methods such as injection molding or milling. Additional tools, such as a negative mold for injection molding or a multitude of milling heads, are not required. By-products and waste, which tend to accumulate quickly in machining processes such as milling or drilling, are also kept to a minimum. In conventional processes, a component often has to be split and manufactured as sub-components, which are then reassembled in post-production. The resulting joint interfaces become the mechanical weak spots of the final result. In additive manufacturing, a full component is built by depositing materials layer by layer, without the necessity for subdivisions. However, the prerequisite for this deposition is that each new layer is supported by the layer below it. For this reason, additional support structures are required to prevent the upper layers from collapsing. Powder bed systems with high-precision lasers enable the fabrication of structures with delicate and complex details. The unmelted powder serves as a support structure and can mostly be reused. In this work, we focus on gradient-based optimization of the topology or layout of a component. In particular, we investigate the potential of utilizing physics-based optimization combined with additive manufacturing's capability to produce cellular structures, that is, porous structures with specified patterns, with filigree details. To this end, we incorporate manufacturing restrictions into the formulation of the optimization problem. We refer to the numerical solution of such a problem as a design. A further goal is the interpretation of a topology optimized result to obtain a geometric description, which is stored in data format and can be directly imported by a slicer. Slicer is the abbreviated term for a slicing software package that adds necessary support structures and creates the layers for the manufacturing process. In this case, it is sufficient to consider only the component's enclosing surface, which we describe and approximate using point coordinates and From a mechanical point of view, our assumption is that a material can be regarded as a periodic structure on the microscopic level (also referred to as a micro-structure). It is therefore completely described by the geometry and material composition of a periodic unit cell, which is the smallest representative unit. This has the advantage of allowing us to formulate parametrizations and local manufacturing constraints at the unit cell level. We determine the material properties as a function of the unit cell variables by implementing an established homogenization approach. 
It is evident from the literature that the use of laminates, that is, materials constructed of layers with different length scales, is ideal in terms of mechanical stiffness. However, such laminates cannot be realized using the current manufacturing technologies; even additive process are not viable. The restriction to periodic cellular structures is therefore also a compromise between technical feasibility and structural complexity. Hence, from a technical point of view, we evaluate a design using a numerical solution of the underlying physical models and partial differential equations. We use the finite element method as a discretization approach because of the substantial number of previous works in the research field of topology optimization. In our case, an optimization algorithm selects the best feasible choice of design variable values,that is, the variables of the unit cell, in each discretized location of the component. However, the resulting spatial material variation violates the assumption of periodicity and thus introduces a model error. Connections between the unit cells are not explicitly modeled in our optimization problems. Therefore, subsequent interpretation and post-processing steps are necessary to avoid interfaces and transition areas with artificially induced weak material behavior in the final component description. The post-processing algorithms rely on heuristics and may cause arbitrary deviations from the homogenized optimization result. It is therefore important to evaluate the final design description through numerical simulation to assess the quality of the design. Overall, the specific contribution of our work is a holistic consideration of each step in the homogenization-based optimization workflow. This includes selecting and parametrizing unit cells, formulating an algorithm for solving such an optimization problem, interpreting the optimized variables to obtain a smooth geometric description, and evaluating the final design. We propose our own ideas for each of these steps and combine them with existing approaches from the literature. An essential aspect of this work is performing a thorough search for a representative load case that clearly demonstrates the mechanical stiffness benefits of cellular structures. For this purpose, we conducted a variety of numerical optimization experiments to fill components with spatially varying cellular structures. In terms of mechanical stiffness, we determine that the use of cellular structures does not provide any advantage because the manufacturing options are too restrictive. This supposition persists, even if we also allow topological changes in the design and thereby violate manufacturability. Finally, to demonstrate that the prospects of additive manufacturing can be exploited in a different context, we conclude this thesis with a thermomechanical example of an injection mold component. Cellular structures show promise in thermal applications due to their large surface area supporting the exchange of thermal energy with the surroundings Authors with CRIS profile How to cite Vu, B.N. (2024). Optimization and generation of cellular structures for additive manufacturing with mechanical and thermomechanical applications. Vu, Bich Ngoc. Optimization and generation of cellular structures for additive manufacturing with mechanical and thermomechanical applications.2024. BibTeX: Download
{"url":"https://cris.fau.de/publications/326938798/","timestamp":"2024-11-04T02:06:11Z","content_type":"text/html","content_length":"13322","record_id":"<urn:uuid:43d4465d-dbbc-4d59-a971-6c2f6b5a67a0>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00679.warc.gz"}
Times Tables Workbook Printable
Times Tables Workbook Printable - Web multiplying with a single times table; You can also use the worksheet generator. These multiplication times table worksheets are. Web multiplication facts worksheets including times tables, five minute frenzies and worksheets for assessment or practice. Web free printable multiplication charts (times tables) available in pdf format. To start creating your sheet,. Web a complete set of free printable multiplication times tables for 1 to 12. Web the printable multiplication tables you will find here provide the most basic overview of sets of multiplication facts, either as. Web here you can find the worksheets for the 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 and 12 times tables. Practicing multiplication with selected times tables; Web printable multiplication worksheets and multiplication timed tests for every grade level, including multiplication facts. Use these colorful multiplication tables to help your. Web here you will find a selection of printable times tables sheets designed to help your child to learn and practice their 3 times tables.
Worksheet images featured on the page: Free Printable Times Tables Worksheets; Printable Multiplication Table Worksheet Printable Word Searches; Multiplication Table Worksheet Printable Customize and Print; Multiplication Table Printables & Worksheets; Printable Times Table Worksheets 1 10; Printable Time Tables Worksheets Customize and Print; Times Tables Free Printable; Printable Multiplication Table Worksheets; Math Time Tables Worksheets Activity Shelter; Printable Times Table Worksheets Customize and Print.
{"url":"https://feeds-cms.iucnredlist.org/printable/times-tables-workbook-printable.html","timestamp":"2024-11-04T18:11:24Z","content_type":"text/html","content_length":"26379","record_id":"<urn:uuid:67c8c34e-8f9f-4806-83ce-e9c87b0e1385>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00506.warc.gz"}
Regular cost functions and stabilization monoids
14/03/2016
14 - 15 - 16 - 17 March, room 6.2.33 - DM - FCUL
Thomas Colcombet (LIAFA, Université Paris Diderot)
Departamento de Matemática - FCUL
Regular cost functions form a quantitative extension to the classical notion of regular languages. However, contrary to other models, the theory focuses only on boundedness questions. Hence, cost functions are maps from words to ℕ∪{∞}, which are considered modulo 'preservation of bounds'. The resulting theory closely resembles the one for regular languages. In particular, many models of acceptors are effectively equivalent: logic (cost monadic second-order logic), automata (B-automata/S-automata), expressions (B-rational expressions/S-rational expressions), and algebra (stabilisation monoids). In this course, we will introduce some of these objects, present the central results and the remaining open problems. We will pay particular attention to the algebraic formalism of stabilisation monoids. Stabilisation monoids are ordered monoids enriched with a map from idempotents to idempotents which essentially represents the effect of iterating the element a large (unbounded) number of times. We will see how to associate semantics with these objects. We will also show how these can be used in 'à la Schützenberger' algebraic characterization results, yielding important decidable subclasses.
14 | March | 15:30 – 17:30
15 | March | 9:30 – 11:30
16 | March | 16:00 – 18:00
17 | March | 10:30 – 12:30
{"url":"https://cemat.tecnico.ulisboa.pt/seminar.php?event_type_id=5&locale=pt&sem_id=2312","timestamp":"2024-11-12T12:02:11Z","content_type":"text/html","content_length":"10628","record_id":"<urn:uuid:e788addc-6cbe-4c0c-a0bc-32b00cbb7555>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00335.warc.gz"}
Automating coil length measurement in context of coil length
07 Sep 2024
Automating Coil Length Measurement: A Novel Approach
Coil length measurement is a critical process in various industries, including steel, wire, and cable manufacturing. Traditional methods of measuring coil length often involve manual measurements, which can be time-consuming, labor-intensive, and prone to errors. This article presents a novel approach to automating coil length measurement using computer vision and machine learning techniques.
Coil length measurement is essential in ensuring the quality and accuracy of products manufactured on coilers. The traditional method of measuring coil length involves manually counting the number of turns or using mechanical counters, which can be prone to errors due to human factors such as fatigue, distraction, or lack of attention to detail. Moreover, manual measurements can be time-consuming, especially for large coils.
Our approach to automating coil length measurement involves the use of computer vision and machine learning techniques. The system consists of a camera mounted above the coil, which captures images of the coil’s surface. These images are then processed using image processing algorithms to detect the coil’s edges and calculate its length.
The formula for calculating coil length (L) from the number of turns (N) is:
L = N * 2 * π * r
where r is the mean radius of the coil; each complete turn contributes one circumference, 2πr, to the total length.
To automate this process, we use a machine learning algorithm that can detect the coil’s edges and count the number of turns. The algorithm uses a combination of edge detection techniques, such as Sobel and Canny operators, to identify the coil’s edges. The number of turns is then calculated by counting the number of pixels between the two edges.
Our results show that the automated system can accurately measure coil length with high precision. The system can detect even small changes in coil length, making it ideal for quality control applications.
Automating coil length measurement using computer vision and machine learning techniques offers a significant improvement over traditional methods of measurement. Our approach provides accurate and precise measurements, reducing the risk of human error and increasing productivity. This technology has the potential to revolutionize the way coil length is measured in various industries, ensuring higher quality products and improved efficiency.
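As a quick illustration of the relationship above (a sketch of mine, not the article's production code), the length follows directly from the turn count and the mean radius:

import math

def coil_length_m(turns, mean_radius_m):
    # each complete turn contributes one circumference, 2*pi*r
    return turns * 2 * math.pi * mean_radius_m

print(round(coil_length_m(250, 0.4)))   # ~628 m of material for 250 turns at r = 0.4 m

In a real coil the radius grows as layers build up, so in practice r is taken as the mean radius of the winding (or the length is summed layer by layer).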
{"url":"https://blog.truegeometry.com/tutorials/education/1d2dd5d7120fa868c9f6239a504b5afe/JSON_TO_ARTCL_Automating_coil_length_measurement_in_context_of_coil_length.html","timestamp":"2024-11-06T07:36:47Z","content_type":"text/html","content_length":"16109","record_id":"<urn:uuid:44e36c4f-0617-468a-9e52-1eca6532db7f>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00697.warc.gz"}
Solution to Problem K – Kruskal
IPSC 2006
Solution to Problem K – Kruskal
We assume that the reader has basic knowledge of Combinatorial Game Theory (CGT). If not, a great introduction is an electronic text, Ferguson: Game Theory (it can be downloaded from Thomas Ferguson's web page).
How to decide which are the winning positions and which not?
First of all, we should simplify this game. We can subtract from each heap the nearest smaller prime. We can do this because once a prime is in reach, the player to move will reach it. (The best way to find the nearest prime is a plain linear search with O(sqrt N) primality testing; the primes are reasonably dense.)
Now we have another game. If you remove all matches from some heap, you win. If the size of some of the heaps is at most K then the player to move wins immediately after his first move. If not, then making a heap this small means that you lose after the opponent's next move. In other words, the smallest "legal" heap size is K+1; nobody wants to make a smaller heap. Now subtract K+1 from the size of each heap. Thereby we get a classical NIM game. The player with no move loses. (In the previous version of the game, he would be in a situation where all heaps have the size K+1, and thus he has to decrease one of them and lose in the next move.) The NIM game has a known and simple strategy: a position is winning if and only if the bitwise xor of the values (heap size mod (K+1)) is non-zero.
Some more words about nasty special cases. The problem statement clearly stated that "A player wins if after his move the size of some heap is a prime number." What happens if there is a prime-sized heap in the beginning? If there is at least one other heap, the monkey wins by removing matches from the other heap – the size of the good heap remains prime. The remaining one-heap case is simple.
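A small Python sketch of the decision rule described above (mine, not the contest's reference solution), covering the main case where no heap is prime at the start; a move removes between 1 and K matches from a single heap:

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def nearest_smaller_prime(n):
    p = n - 1
    while p > 1 and not is_prime(p):
        p -= 1
    return p if p > 1 else 0

def first_player_wins(heaps, K):
    xor = 0
    for h in heaps:
        reduced = h - nearest_smaller_prime(h)   # distance to the prime "goal"
        if reduced <= K:
            return True                          # a prime is in immediate reach
        xor ^= reduced % (K + 1)                 # Grundy value of this heap
    return xor != 0

print(first_player_wins([10, 12], K=3))   # True: heap 10 can be dropped to 7 right away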
{"url":"https://ipsc.ksp.sk/2006/real/solutions/k.html","timestamp":"2024-11-04T05:27:22Z","content_type":"text/html","content_length":"5239","record_id":"<urn:uuid:d0895fb6-8a36-4c02-b69e-20112c8961e0>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00615.warc.gz"}
Basic Math/Algebra So yes, we do expect you to be able to do basic math and algebra. We will not make you do calculus or differential equations - just some plain ol' algebra. This means you might memorize an equation with some variables in it and we tell you a few of them - then you calculate for the missing one. Here is an example and the solution... gas law type question A given sample of an ideal gas at 300 K occupies 20.2 L at a pressure of 1.50 atm. How many moles of gas are in this sample? Solution: Use the ideal gas law! \(\longrightarrow PV = nRT\) Now put in all the known values (\(P,V,T,R\)) and solve for the unknown value (\(n\)). \[(1.50\;{\rm atm})(20.2\;{\rm L}) = n(0.08206\;{\rm L\;atm/mol \,K})(300\;{\rm K})\] \[{(1.50)(20.2)\over (0.08206)(300)} = n\] \[ 1.23\;{\rm mol} = n\] Not too bad, right? 1.23 moles of ideal gas is the answer and we just substituted into the equation and then did a little algebra to solve. You might get longer questions to solve - meaning more numbers and math... but, it will not be harder algebra. Just don't get lost along the way to the answer. In the above example I dropped all units after the first line. The units DO work out to end up with only moles for the units of \(n\). Sometimes I drop units in examples in order to show the pure math and not let units clutter my page. You might want to drag the units along for the ride though - in case you get lost. Units will often help lead you to the right way to do the math.
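If you like to check your algebra numerically, the same rearrangement can be done in a couple of lines (a sketch of mine, not part of the course page), with P in atm, V in L, T in K and R = 0.08206 L atm/(mol K):

R = 0.08206   # L atm / (mol K)

def moles_of_ideal_gas(P_atm, V_L, T_K):
    # n = PV / (RT)
    return (P_atm * V_L) / (R * T_K)

print(round(moles_of_ideal_gas(1.50, 20.2, 300), 2))   # 1.23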
{"url":"https://chembook.org/page.php?chnum=0&sect=5","timestamp":"2024-11-04T21:14:36Z","content_type":"text/html","content_length":"8239","record_id":"<urn:uuid:4fc29ce9-07ff-4248-ba2f-297ad16a8ba6>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00836.warc.gz"}
Matrix square roots – The Dan MacKinlay stable of variably-well-consider’d enterprises
Matrix square roots
Whitening, preconditioning etc
August 5, 2014 — May 13, 2023
feature construction functional analysis high d linear algebra signal processing sparser than thou
Assumed audience: People with undergrad linear algebra
Interpretations and tricks for matrix square roots. There are two types of things which are referred to as matrix square roots of a Hermitian matrix \(\mathrm{M}\) in circulation: 1. \(\mathrm{X}\) such that \(\mathrm{X}\mathrm{X}=\mathrm{M}\), and 2. \(\mathrm{X}\) such that \(\mathrm{X}\mathrm{X}^{\top}=\mathrm{M}\), when \(\mathrm{M}\) is positive semi-definite. Sometimes either will do; other times we care about which. Sometimes we want a particular structure, e.g. lower triangular as in the Cholesky decomposition.
1 Almost low-rank roots
Perturbations of nearly-low rank matrices are not themselves nearly low rank in general, but there exist efficient algorithms for finding them at least. Fasi, Higham, and Liu (2023):
We consider the problem of computing the square root of a perturbation of the scaled identity matrix, \(\mathrm{A}=\alpha \mathrm{I}_n+\mathrm{U} \mathrm{V}^{H}\), where \(\mathrm{U}\) and \(\mathrm{V}\) are \(n \times k\) matrices with \(k \leq n\). This problem arises in various applications, including computer vision and optimization methods for machine learning. We derive a new formula for the \(p\)th root of \(\mathrm{A}\) that involves a weighted sum of powers of the \(p\)th root of the \(k \times k\) matrix \(\alpha \mathrm{I}_k+\mathrm{V}^{H} \mathrm{U}\). This formula is particularly attractive for the square root, since the sum has just one term when \(p=2\). We also derive a new class of Newton iterations for computing the square root that exploit the low-rank structure.
Their method works for low-rank-plus-diagonal matrices without negative eigenvalues.
Theorem: Let \(\mathrm{U}, \mathrm{V} \in \mathbb{C}^{n \times k}\) with \(k \leq n\) and assume that \(\mathrm{V}^{H} \mathrm{U}\) is nonsingular. Let \(f\) be defined on the spectrum of \(\mathrm{A}=\alpha \mathrm{I}_n+\mathrm{U} \mathrm{V}^{H}\), and if \(k=n\) let \(f\) be defined at \(\alpha\). Then \[ f(\mathrm{A})=f(\alpha) \mathrm{I}_n+\mathrm{U}\left(\mathrm{V}^{H} \mathrm{U}\right)^{-1}\left(f\left(\alpha \mathrm{I}_k+\mathrm{V}^{H} \mathrm{U}\right)-f(\alpha) \mathrm{I}_k\right) \mathrm{V}^{H}.\]
The theorem says two things: that \(f(\mathrm{A})\), like \(\mathrm{A}\), is a perturbation of rank at most \(k\) of the identity matrix and that \(f(\mathrm{A})\) can be computed by evaluating \(f\) and the inverse at two \(k \times k\) matrices.
I would call this a generalized Woodbury formula, and I think it is pretty cool; which tells you something about my current obsession profile. Anyway, they use it to discover the following:
Let \(\mathrm{U}, \mathrm{V} \in \mathbb{C}^{n \times k}\) with \(k \leq n\) have full rank and let the matrix \(\mathrm{A}=\alpha \mathrm{I}_n+\mathrm{U} \mathrm{V}^{H}\) have no eigenvalues on \(\mathbb{R}^{-}\). Then for any integer \(p \geq 1\), \[ \mathrm{A}^{1 / p}=\alpha^{1 / p} \mathrm{I}_n+\mathrm{U}\left(\sum_{i=0}^{p-1} \alpha^{i / p} \cdot\left(\alpha \mathrm{I}_k+\mathrm{V}^{H} \mathrm{U}\right)^{(p-i-1) / p}\right)^{-1} \mathrm{V}^{H} \] and in particular
Let \(\mathrm{U}, \mathrm{V} \in \mathbb{C}^{n \times k}\) with \(k \leq n\) have full rank and let the matrix \(\mathrm{A}=\alpha \mathrm{I}_n+\mathrm{U} \mathrm{V}^{H}\) have no eigenvalues on \(\mathbb{R}^{-}\). Then \[ \mathrm{A}^{1 / 2}=\alpha^{1 / 2} \mathrm{I}_n+\mathrm{U}\left(\left(\alpha \mathrm{I}_k+\mathrm{V}^{H} \mathrm{U}\right)^{1 / 2}+\alpha^{1 / 2} \mathrm{I}_k\right)^{-1} \mathrm{V}^{H}. \]
They also derive an explicit gradient step for calculating it, namely “Denman-Beaver iteration”: The (scaled) DB iteration is \[ \begin{aligned} \mathrm{X}_{i+1} & =\frac{1}{2}\left(\mu_i \mathrm{X}_i+\mu_i^{-1} \mathrm{Y}_i^{-1}\right), & \mathrm{X}_0=\mathrm{A}, \\ \mathrm{Y}_{i+1} & =\frac{1}{2}\left(\mu_i \mathrm{Y}_i+\mu_i^{-1} \mathrm{X}_i^{-1}\right), & \mathrm{Y}_0=\mathrm{I}, \end{aligned} \] where the positive scaling parameter \(\mu_i \in \mathbb{R}\) can be used to accelerate the convergence of the method in its initial steps. The choice \(\mu_i=1\) yields the unscaled DB method, for which \(\mathrm{X}_i\) and \(\mathrm{Y}_i\) converge quadratically to \(\mathrm{A}^{1 / 2}\) and \(\mathrm{A}^{-1 / 2}\), respectively.
… for \(i \geq 0\) the iterates \(\mathrm{X}_i\) and \(\mathrm{Y}_i\) can be written in the form \[ \begin{aligned} & \mathrm{X}_i=\beta_i \mathrm{I}_n+\mathrm{U} \mathrm{B}_i \mathrm{V}^{H}, \quad \beta_i \in \mathbb{C}, \quad \mathrm{B}_i \in \mathbb{C}^{k \times k}, \\ & \mathrm{Y}_i=\gamma_i \mathrm{I}_n+\mathrm{U} \mathrm{C}_i \mathrm{V}^{H}, \quad \gamma_i \in \mathbb{C}, \quad \mathrm{C}_i \in \mathbb{C}^{k \times k}. \end{aligned}\]
This gets us a computational speed up, although of a rather complicated kind. For a start, its constant factor is favourable compared to the naive approach, but it also has a somewhat favourable scaling with \(n\), being less-than-cubic although more than quadratic depending on some optimisation convergence rates, which depends both on the problem and upon optimal selection of \(\beta_i,\gamma_i\), which they give a recipe for but it gets kinda complicated and engineering-y.
Anyway, let us suppose \(\mathrm{U}=\mathrm{V}=\mathrm{Z}\) and replay that. Then \[ \mathrm{A}^{1 / 2}=\alpha^{1 / 2} \mathrm{I}_n+\mathrm{Z}\left(\left(\alpha \mathrm{I}_k+\mathrm{Z}^{H} \mathrm{Z}\right)^{1 / 2}+\alpha^{1 / 2} \mathrm{I}_k\right)^{-1} \mathrm{Z}^{H}. \] The Denman-Beaver step becomes \[ \begin{aligned} \mathrm{X}_{i+1} & =\frac{1}{2}\left(\mu_i \mathrm{X}_i+\mu_i^{-1} \mathrm{Y}_i^{-1}\right), & \mathrm{X}_0=\mathrm{A}, \\ \mathrm{Y}_{i+1} & =\frac{1}{2}\left(\mu_i \mathrm{Y}_i+\mu_i^{-1} \mathrm{X}_i^{-1}\right), & \mathrm{Y}_0=\mathrm{I}, \end{aligned} \]
This looked useful although note that this is giving us a full-size square root.
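A quick numerical check of the symmetric-case formula above (this snippet is mine, not from the Fasi, Higham, and Liu paper or the post): build A = alpha*I_n + Z Z^T with a tall-skinny real Z, form the square root through the small k x k matrix, and compare against a dense sqrtm.

import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
n, k, alpha = 50, 3, 2.0
Z = rng.standard_normal((n, k))
A = alpha * np.eye(n) + Z @ Z.T

# A^{1/2} = sqrt(alpha) I_n + Z ((alpha I_k + Z^T Z)^{1/2} + sqrt(alpha) I_k)^{-1} Z^T
small_root = sqrtm(alpha * np.eye(k) + Z.T @ Z)              # k x k square root
middle = np.linalg.solve(small_root + np.sqrt(alpha) * np.eye(k), Z.T)
root = np.sqrt(alpha) * np.eye(n) + Z @ middle

print(np.allclose(root @ root, A))   # True: it really is a square root of A
print(np.allclose(root, sqrtm(A)))   # True: and it matches the dense computation

The point, of course, is that the only dense matrix function evaluated here lives on k x k objects, which is what makes this attractive when k is much smaller than n.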
{"url":"https://danmackinlay.name/notebook/matrix_square_root","timestamp":"2024-11-08T20:04:50Z","content_type":"application/xhtml+xml","content_length":"40284","record_id":"<urn:uuid:7a9007e0-54fc-412c-82c2-15f53b13d8b6>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00578.warc.gz"}
Decoding the Value of 1.63092975 in Logarithms: A Comprehensive Exploration Mathematics, in its many branches, provides tools for understanding and solving various problems across scientific fields. Logarithms, one of these mathematical tools, offer a powerful way to simplify calculations and model natural processes. Among the different values that emerge in the study of logarithms, the specific number 1.63092975 in Log holds a unique place. This article will explore the significance of this number, uncovering its roots in logarithmic principles, its connection to logarithmic scales, and its relevance in practical applications across science and Logarithms, introduced by John Napier in the early 17th century, revolutionized the way people approach multiplication, division, and powers. With logarithms, complex calculations become simpler by transforming multiplicative relationships into additive ones. This transformation has far-reaching implications for various domains, from pure mathematics to applied sciences like chemistry, physics, and economics. The constant 1.63092975 is deeply tied to one such transformation, representing a specific logarithmic value that emerges from the analysis of logarithmic scales and measurement systems. To fully understand this number, it is crucial to delve into the broader context of logarithmic functions, their properties, and their practical uses. Understanding the Concept of Logarithms Before examining the value of 1.63092975 specifically, it is necessary to understand what logarithms are and how they work. In simple terms, a logarithm is the inverse operation of exponentiation. If you raise a number (called the base) to a certain power to get another number, the logarithm tells you what that power is. For example, if 23=82^3 = 823=8, the logarithm base 2 of 8 is 3, written as log⁡28=3\log_2 8 = 3log28=3. Logarithms are useful because they turn multiplication into addition, which simplifies many types of mathematical and scientific calculations. There are several types of logarithms, the most common being the common logarithm (log base 10), the natural logarithm (log base eee, where eee is approximately 2.718), and logarithms with arbitrary bases. Each of these has its own applications depending on the field of study. For example, natural logarithms are commonly used in calculus and mathematical modeling, while common logarithms are often used in engineering and the sciences. Logarithms also have a special relationship with exponential growth and decay. Processes that involve exponential growth, such as population increase or compound interest, are often analyzed using logarithmic scales to provide a clearer understanding of the growth patterns. Similarly, phenomena that involve decay, like radioactive decay or depreciation of assets, can be better understood using logarithmic analysis. The logarithmic scale condenses large ranges of values into more manageable numbers, making it easier to compare and interpret exponential processes. The Role of 1.63092975 in Log The value 1.63092975 in Log is particularly interesting because it represents the logarithm base 10 of a specific number: 43. This means that if you take 10 raised to the power of 1.63092975, you get approximately 43. Written mathematically, this relationship can be expressed as: log⁡1043=1.63092975\log_{10} 43 = 1.63092975log1043=1.63092975 At first glance, this may seem like just another logarithmic value. 
However, numbers like 43 have practical significance in many scientific and engineering contexts, particularly when working with logarithmic scales or when applying logarithmic transformations to data. The logarithm of 43 is a prime example of how logarithmic values can provide insight into systems governed by exponential For example, the value of 1.63092975 can appear in acoustics, where the measurement of sound intensity levels is often expressed on a logarithmic scale (decibels). In this context, logarithmic values help to compress the wide range of sound intensities that humans can perceive into a more manageable scale. The logarithmic nature of human hearing makes it essential to use such a scale to make meaningful comparisons between different sound levels. Furthermore, values like 1.63092975 could arise when working with log-transformed data in various other scientific fields. The Mechanics Behind Logarithmic Calculation To fully appreciate the importance of the number 1.63092975, it is worth exploring how logarithms are calculated. While logarithms can now be calculated instantly using a calculator or computer, understanding the underlying mechanics of logarithmic calculations helps clarify why certain logarithmic values are so useful in practice. Logarithms were originally calculated using tables that contained precomputed values for logarithmic functions. Before the advent of digital calculators, scientists and mathematicians relied on these tables for complex calculations. The values in these tables were often calculated using series expansions or numerical methods that approximate the values of logarithmic functions. One of the most famous methods for calculating logarithms is the method of successive approximations, which uses the fact that logarithmic functions are continuous and smooth to incrementally converge on an accurate The logarithm base 10 of a number xxx, denoted log⁡10x\log_{10} xlog10x, is defined as the exponent to which 10 must be raised to produce xxx. In general, finding the logarithm of a number that is not an exact power of 10 requires interpolation between known values. For example, 43 is not a power of 10, but it lies between 10 and 100, so its logarithm must lie between 1 and 2. The exact value of log⁡1043\log_{10} 43log1043 can be found using interpolation or by using numerical methods such as the Newton-Raphson method, which refines estimates of the logarithm through successive While these methods may seem antiquated in the modern era of digital computation, they played a crucial role in the development of early scientific research and engineering. Even today, understanding the mechanics behind logarithmic calculations can help scientists and engineers choose the right logarithmic scale for their data, leading to more accurate analyses and better-informed decisions. Logarithmic Scales and Their Applications One of the most important applications of logarithmic functions is the creation of logarithmic scales. Logarithmic scales are used in many different fields to represent data that spans multiple orders of magnitude. This type of scale is particularly useful when dealing with quantities that grow or shrink exponentially, such as population sizes, chemical concentrations, and sound intensity Logarithmic scales compress large ranges of values into smaller, more manageable numbers. This compression is achieved by taking the logarithm of each value in the dataset. 
For example, instead of plotting a graph with values ranging from 1 to 1,000,000, a logarithmic scale might plot the logarithms of these values, reducing the range to a more compact scale. This makes it easier to visualize data that would otherwise be difficult to interpret. Key Advantages of Logarithmic Scales: • Simplifying Exponential Relationships: Exponential growth or decay, which can be difficult to represent on a linear scale, is much easier to visualize and analyze using a logarithmic scale. This is because logarithmic scales translate exponential relationships into linear ones. • Handling Large Ranges of Data: Logarithmic scales are ideal for representing data that spans several orders of magnitude, such as earthquake magnitudes (Richter scale) or the intensity of sound waves (decibels). • Improving Data Interpretation: By compressing large datasets, logarithmic scales make it easier to identify patterns and relationships that would be difficult to spot on a linear scale. This is particularly useful in fields like finance, biology, and astronomy, where data can vary dramatically across scales. In many cases, the value 1.63092975 in Log, representing the logarithm base 10 of 43, might appear as part of a logarithmic transformation applied to data in these fields. Whether dealing with concentrations in chemistry, populations in biology, or financial trends, the ability to simplify large datasets using logarithmic transformations is invaluable. The Mathematical Properties of Logarithms To appreciate the full significance of the value 1.63092975, it is also important to understand the mathematical properties of logarithms. These properties provide the foundation for many of the practical applications of logarithms in science and engineering. One of the most important properties of logarithms is their relationship to exponentiation. This relationship is captured in the following three fundamental logarithmic rules: 1. Product Rule: log⁡b(xy)=log⁡bx+log⁡by\log_b (xy) = \log_b x + \log_b ylogb(xy)=logbx+logby □ This rule states that the logarithm of a product is equal to the sum of the logarithms of the factors. It simplifies multiplication into addition, which is particularly useful when dealing with large numbers or exponential growth. 2. Quotient Rule: log⁡b(xy)=log⁡bx−log⁡by\log_b \left(\frac{x}{y}\right) = \log_b x – \log_b ylogb(yx)=logbx−logby □ This rule states that the logarithm of a quotient is equal to the difference of the logarithms of the numerator and denominator. It simplifies division into subtraction, which can be helpful in analyzing ratios or proportional relationships. 3. Power Rule: log⁡b(xn)=n⋅log⁡bx\log_b (x^n) = n \cdot \log_b xlogb(xn)=n⋅logbx □ This rule states that the logarithm of a number raised to a power is equal to the power multiplied by the logarithm of the base. This property is particularly important in exponential growth and decay models, as it allows for the simplification of exponential expressions. These properties form the backbone of logarithmic manipulation and provide insight into how logarithmic functions work. The ability to break down complex multiplication, division, and exponentiation into simpler additive or subtractive operations is one of the key reasons why logarithms are so widely used in both theoretical and applied mathematics. Applications in Science and Engineering The practical significance of logarithms, and by extension the value 1.63092975, extends far beyond pure mathematics. 
Logarithmic functions are essential tools in many scientific and engineering disciplines, providing a way to model, analyze, and interpret data that involves exponential growth, decay, or variation over large scales. Acoustics and Sound Measurement One of the most well-known applications of logarithmic functions is in the measurement of sound intensity levels. The human ear can detect a wide range of sound intensities, from the faintest whisper to the roar of a jet engine. To make this range manageable, sound intensity is measured in decibels (dB), which are based on a logarithmic scale. In this comprehensive exploration, we have decoded the significance of the value 1.63092975 in Log in the realm of logarithms. Far from being a mere number, this value represents the logarithm base 10 of 43, a seemingly simple figure that holds profound importance in both theoretical and applied contexts. Through an understanding of logarithmic principles, their properties, and the ways in which they transform complex multiplicative relationships into simpler additive ones, we have seen how this specific logarithmic value emerges as a vital tool for simplifying real-world problems. Logarithms provide a powerful mathematical framework, enabling us to analyze systems governed by exponential growth and decay. Whether it’s the behavior of populations, the intensity of sound, or financial trends, logarithms compress and clarify large datasets, making them more manageable and interpretable. In fields such as acoustics, chemistry, and physics, the logarithmic transformation, including values like 1.63092975, enables us to break down data over vast scales and understand relationships that would otherwise be obscured. The mathematical properties of logarithms, including the product, quotient, and power rules, are fundamental to their wide application in science and engineering. By simplifying otherwise daunting calculations, logarithms provide an essential bridge between raw data and meaningful insights. Furthermore, the logarithmic scales derived from these properties, such as the Richter scale for measuring earthquakes and the decibel scale for sound intensity, demonstrate the practical power of logarithmic analysis in our everyday world.
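None of the code below appears in the original article; it is a minimal Python sketch, offered as an illustration only, of two ideas discussed above: approximating a base-10 logarithm by successive bisection of the exponent (a simple stand-in for the iterative methods mentioned in the section on the mechanics of logarithmic calculation, not the historical table-building procedure), and checking the product, quotient, and power rules numerically.

```python
import math

def log10_bisection(x, tol=1e-12):
    """Approximate log10(x) for x >= 1 by bisecting on the exponent e with 10**e = x."""
    lo, hi = 0.0, 1.0
    while 10 ** hi < x:          # widen the bracket until it contains the exponent
        lo, hi = hi, hi * 2
    while hi - lo > tol:         # halve the bracket until it is tight enough
        mid = (lo + hi) / 2
        if 10 ** mid < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# The iterative estimate agrees with the library routine.
print(log10_bisection(43), math.log10(43))

# Product, quotient and power rules, checked numerically.
x, y, n = 43.0, 7.0, 3
print(math.isclose(math.log10(x * y), math.log10(x) + math.log10(y)))  # True
print(math.isclose(math.log10(x / y), math.log10(x) - math.log10(y)))  # True
print(math.isclose(math.log10(x ** n), n * math.log10(x)))             # True
```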
{"url":"https://creativeter.com/1-63092975-in-log/","timestamp":"2024-11-11T00:53:39Z","content_type":"text/html","content_length":"210105","record_id":"<urn:uuid:52265a84-f7b6-4837-8b45-b6aeb5945507>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00703.warc.gz"}
Error Messages | HP 17bII specs
No solution was found for a Solver equation using the current values stored in its variables. Refer to page 246 in appendix B.
A warning, not an error, that the magnitude of a result is too small for the calculator to handle, so it returns the value zero. See page 47 for limits.
Attempted a two-list SUM calculation using lists of unequal lengths.
{"url":"https://manualsdump.com/en/manuals/hp-17bii/291578/288","timestamp":"2024-11-06T01:35:55Z","content_type":"text/html","content_length":"127645","record_id":"<urn:uuid:bcb0f5c0-470f-4f34-a066-5ced5261c8fd>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00510.warc.gz"}
Practice Exam 2 and 3 are in wrong order. – Q&A Hub – 365 Data Science Practice Exam 2 and 3 are in wrong order. Practice Exam 2 should be about linear regression and exam 3 should be logistic regression. 5 answers ( 0 marked as helpful) Hey Thiha, Thank you for your contribution! We will fix this issue as soon as possible. Kind regards, 365 Hristina There are some errors such as no correct answer in the choices in the exam questions I think. You should add the `report` feature in the exam questions. Hey again Thiha, Could you please refer to the questions that you think are incorrect? Are they from Practice Exam 2? Thank you! Kind regards, 365 Hristina Hi Hristina, Not from the practice exam. Course Exam 'Mathematics' No. 14 - A * A^T , I think answer should be [[18,9,26],[9,5,14],[26,14,45]]. It is not contained. I may be wrong. Course Exam 'Machine Learning with Python' No. 25, I think it is a typo error. It should be 'All but one advantage and one disadvantage are incorrect' instead of 'correct'. Sorry if I am wrong. Hey Thiha, Regarding the Mathematics exam, you are right. We will correct this mistake. Regarding the Machine Learning with Python exam, the interpretation of the answer is "All advantages and disadvantages are correct with the exception of one advantage and one disadvantage." However, we realize that the current formulation of the answer might be confusing, so we will rewrite the question such that there is no misunderstanding. Thank you again! Kind regards, 365 Hristina
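The matrix A from the Mathematics course exam is not shown anywhere in this thread, so the A used below is purely hypothetical; it is chosen only because its product happens to reproduce the answer the poster quotes, which makes it a convenient way to show how such a result can be checked with NumPy.

```python
import numpy as np

# Hypothetical matrix: the exam's actual A is not given in the thread.
A = np.array([[1, 1, 4],
              [0, 1, 2],
              [2, 4, 5]])

product = A @ A.T
print(product)
# [[18  9 26]
#  [ 9  5 14]
#  [26 14 45]]

# A @ A.T is always symmetric, a quick sanity check on any proposed answer.
print(np.array_equal(product, product.T))  # True
```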
{"url":"https://365datascience.com/question/practice-exam-2-and-3-are-in-wrong-order/","timestamp":"2024-11-03T00:33:52Z","content_type":"text/html","content_length":"121292","record_id":"<urn:uuid:4912970a-6e4b-4eb8-a308-9565556cfdae>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00364.warc.gz"}
quadratic semidefinite programming
We propose an algorithm for computing the projection of a symmetric second-order tensor onto the cone of negative semidefinite symmetric tensors with respect to the inner product defined by an assigned positive definite symmetric fourth-order tensor C. The projection problem is written as a semidefinite programming problem and an algorithm based on a primal-dual path-following …
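The abstract above concerns the general projection with respect to an inner product weighted by a fourth-order tensor C, which is solved there as a semidefinite program. As a point of comparison only, and not the authors' algorithm, the special case where C is the identity (the ordinary Frobenius inner product) admits a closed-form projection: diagonalize the symmetric tensor and clip its positive eigenvalues to zero.

```python
import numpy as np

def project_to_nsd(S):
    """Frobenius-norm projection of a symmetric matrix S onto the cone of
    negative semidefinite matrices: keep only the non-positive eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(S)
    clipped = np.minimum(eigvals, 0.0)
    return eigvecs @ np.diag(clipped) @ eigvecs.T

S = np.array([[1.0, 2.0],
              [2.0, -3.0]])
P = project_to_nsd(S)
print(np.linalg.eigvalsh(P))  # all eigenvalues are <= 0
```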
{"url":"https://optimization-online.org/tag/quadratic-semidefinite-programming/","timestamp":"2024-11-05T12:32:11Z","content_type":"text/html","content_length":"92288","record_id":"<urn:uuid:dc5bc9f0-ab03-4a03-b605-97b25fd496e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00889.warc.gz"}
How Do I Calculate Angles In A Triangle - TraingleWorksheets.com
Equation For Interior Angles Of A Triangle Worksheet – Triangles are one of the most fundamental shapes in geometry. Understanding triangles is crucial to studying more advanced geometric concepts. In this blog, we will review the various types of triangles and their angles. We will also explain how to calculate the area and perimeter of a triangle, and give an example of each. Types of Triangles: There are three kinds of triangles: …
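The excerpt above is cut off before its worked examples, so the following small Python sketch is offered only as an illustration, not material from the worksheet: it uses the fact that the interior angles of a triangle sum to 180° to find a missing angle, and Heron's formula to get the area from the three side lengths.

```python
import math

def third_angle(a_deg, b_deg):
    """Interior angles of a triangle sum to 180 degrees."""
    return 180.0 - a_deg - b_deg

def herons_area(a, b, c):
    """Area of a triangle from its three side lengths (Heron's formula)."""
    s = (a + b + c) / 2          # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(third_angle(60, 40))   # 80.0
print(herons_area(3, 4, 5))  # 6.0
```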
{"url":"https://www.traingleworksheets.com/tag/how-do-i-calculate-angles-in-a-triangle/","timestamp":"2024-11-13T11:32:24Z","content_type":"text/html","content_length":"47362","record_id":"<urn:uuid:49c5a477-401d-48c9-8fa2-00e29656d07d>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00313.warc.gz"}
Help with a formula not allowing me to format as number
May 14, 2020 10:53 AM
I have rate fields low, mid, high. The rate fields are lookups, formatted as decimals. Then I have a formula field IF(rate = "low", {low rate}, IF(rate = "mid", {mid rate}, IF(rate = "high", {high rate}, ""))) which results in the correct number, but the formula will not allow me to format as a number. What am I overlooking?
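The replies in this thread were not preserved in this capture, so the following is an assumption rather than the thread's accepted answer: the final fallback "" is a text string, and a formula that can return text is treated as text by Airtable, which disables number formatting. Returning BLANK() (or 0) instead keeps every branch numeric, so the field can then be formatted as a number:

```
IF(rate = "low", {low rate},
  IF(rate = "mid", {mid rate},
    IF(rate = "high", {high rate}, BLANK())))
```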
{"url":"https://community.airtable.com/t5/formulas/help-with-a-formula-not-allowing-me-to-format-as-number/td-p/114000","timestamp":"2024-11-03T16:29:00Z","content_type":"text/html","content_length":"331429","record_id":"<urn:uuid:9f748868-685b-49f3-8a58-82cd3e5f0239>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00857.warc.gz"}
Maths question stumps internet
There is a maths question making the rounds again; in fact, this particular problem has had people stumped for YEARS. But how could a simple little math question cause so much drama? How difficult is it really? Well, as it turns out, this particular maths question actually has more than one answer. The question is, can you figure any of them out?
So, depending on which way you were taught math in high school, and if you actually remember any of it, your answer will be different. Take a look at the math problem that's been causing quite a stir! Whether you love or hate math, this problem is fun to try to solve. Don't forget to share it with family and friends and see how well they did! Remember, don't cheat! That will just take all the fun out of it!
What is 8÷2(2+2)?
So, depending on what you learned, you would have to use one of two methods to figure this maths question out. These two methods are known as PEMDAS or BODMAS. Which did you learn in school? Have a look at what Mike Breen, American Mathematical Society public awareness officer, had to say in The Sun about these two common methods. Generally, the USA uses PEMDAS, while Britain and Australia prefer BODMAS.
"The way it's written, it's ambiguous. In math [sic], a lot of times there are ambiguities. Mathematicians try to make rules as precise as possible."
Both acronyms describe the same order of operations: brackets (parentheses) first, then orders (exponents), then division and multiplication from left to right, and finally addition and subtraction. The real source of the disagreement is the implied multiplication in 2(2+2). If you apply division and multiplication strictly left to right, you get 8 ÷ 2 × 4 = 4 × 4 = 16. If you treat 2(2+2) as a single grouped factor that binds more tightly than the division, you get 8 ÷ 8 = 1. (A short code sketch at the end of this article makes the two readings explicit.)
The answer to the maths question
Are you ready for it? What was your answer? If you came out with the answer 1 or 16, both would be correct, depending on which convention you follow!
Did you enjoy this math problem? If you did, you may like to give these a go, too!
Less than 2% of people get this one right! Can you solve it? Tiffy Taffy Do you think you know the answer? CLICK HERE to see if you got it right!
Here is another one: Can You Train Your Brain To Figure Out The Answer To This Tricky Math Sum? Confident that you know the right answer to this math puzzle? CLICK HERE to see if you did indeed!
But why stop there? Can You Solve This Tricky, Yet Simple Math Problem? The above maths question is tricky! Did you get it right? Find out HERE!
1. 'DOES IT ADD UP? Viral maths problem leaves people stumped so can YOU work out the answer?', The Sun, Lauren Windle. Published July 19, 2021.
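As a small illustration (not part of the original article), the following Python lines make the two readings explicit; the only difference between them is where the parentheses go.

```python
left_to_right = 8 / 2 * (2 + 2)    # division and multiplication taken left to right
implied_group = 8 / (2 * (2 + 2))  # 2(2+2) treated as one grouped factor

print(left_to_right)  # 16.0
print(implied_group)  # 1.0
```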
{"url":"https://mysticalraven.com/news/22372/maths-question-stumps-internet","timestamp":"2024-11-06T02:57:48Z","content_type":"text/html","content_length":"58462","record_id":"<urn:uuid:3aec725c-09ed-479c-b547-655b664f8344>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00676.warc.gz"}
Combination with Replacement Calculator
• Enter 'n' (total items) and 'r' (selection count).
• Check "Allow Zero Selection" if needed.
• Click "Calculate" to compute the result.
• View the result and calculation details below.
• Use "Calculation History" to track previous calculations.
• Click "Clear" to reset the inputs and results.
• Click "Copy Result" to copy the result to the clipboard.

What is Combination with Replacement?
Combination with replacement is a concept in combinatorics that refers to the number of ways to select a certain number of items from a set, allowing for the possibility of selecting the same item more than once (with replacement).

All Formulae Related to Combination with Replacement
1. Number of Combinations with Replacement: The number of ways to choose 'r' items from a set of 'n' distinct items with replacement is given by the formula C(n + r - 1, r), where:
   - C(n, k) represents the binomial coefficient, calculated as C(n, k) = n! / (k! * (n - k)!).
   - 'n' is the total number of distinct items in the set.
   - 'r' is the number of items to be selected from the set.
2. Number of Combinations with Replacement (Repetition Allowed): If you have 'n' types of objects, and you want to choose 'r' objects from them with replacement, the formula for the number of combinations is n^r, where 'n' is the number of distinct types of objects, and 'r' is the number of objects to be chosen.
3. Total Combinations with Limited Choices: If you have 'n' types of objects, and you want to choose 'r' objects from them with replacement, but you can choose from only 'k' distinct types of objects (k ≤ n), then the formula is k^r, where 'k' is the number of distinct types of objects you can choose from, and 'r' is the number of objects to be chosen.
4. Total Combinations of N Objects with R Repeated: If you have 'n' objects in total, and you want to choose 'r' of them with replacement, and each object can be repeated a maximum of 'm' times (0 ≤ m ≤ r), then the formula is C(r + n - 1, r - m), where:
   - 'n' is the number of distinct objects.
   - 'r' is the total number of objects to be chosen.
   - 'm' is the maximum number of times an object can be repeated in the selection.

(A short code sketch illustrating the basic formula appears at the end of this page.)

A Combination with Replacement calculator or the concept itself can be applied in various fields and situations where you need to count or calculate the number of possible outcomes or combinations when selecting items with replacement. Here are some applications across different domains:
1. Probability and Statistics: In probability theory, combination with replacement is used to calculate the probability of certain outcomes in experiments with replacement, such as drawing cards or selecting items from a set.
2. Sampling and Surveys: When conducting surveys or sampling from a population, combination with replacement helps in determining the number of different samples that can be obtained, considering that each item can be selected more than once.
3. Inventory Management: In inventory management, it is essential to calculate the number of ways to select items from a stock with replacement. This is useful for optimizing stock levels and predicting future demand.
4. Genetics and Biology: In genetics, combination with replacement is used in modeling genetic inheritance and population genetics.
It helps in understanding how alleles are passed from one generation to the next, considering the possibility of multiple offspring inheriting the same allele from a parent. 5. Chemistry and Chemical Engineering: □ In chemistry, combination with replacement can be applied to chemical reactions and mixing solutions, where multiple reactions or combinations are possible with the same reactants. 6. Data Science and Machine Learning: □ In machine learning, especially when working with bootstrapping techniques, combination with replacement is used to generate multiple resamples of a dataset with replacement. This is crucial for building robust models and estimating uncertainties. Using a Combination with Replacement Calculator offers several benefits across different fields and applications. Here are some of the key advantages: 1. Accuracy: Calculating combinations with replacement manually can be prone to errors, especially for large values of ‘n’ and ‘r.’ A calculator ensures accurate results every time, reducing the risk of mistakes. 2. Efficiency: Computationally intensive tasks involving large datasets or numerous combinations can be time-consuming when done by hand. A calculator performs calculations quickly, saving time and 3. Complex Scenarios: Combination with replacement calculators can handle complex scenarios with ease, including situations where there are multiple distinct types of items or where items have varying maximum repetition limits. 4. Versatility: These calculators can be used in various fields, from probability and statistics to genetics and finance, making them versatile tools for solving a wide range of problems. 5. Exploration: Calculators allow you to explore different scenarios by quickly adjusting the input values, making it easier to analyze how changing parameters affect outcomes. 6. Educational Tool: Combination with replacement calculators are valuable tools for teaching and learning combinatorics and probability theory. They provide students with a practical way to understand and apply mathematical concepts. Last Updated : 03 October, 2024 One request? I’ve put so much effort writing this blog post to provide value to you. It’ll be very helpful for me, if you consider sharing it on social media or with your friends/family. SHARING IS ♥️ Sandeep Bhandari holds a Bachelor of Engineering in Computers from Thapar University (2006). He has 20 years of experience in the technology field. He has a keen interest in various technical fields, including database systems, computer networks, and programming. You can read more about him on his bio page. 21 thoughts on “Combination with Replacement Calculator” 1. Vcooper The benefits of using a combination with replacement calculator are well-articulated, highlighting the advantages of accuracy, efficiency, and versatility in various problem-solving scenarios. 2. Bruce Thomas The description of the concept and its practical uses is thorough and well-explained, making it accessible to a wide audience. 3. Naomi29 The formulae provided for calculating combinations with replacement are clear and easy to understand. They serve as a handy reference for anyone working with such calculations. 4. Bward This article provides a comprehensive explanation of combination with replacement, including various formulae and real-world applications. It’s a valuable resource for understanding the concept and its practical uses. 5. 
Gshaw The article effectively conveys the relevance of combination with replacement in various fields, demonstrating its significance in modern problem-solving and decision-making processes. 6. Jayden Young Absolutely, the article strikes a good balance between technical details and real-world applications, making it useful for both students and professionals. 7. Sward Indeed, the benefits emphasize the practical value of using such calculators, especially in complex and computationally intensive tasks. 8. Lauren65 The applications mentioned in the article are fascinating, especially the use of combination with replacement in genetics and machine learning. It’s interesting to see how a mathematical concept has such diverse real-world implications. 9. Keeley Carter The article provides a clear and comprehensive overview of combination with replacement, making it accessible and informative for readers from various backgrounds. 10. Matthews Keith The article effectively conveys the theoretical foundations and practical implications of combination with replacement, making it a valuable resource for in-depth understanding and application. 11. Sienna Jones Definitely, the interdisciplinary applications of this concept shed light on its significance beyond traditional mathematical contexts. 12. Hunter Elsie Absolutely, the efficiency and accuracy offered by these calculators make them indispensable tools for a wide range of applications. 13. Anthony16 Absolutely, the real-world examples show how this concept is integral to numerous areas of study and application. 14. Ncook The benefits section clearly highlights the advantages of using a combination with replacement calculator. It emphasizes the accuracy, efficiency, and versatility of these calculators, making a strong case for their usage. 15. Mmitchell I agree, the connections to genetics and machine learning highlight the importance and relevance of this concept in modern scientific and technological fields. 16. Donna19 The real-world applications of combination with replacement, especially in data science and genetics, showcase the broad utility of this concept in contemporary fields of study and research. 17. Wyoung I completely agree. The applications across different domains make it clear how important this concept is in various fields. 18. Mmiller Absolutely, the applications demonstrate the wide-ranging significance of this concept in addressing complex problems across different domains. 19. Carter Aaron I completely agree. The versatility and practicality of this concept are clearly illustrated, making a compelling case for its importance. 20. Stewart Erin Definitely, the real-world examples illustrate the relevance and practical value of understanding combination with replacement in modern scientific and computational contexts. 21. Carrie28 Absolutely, the benefits are crucial, especially when dealing with complex scenarios and large datasets. It’s an essential tool for accurate and efficient calculations.
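To connect the formulae listed earlier on this page to something executable, here is a minimal Python sketch (not part of the original calculator): it evaluates C(n + r − 1, r) with math.comb and, for small inputs, cross-checks the count by enumerating the multisets directly with itertools.combinations_with_replacement.

```python
import math
from itertools import combinations_with_replacement

def count_with_replacement(n, r):
    """Number of multisets of size r drawn from n distinct items: C(n + r - 1, r)."""
    return math.comb(n + r - 1, r)

# Formula versus explicit enumeration for a few small cases.
for n, r in [(3, 2), (4, 3), (5, 2)]:
    enumerated = sum(1 for _ in combinations_with_replacement(range(n), r))
    print(n, r, count_with_replacement(n, r), enumerated)
# 3 2 6 6
# 4 3 20 20
# 5 2 15 15
```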
{"url":"https://calculatoruniverse.com/combination-with-replacement-calculator/","timestamp":"2024-11-01T19:04:57Z","content_type":"text/html","content_length":"260764","record_id":"<urn:uuid:7f73f3c7-7388-42df-8c12-16042e84bdb5>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00246.warc.gz"}
Bernard Walliser¹ Denis Zwirn Hervé Zwirn²

¹ EHESS (54 Bd Raspail, Paris, France)
² CMLA (ENS Paris Saclay, 61 avenue du Président Wilson, 94235 Cachan, France) and IHPST (CNRS, ENS Ulm, 13 rue du Four, 75006, Paris, France)

Abstract: Analogy plays an important role in science as well as in non-scientific domains such as taxonomy or learning. We make explicit the difference and complementarity between the concept of analogical statement, which merely states that two objects have a relevant similarity, and the concept of analogical inference, which relies on the former in order to draw a conclusion from some premises. For the first, we show that it is not possible to give an absolute definition of what it means for two objects to be analogous; a relative definition of analogy is introduced, only relevant from some point of view. For the second, we argue that it is necessary to introduce a background over-hypothesis relating two sets of properties; the belief strength of the conclusion is then directly related to the belief strength of the over-hypothesis. Moreover, we assert the syntactical identity between analogical inference and one case induction despite important pragmatic differences.

Keywords: analogy, induction, reasoning mode, similarity, taxonomy

Analogies are factual statements, on which a reasoning procedure may rely in order to draw a new conclusion. Such reasoning by analogy is a very usual mode of reasoning, explicit or implicit in epistemic practice. Many examples punctuate the history of science, for instance the parallelism of structure between hydraulic and electric systems. Some typical applications are at work in everyday life, for instance when a child learns a new language or when a lawyer compares different normative situations. Various analogies are used to exemplify some ideas, for instance when comparing natural and artificial selection. Finally, analogies help to build taxonomies, classes being created by selecting the most pregnant similarities.

More precisely, three main functions of analogies and analogical reasoning may be considered. The didactic function aims at providing a simple evocating image, either realist or poetic, of some complex phenomenon, in order to fulfill a communicative or a pedagogic aim. The heuristic function consists in suggesting the possible existence of some new property possessed by an object when it is similar to another in other respects. The argumentative function intends to sustain more firmly the belief in a new property attributed to some object on the basis of its similarity to another object. The first function concerns situations where the focus is on analogical judgments only; the two others concern analogical reasoning, with different epistemic status of its conclusion.

Of course, any analogical reasoning can be judged as more or less relevant. Hesse (1966) asserts that some deep analogical reasoning forms the core of scientific research. Conversely, Bouveresse (1999) observes that several sociological analogies result from a fanciful mode of reasoning. A majority of observers think that analogies and analogical reasoning have to be accepted or discarded through a one by one judgment. More constructively, some epistemological works try to link and even to reduce analogical reasoning to more classical ones, while others view it as a specific reasoning mode.
The paper assumes that reasoning by analogy obeys the same syntactical principles whatever its field of application or its function. These principles will be expressed in a formal way by avoiding as much as possible some frequent ill-defined or ambiguous concepts (“essential”, “causal”, “relevant”). Such an underlying structure allows to compare analogical reasoning to other formalized reasoning modes such as deductive or inductive. It helps moreover to evaluate the rationality of analogical arguments.

Reasoning by analogy will be decomposed in two successive steps. An “analogical statement” relies on a certain kind of similarity between two objects based on common properties, in a relative rather than absolute sense. An “analogical inference” or an “analogical argument” relies on an analogical statement in order to transfer some additional property from one object to the other, by taking into account an explicit over-hypothesis. These two steps are strongly linked since an analogical statement may prepare an analogical argument, contrary to a mere similarity. The complete procedure allows to evaluate the rational belief in its conclusion with regard to its assumptions.

A general framework, exclusively composed of objects and properties, is first introduced. In order to ground the analogical statement, a relative definition of analogy between two objects is expressed from a given point of view (§2). To specify the analogical inference, a background over-hypothesis linking two sets of properties is made precise. The degree of belief in the conclusion of the inference process depends then on the degree of belief in the over-hypothesis (§3). It is easily shown that analogical reasoning is syntactically identical to one case induction, even if their more common examples are pragmatically different, and a new insight is proposed on the deep reason of this difference. The analogical reasoning mode may also be related to case-based reasoning (§4). A critical overview of the more recent philosophical and logical works related to analogy is further provided (§5). Some conclusions about the specificity of our approach are sketched and some insights for future analysis are suggested (§6).

Analogical statement

2.1. General framework

We adopt the framework of first order logic. We assume the existence of a universe X of objects, which are constants of the language, and denoted X, Y,… Objects can be material (people, cars, trees) as well as symbolic (numbers, propositions, values). Objects can be specific (John, my car, the Eiffel tower) or generic (a man, a car, a monument). Note that a specific object is just a given entity while a generic object is already a class of specific ones. We suppose moreover that a set P of properties, defined on the universe X of objects and denoted P, Q, … is given. Properties are predicates (which can be one-place or many-place). They can concern either material aspects (to be red, to be heavy) or symbolic ones (to be greater than 3, to be nice). For a given object X, a property P is said to be relevant or not whether it applies or not to that object. For instance, a ball is red or not, but this is irrelevant for a number: so the property “to be red” is undefined for a number.

An analogy always relates in an oriented way two or more objects belonging respectively to two different domains, the “source domain” and the “target domain”. Two types of relations are introduced when linking these domains (Hesse, 1966).
The horizontal relations link similar properties present in the two domains. The vertical relations link different properties present in the same domain.

An analogical statement can be of different types. The simplest analogical statement is called “notional analogy”. It expresses that “A is like B”, written “A ~ B”, and just states that there is a specific kind of similarity between two specific objects (John is like Ophelia) or between two generic ones (an airplane is like a bird). The second object B belongs to the source domain while the first A belongs to the target domain. This inversion comes from the fact that, in an analogical inference, it is a property of the source that is transferred to the target (see §2.2).

A more elaborate form is called “relational analogy”. It expresses that “A is to B what C is to D”, written “A:B :: C:D”, and actually points to a similarity between two couples of objects. It can concern specific objects (Dante is to Italy what Shakespeare is to England), generic ones (a hoof is to a horse what a foot is to a man) or even mixed ones (beer is to Belgium what wine is to France). This form is in fact not logically different from the previous one since it can be re-written as a notional analogy between two couples: “(A,B) ~ (C,D)”. Relational analogy has often been confused with analogy in general, since it gave its name to the concept in Aristotle’s work, αναλογία applying to an identity in proportion. That is why it is often called “proportional analogy” in the literature, though it does not rely only on the numerical concept of proportion. An analogical statement can be extended with the same method to a t-uple of objects, and stays always reducible to the simplest notional form.

Hence, an analogy may concern two formal models, where a model represents a multidimensional generic object. It is called “structural analogy” and is written [A] ~ [B] where [X] is a model. Such a structural analogy can be defined at two levels. It is “formal” when it concerns only the respective structure of related variables and equations (the Lotka-Volterra model in biology is like the Goodwin model in economics); Hempel (1965a) speaks of a nomic isomorphism. It is “substantial” when some common interpretation is involved (the Fechner law in psychology is like the utility of money in decision theory; they both concern the marginally decreasing subjective effect of a material quantity).

It is generally accepted that analogy, although it logically relies on a similarity ex post in a symmetrical way, is not symmetrical ex ante, contrary to the general concept of similarity. Analogy is associated with an illocutionary intention that distinguishes a part which inherits a property from a part for which the property is well known. When one says “your eyes are blue like the sky”, the well-known property of the intense blue of the sky is attributed to someone for making her a compliment. However, this remark concerns the intentional aspect contrary to a purely formal syntactic point of view. Despite this pragmatic non-symmetrical feature, a notional analogy may be considered syntactically as an equivalence relation satisfying the following axioms:
- Reflexivity: A ~ A (an airplane is like an airplane)
- Symmetry: if A ~ B, then B ~ A (if an airplane is like a bird, then a bird is like an airplane)
- Transitivity: If A ~ B and B ~ C, then A ~ C (if an airplane is like a bird and a bird is like a bee, then an airplane is like a bee).
For a relational analogy, these principles become:
- Horizontal Reflexivity: A:B :: A:B
- Horizontal Symmetry: if A:B :: C:D then C:D :: A:B
- Horizontal Transitivity: if A:B :: C:D and C:D :: E:F then A:B :: E:F

Since not every equivalence relation pretends to be an analogy, we need more principles in order to characterize what it is. Of course, it is necessary that analogy does not reduce to identity (if any couple is formed of same elements) or to triviality (if any possible couple satisfies it). It is also required to propose a logical definition which is independent of any specific field of objects. However, following Quine (1969), it can be shown that it is not possible to propose a logical definition of absolute analogy, since it is not even possible to give a logical definition of the weaker notion of similarity. In any case, analogy is defined according to the properties shared by two objects. The simplest intuitive definition is:

(1) A ~ B iff there exists some property P such that P(A) & P(B)

Analogy is defined by the existence of a property commonly shared by both objects. Of course, it is an equivalence relation but this definition is far too lax and is even trivial: it is always possible to find a common property between any two objects. For example: a tiger is like a zebra, they have stripes. A more restrictive attempt is the following:

(2) A ~ B iff for any property P, P(A) & P(B)

But this covalence of properties leads to an extreme situation which reduces analogy to identity. Quine (1969) proposes to try an intermediary definition of similarity:

(3) A ~ B iff A and B have “many” common properties

But, as he highlights, this notion is too vague because one cannot tell how many properties are required. In fact, the question is to determine what counts as a property. If any set of objects counts as a property, then any two objects will be members of an arbitrary number of sets and will share “many” properties. If one restricts the type of sets to the properties collecting similar objects, we are led to a circular definition. An attempt to escape from the problem faced by those general definitions may be to restrict the admissible properties to a unique subset of externally defined properties that are relevant for any object, say W. Hence, amended formulations of (1), (2), (3) may be:

(1a) A ~ B iff there exists a property P belonging to W such that P(A) & P(B)
(2a) A ~ B iff for any property P belonging to W, P(A) & P(B)
(3a) A ~ B iff A and B have “many” common properties P belonging to W

Clearly, (1a) may prevent the triviality of (1), (2a) may prevent the reducibility of (2) to identity, (3a) may prevent the arbitrary nature of (3). The question is then to be able to give a relevant definition for a universal W since universality is required for a definition of absolute analogy. An intuitive way to do this would be to identify W with the set of “natural kinds”, also considered by Quine (1969), who pointed the intuitive relationship between this concept and the notion of similarity: a natural kind is a collection of similar objects and reversely similar objects seem to be those very objects which are instances of the same natural kind. However, this approach leads to many philosophical issues and is highly controversial, unless one accepts a very specific essentialist position (see Bird and Tobin, 2015, for a critical presentation of this position).
Firstly, the properties which define natural kinds are supposed to be the properties which are “really important” for classifying objects in “genuinely natural ways”. But critics of that position deny that any of our classifications is natural. Classifications are mere human tools built within current language and science for practical purposes, not hard categories in the world as for Plato. Dupré (1993) argues for instance in favour of a "ubiquitous realism," stressing that there are always a large number of ways to build taxonomies and kinds, depending on the theoretical interests pursued. In fact, what is called "natural kinds" does not correspond to essences or necessities existing in nature but to evolving categories that are established according to complex pragmatic optimizations. Secondly, Quine’s argumentation leads to the conclusion that defining natural kinds relies again on too vague notions or to obvious circularities with the notion of similarity. Hence there is neither philosophical nor logical way to define “natural kinds”, and to identify which set of properties could be a proper W.

But the impossibility to find a formal definition of absolute analogy comes moreover from the debates necessarily open when an analogy is refuted by a counter-analogy. The counter-analogy suggests a better partner than the one proposed, for the source as well as for the target. For instance, for a notional analogy:
- Bruges is the Venice of the North; it is a city built on canals.
- No, Bruges is not the Venice of the North; it is a city that never had any major economic influence.
- It is Antwerp which is the Venice of the North, since it was a European economic capital like Venice.

Likewise, for a relational analogy:
- Freud is to psychoanalysis what Piaget is to cognitive developmental psychology, its most well-known inventor;
- No, Freud is not to psychoanalysis what Piaget is to cognitive developmental psychology, he was not its first inventor.
- It is Joseph Breuer who is to psychoanalysis what Piaget is to cognitive developmental psychology, since he is the first inventor of psychoanalysis according to Freud himself.

These debates underline the vacuity of vindicating any absolute analogy: even if A and B were similar with respect to one point of view, they would usually differ from another point of view. No pair of non-identical objects are similar from the standpoint of all possible properties, even if we limit these properties to current categories: is an apple similar to a pear, because it is a fruit, or to a tennis ball, because it is round? So we are led to consider that any analogy has always to be expressed with respect to one property or to a domain of properties. This is obviously an appropriate answer to the debates on analogical statements: there is no more any assertion stated from a universal standpoint but only relative points of view on the similarity between two objects. The simplest way to represent relative analogy for notional analogy is again:

(1b) (A ~P B) iff there is some property P such that P(A) & P(B)³

For instance, an apple is like a pear, relatively to “fruitness”, they are fruits. For relational analogy, the condition states:

(1b’) A:B ::R C:D iff there is a relation R (which is a two-place predicate), such that R(A,B) & R(C,D).

For instance, Paul is to Ana what Bob is to Julia, relatively to “sonness”, he is her son. But on second thought, this seems to be a very restrictive way to express things.
Indeed, an analogy is significant because it spotlights that two objects share one particular property among a list of other possible properties all pertaining to a certain way of describing the objects. These two cars are analogous relatively to the colour if they are both blue, these two animals are analogous relatively to the species if they are both dogs. But it would be rather odd to say that these two cars are analogous relatively to their “blueness” or that these two animals are analogous relatively to their “dogness”. What is meant is the fact that these objects are analogous relatively to some “point of view” which can be expressed by a set of related properties (for example colours or animal species). They share the same property in this set while they could have two different ones (one car could be blue and the other one red, one animal could be a dog and the other one a cat). What is stressed by the analogy is that it is not the case that they have two different properties inside the set which is considered. So these cars are analogous relatively to their colour, these animals are analogous relatively to their species.

Let’s define a domain Z as a set of possible disjoint properties⁴ (or disjoint values of one property) which are associated with the same point of view. The relativization of any analogy to a given domain expresses the speaker's intention to choose a specific point of view and her intention to speak only with regard to this aspect of the world. A “point of view” is a mental attitude consisting of applying a filter on the properties of things or events. This notion is represented by the set Z which is not any set of properties, but a set of disjoint properties which are then correlated through this disjunction. Relative notional analogy can then be defined by:

(1c) (A ~Z B) iff there exists a property P within the domain Z, such that P(A) & P(B)

For instance, an apple is like a pear, with regard to vegetal kinds, they are fruits. Likewise, relative relational analogy is defined by:

(1c’) A:B ::Z C:D iff there is a relation R within the domain Z, such that R(A,B) & R(C,D)

For instance, Paul is to Ana what Bob is to Julia, with regard to family relationship, he is her son.

³ Be careful not to confuse definition (1b) with definition (1) whose form is very near. The latter was intended to define analogy in an absolute form. The former uses the same condition (the existence of a common property) but states that the two objects are similar only relatively to this property. That’s why it is denoted ~P.
⁴ The fact that these properties are disjoint means that no object can satisfy simultaneously two properties.

One can check that relative notional analogy expressed by (1c) or by (1c’) satisfies all the minimal principles required for a relevant definition of analogy. It can be used for any kind of object and satisfies the usual theoretical properties:
- it is an equivalence relation.
- it is neither reduced to identity nor to triviality.
- it is not circular since Z is chosen by the agent with respect to the point of view she wants to stress and does not need analogy to be defined.

Observe that reflexivity and symmetry are obvious and transitivity comes from the fact that Z is defined as a list of disjoint properties. It is noticeable that transitivity is not respected with the usual definition of absolute analogy since the properties shared by A and B are not necessarily the same as those shared by B and C, which cannot be the case here.
It could be possible to argue that the situation can be a little bit more complex if one considers several common properties or relations involved in the relative analogy. For instance:
- An apple is like a pear, with regard to vegetal kinds and colours, they are yellow fruits.
- Paul is to Ana what Bob is to Julia, with regard to family relationship and social relationship, he is her son and he doesn’t care about her.

But these situations may be reduced to the simplest one by using a conjunction of predicates. For instance: to be yellow and to be a fruit for the first example and to be a son and to not care about his mother, for the second. The unique domain Z is then the set of disjoint conjunctions formed by using these two predicates or relations.

The situation may be even more complex when the common properties involved in analogical statements are different, though appearing to “correspond”, as several authors (Hesse (1966), Juthe (2005), Bartha (2010)) pointed out. The problem is then to define rigorously this intuitive but vague notion of “correspondence” between properties, and to ask if this should induce a simple refinement or a radical change in our general definition of analogical statements. We can start with a simple example. While lungs (A) make it possible to breathe in the air (P1), gills (B) make it possible to breathe in the water (P2). The lungs therefore seem to be analogous to the gills, although their properties are indeed different: the associated chemical transformations are not the same. This is because they share a common property, “making extraction of oxygen possible”, from water or from air. Each of those properties is an application of this general property to animals that are aerial or aquatic. It is in this precise meaning that they "correspond". This idea may be made more precise. The property to breathe in the air and the property to breathe in the water both imply the property to breathe. Hence lungs and gills share a same general property: to make breathing possible, which is one of the alternative functions of vital organs (others being for instance digestion, blood circulation…) whose set is the domain from the point of view of which the analogy is expressed.

Let’s make it more general: Let Z be a domain of properties over a domain of objects D, Z1 be the restriction of Z on D1 ⊂ D, Z2 be the restriction of Z on D2 ⊂ D. “P1 corresponds to P2 in Z” if there are properties P1 within Z1, P2 within Z2, P within Z such that ∀X P1(X) → P(X) and ∀X P2(X) → P(X). Then, we see that:

If [P1(A), P2(B), (P1 corresponds to P2 in Z)], then (A ~Z B) because P(A) and P(B).

Thus the cases of analogies with corresponding but different properties may be easily cast into the general definition of analogies. This reduction to the simpler general case can easily be extended to the correspondence between properties Q1 and Q2 transferred by analogical inferences. This form of analogy is frequent for scientific models, for instance when the equations of different domains express the same mathematical relations between obviously different but “corresponding” measures, for example those of electric and hydraulic networks. One can say they share a common point of view, that of “constrained flows”. The electric intensity is like the hydraulic debit because they both imply a quantity of fluids. The tension is like the pressure variation because they both imply a potential of movement.
Moreover, the law of nodes (for intensities as well as debits) and the law of loops (for tensions as well as pressure variations) apply to both of them. Finally, the Ohm law linking linearly intensity to tension is analogous to the law linking linearly debit to pressure variation.

3. Analogical inference

Typically, analogical inference consists in using a similarity between two objects as a premise for inferring new similarities as conclusions. Then, in a way, it is considered that an analogical statement is a kind of similarity judgement which justifies its extension to other properties: it is this point which differentiates “mere similarities” from “analogies” or “relevant similarities”. As expressed by Bartha (2010): “An analogical argument is an explicit representation of analogical reasoning that cites accepted similarities between two systems in support of the conclusion that some further similarities exist”. An analogical inference uses an analogical statement to transfer the properties of an object to another object. The analogical statement may be explicit or not, and represented by facts which imply it, as illustrated for instance by the “violinist argument” (Thomson, 1971). Let Q be a new one-place predicate in P or S a new two-place predicate. In the two basic forms (notional analogy and relational analogy), an analogical inference states that:

(4) [A ~Z B, Q(B)] ⊢ Q(A)
(4’) [A:B ::Z C:D, S(C,D)] ⊢ S(A,B)

An analogical inference inherits the asymmetric nature of an analogical statement: it is based on the property of the “source” transferred to the “target”. The symbol ⊢ denotes a relation of entailment between premises and conclusion. It has to be further characterized, and may be assimilated to one of the many relations of entailment that have been described in the literature. Besides deduction, one can cite non-monotonic logic (Kraus, Lehmann and Magidor, 1990), belief revision (Alchourron, Gärdenfors & Makinson, 1985), inductive logic or probabilistic logic. Our thesis is that ⊢ should not be assimilated univocally to any one of these relations of entailment but instead that it should be interpreted according to the status of a background over-hypothesis that is necessary for understanding the analogical reasoning. For the very same reasons that led us to introduce a domain Z listing the considered properties for defining the relative analogy which is the premise, we introduce a domain Z’ listing the properties that will be considered for the conclusion. The analogical reasoning (4) can now be developed into:

(5) [P ∊ Z, Q ∊ Z’, P(A), P(B), Q(B)] ⊢ Q(A)

Why should we accept this inference scheme? Our answer does not rely on the construction of a new consequence relation but on external supplementary hypotheses used by the person who argues in favour of the conclusion. It is a meta-linguistic analysis (Jackson, 1991). More precisely, one feels that Z and Z’ must be linked in some way but it is not easy to define formally what this link is. In fact, it will be shown that this link is achieved by complementary hypotheses that will play the role of selecting the relevant domain Z’ knowing Z. For simplicity, these complementary hypotheses will be summarized in an over-hypothesis HE such that in its presence, the belief in the conclusion is related to the premises in an intuitively relevant way. The prior role of this assumption is not to transform these arguments into deduction, but to discard the irrelevant reasoning associated with possible instances of the purely syntactical criterion.
This external assumption HE will be integrated in the reasoning as follows:

(6) [(A ~Z B), Q(B), HE] ⊩ Q(A)

and developed into:

(7) [P ∊ Z, Q ∊ Z’, P(A), P(B), Q(B), HE] ⊩ Q(A)

The symbol ⊩ is used instead of ⊢ in order to acknowledge the fact that including HE in the premises gives a better epistemic status to the inference. We will discuss later the precise status that must be given to ⊩. The extension to relational analogies or to analogies between t-uples is obvious, replacing P and Q by two-place or t-place predicates. Before being complemented by HE, these reasoning modes look like enthymemes, i.e. inferences in which a premise is lacking (according to the modern meaning of a concept created by Aristotle with the broader meaning of “deductions from likelihoods and indices”, see Boyer, 1995). Musgrave (1989) was one of the first to suggest the transformation of inductive inferences into deductive enthymemes. But understanding analogy and induction requires giving a relevant formal account of these enthymemes, which fulfills several constraints and does not necessarily lead to deductive conclusions.

3.2. Structure of the over-hypothesis

In order for HE to be relevant for explaining how analogical inference runs, we impose to it the two following additional principles:
- HE must not lead to “trivialize” the reasoning in a way which would make unnecessary the consultation of one of the other premises. This is the “non-redundancy” condition.
- the empirical protocol for believing in HE must be coherent, accessible and itself not exposed to a higher-level redundancy.

We consider now more and more sophisticated expressions of HE. Let’s examine a first candidate:

(8) HE1: ∀X, [P(X) → Q(X)]

Trivialization is obviously at work in this extreme case. Due to the redundancy of the premises, the reasoning becomes both trivial and deductive (the conclusion is certain). Analogical inference becomes a “focusing” operation from a prior generic belief on a specific case: knowing that P(A), the conclusion Q(A) is acquired without any need to refer to P(B). Changing HE1 into a probabilistic relationship between Q(X) and P(X) would not change drastically this result. Indeed, the preceding over-hypothesis HE1 is deterministic and describes an explanation scheme of Q by P like the “Hempel-Oppenheim explanation”. Then one may rely on a weaker entailment of Q by P as in Hempel’s (1965a) inductive-statistical explanation:

(9) HE2: Pr(Q(X) / P(X)) = α

Analogical inference is no longer deductive since it is now defeasible: a further observation may question the conclusion. But it is still redundant: it is not necessary to consult B in order to get a probability degree over Q(A). So it does not do the job. To go further, Davies & Russell (1987) proposed an interesting candidate for HE, called the “determination clause”. It is initially written as follows:

(10) HE3: [∀X, (P(X) → Q(X))] or [∀X, (P(X) → -Q(X))]

The conclusion is again obtained deductively, but involves all premises: it is necessary to use Q(B) to infer Q(A). For instance, if HE3 states that: any golden object is insensible to acid or any golden object is sensible to acid, the fact that my watch is golden and insensible to acid implies that yours which is also golden is insensible to acid. How is it possible to acquire the knowledge of a hypothesis like HE3? If all X such that P(X) have been observed, the fact that Q(X) or -Q(X) for all these X is already known and there is no need for the analogical reasoning to know that Q(A).
Of course, if -Q(X) is the case, HE3 is irrelevant for this analogical reasoning which is false. But if the fact that Q(X) or -Q(X) for all these X is already known, the belief in HE3 is reduced to the belief in one of the two possibilities, either to HE1 or to its contrary and we are back to redundancy. If only some X such that P(X) have been observed and if all associated objects are such that Q(X) or all such that -Q(X), the belief in HE3 may stem from an inductive process leading possibly to a probabilistic belief but again, the belief will concern only one of the two possibilities mentioned in HE3. Every empirical observation which does not refute HE3 will lead to believe either in its first part or in its second part. Then there is no way to learn a hypothesis such as HE3 as it is. The problem comes from the fact that HE3 is not the right over-hypothesis that one needs in order to complete the analogical reasoning. This is again a question of the right level of properties to express things. The domains Z and Z’ whose important role has been noticed have a role to play in the over-hypothesis. A more relevant over-hypothesis, whose important difference with HE3 is not mentioned in Davies & Russell (1987), is then the following:

(11) HE4: For any property P in Z, for any property Q in Z’, [∀X, (P(X) → Q(X))] or [∀X, (P(X) → -Q(X))]

It is important to stress the fact that due to the quantification over Z and Z’, HE4 is actually a set of hypotheses for each object X. The meaning of HE4 is worth being explained. Z is a list of disjoint properties, Z’ is a list of disjoint properties. Then each object X can satisfy at most one property in Z and at most one in Z’. HE4 means that all objects satisfying one given property in Z must satisfy one same other property in Z’. In a way, HE4 links each property in Z with (at most) one property in Z’. For instance, consider that this car is like mine, it's a Chevrolet Silverado, and I want to infer that it costs nearly the same price as mine. Davies & Russell’s HE3 hypothesis would state: every Chevrolet Silverado costs between 28 000 € and 32 000 € or no Chevrolet Silverado costs between 28 000 € and 32 000 €; my Chevrolet Silverado costs 30 000 €; hence this other Chevrolet Silverado should cost between 28 000 € and 32 000 €. The proposed HE4 over-hypothesis states instead: the cost of any car of a given type is situated in a range of prices; my Chevrolet Silverado costs between 28 000 € and 32 000 €; hence this other Chevrolet Silverado should cost between 28 000 € and 32 000 €. Ranges of price are defined exogenously by the brand of cars.

It is exactly the situation which was suggested by Goodman (1947) in his “prospects for a theory of projection”. Suppose that we are interested in the colors k of the marbles drawn from a bag h which belongs to a stack of bags. What he calls “over-hypothesis” H of a hypothesis G such as “all the marbles in the bag B are red” is a hypothesis H such as “every bagful of the stack is uniform in color”. Goodman considers the situation where many bags of the stack have been observed (but not the bag B itself) and where this observation leads to confirm H. Having H in mind, observing a red marble from the bag B will support G. In this case, Z is the set of predicates “to belong to the bag number h” and Z’ is the set of predicates “to be of color k”. The over-hypothesis is the fact that each bag is associated with only one color. Let’s come back to the example of the car.
HE3 states that every Chevrolet Silverado costs between 28 000 € and 32 000 € or no Chevrolet Silverado costs between 28 000 € and 32 000 €. As noticed, knowing that would mean having examined all (or a very large number) of Chevrolet Silverado and noted that all cost between 28 000 € and 32 000 € (otherwise this would be incompatible with the premise that mine costs between 28 000 € and 32 000 €). But in this case, it is not HE3 that we would believe in but in “Every Chevrolet Silverado costs between 28 000 € and 32 000 €”. Imagine now that we have observed a large number of different models of cars and noted that all cars of the same type cost approximately the same price. This observation, which is really plausible, would lead to the general hypothesis: for any car of a given type, the cost lies within the same range. So, if Z is a list of types of cars and Z’ a list of mutually exclusive ranges of price, this would be expressed in HE4 form as:

[For any car A, for any range of price T, [∀ car, (car of model A → price of the car is in range T)] or [∀ car, (car of model A → price of the car is not in range T)]]

We immediately see that, contrary to HE3, the process allowing to acquire the belief in HE4 is realistic and it does not lead to a mutilation of the hypothesis in only one part of the alternative. Moreover, it appears that HE3 is very strange. In absence of the premise “my Chevrolet Silverado costs 30 000 €”, one does not see where the hypothesis “every Chevrolet Silverado costs between 28 000 € and 32 000 € or no Chevrolet Silverado costs between 28 000 € and 32 000 €” could come from. Why use this particular range? The only answer seems to be because many Chevrolet Silverado whose price lies in this range have been observed. But in this case, why state the whole HE3 and not only “every Chevrolet Silverado costs between 28 000 € and 32 000 €”? On the contrary, HE4 does not mention any particular range. It just states that, having made a partition of all possible prices in some arbitrary ranges for any car, each model of car is associated with only one range. This is not only more reasonable but intuitively, this is exactly the way we think in various subjects. Indeed, such an over-hypothesis frequently states a regularity between classes of objects already defined:
- species determines what animals eat;
- age and skills determine the class followed by pupils;
- nationality determines the mother tongue.

Each one of these examples is an example of HE4: a partition of a set Z of possible properties is set in a functional relation with a partition of another set Z’ of possible properties. This type of belief is usually acquired in a very natural way: having seen many objects satisfying one property of a list of properties of the same type, it appears that each of them also satisfies one property of another list and that to each first property, the same second property is always attached. This process is typically inductive in that we infer a general law from a limited (but possibly large) number of cases. But the over-hypothesis could also result from an abductive process since the conclusions may be the generic best explanation of the observations.

One may worry about the fact that this way of conceiving an analogical inference is leading to an infinite regression, since it relies on an over-hypothesis which has itself to be justified. But it is not the case since foundationalism is not the goal here.
The way HE4 is acquired is not itself a part of the analogical reasoning: what is required is only that it be possible to attribute to HE4 a degree of belief through a clear empirical protocol, without redundancy with the analogical reasoning that it supports. But this degree of belief is exogenous to the present inference.

3.3. Uncertainty on the over-hypothesis

Of course, considering the ways in which they may be acquired (inductively or abductively; see Walliser et al. (2005) for a formalisation of abductive reasoning), hypotheses of type HE4 are most of the time not certain. One just has a certain degree of belief in them, depending on the process by which they were acquired. Sometimes they can be given a probability; sometimes they can be represented by a non-monotonic inference or by other types of quantitative measures of uncertainty. The important point is that, in general, as they are not certain, the conclusion of the analogical reasoning inherits the uncertainty that affects them.

From the premises [P ∈ Z, Q ∈ Z', P(A), P(B), Q(B), HE4], supposed all true, the conclusion Q(A) is obviously obtained deductively. What makes the analogical reasoning leading from [P ∈ Z, Q ∈ Z', P(A), P(B), Q(B), HE4] to Q(A) non-deductive is the fact that the premise HE4 is not known to be true but carries only a certain degree of belief. So analogical reasoning is not a new consequence relation but a specific "inferential scheme" which may rely on different kinds of beliefs. If an agent has a belief of type Bel(HE4), then he should have a similar belief Bel(Q(A)) in Q(A). A good analogical inference is not an inference relation leading to a plausible conclusion; it is an inferential scheme that gives a degree of belief in the conclusion which is coherent with the degree of belief in the over-hypothesis HE4.
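If beliefs are read as probabilities (one possible reading among those just mentioned), this coherence requirement admits a simple lower bound. Since HE4 together with P(A), P(B) and Q(B) entails Q(A), conditioning on the other premises E = {P ∈ Z, Q ∈ Z', P(A), P(B), Q(B)} gives

Pr(Q(A) / E) ≥ Pr(HE4 / E)

so that, given the observed premises, the conclusion can never be believed less than the over-hypothesis itself.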
Four typical situations, which are worth analysing, can then arise.

Situation 1: The degree of belief in HE4 is so strong that HE4 is "accepted" (meaning for instance that its probability is close to 1, as formalized by Adams' semantics (Pearl, 1988)). Then the analogical reasoning acquires a real strength and the conclusion will be accepted with the same strong degree of belief: my car is a Twingo like yours; its price is well below 10,000 €; hence yours must also cost less than 10,000 € (because the model of a car determines its price). The fact that the model of a car determines its price is almost certain, so I can be almost sure of the conclusion.

Situation 2: The degree of belief in HE4 is simply higher than the prior degree of belief in Q(A). In this case, the degree of belief in Q(A) increases to the level of the degree of belief in HE4. This corresponds to a relative confirmation (Zwirn & Zwirn, 1996), in which the belief in the conclusion is strengthened without being sufficient to lead to its acceptance (absolute confirmation). The strength of the analogical reasoning in Situation 2 is lower than in Situation 1: Bjorn and Anna are Swedish students; Bjorn speaks English fluently; so I can believe that Anna also speaks English very well (because nationality and level of studies determine fairly well one's general linguistic competences). A priori, I cannot know whether Anna, who is Swedish, speaks English fluently, and my degree of belief in it is low. On the other hand, I have fairly good confidence in the fact that nationality and level of studies determine linguistic competence. Then, when I learn that Bjorn, who is a Swedish student like Anna, speaks English fluently, I increase my belief in the fact that so does Anna.

Situation 3: HE4 is a mere possibility, totally uncertain, whose (subjective) probability is unknown. It is even possible that no degree of belief is attached to it. In this case, the analogical reasoning will have no proof value but may have a purely heuristic interest, helping to notice a new possibility worth exploring: Mars and Earth rotate around the sun not too far from it, and they have similar gravity and surface temperature; there is life on Earth; perhaps there is life on Mars (because rotating not too far around the sun and having gravity and a surface temperature similar to those of the Earth are conditions in which life appears). It is clear that nobody today knows the real conditions that are necessary and sufficient for life to appear, so the over-hypothesis here is very risky and even probably false. Nonetheless, it seems more worthwhile to explore whether Mars can harbour life than whether Pluto can, since the latter is totally different from Earth.

Situation 4: The belief in HE4 is very low, or HE4 is even considered a silly hypothesis. In this case, the conclusion of the analogical reasoning is not taken seriously and is considered silly itself. This corresponds to the cases where analogical reasoning is considered a bad reasoning mode: the tiger has a tail, as does my cat; my cat is kind, hence the tiger is kind (because the tail induces the behaviour).

There may be many intermediate situations, but the general principle is the same: the belief in Q(A) is determined by the belief in HE4. Of course, HE4 is not always made explicit by the agent in her analogical reasoning. But even if not made explicit, a hypothesis of type HE4 always has to be in the background beliefs of the agent if she follows such a reasoning scheme with some consistency. It can also happen that an adversary makes this hypothesis explicit precisely in order to show the weakness of the reasoning: "to infer that, you need this background over-hypothesis, yet it is clearly absurd or has a very low probability".

Of course, because it always relies on relative analogies, reasoning by analogy is open to revision. In particular, the over-hypothesis can be modified in its structure as well as in its degree of belief. For instance, a counter-analogy, true for A and B in a domain Z* different from Z, may relativize the conclusion of a first analogical inference, even if the belief in the over-hypothesis is high, insofar as this counter-analogy can be associated with another over-hypothesis which leads to an exception to the first one and lessens the belief in it. This is the case with the over-hypothesis "all bodies fall to the ground", which is refuted by balloons. The reason here is that some relevant causes ("hidden factors") necessary to explain the fall of bodies were ignored. Ideally, a rational agent will consider the total evidence available to her to form her belief, which will lead, directly or by discussion with other rational agents, to attributing to the over-hypothesis a belief that already anticipates the possible counter-analogies.

4. Comparison with other reasoning modes

4.1. Induction

The raw form of reasoning by analogy, leading from [P(A), P(B), Q(B)] to Q(A), is identical from a syntactical point of view to one-case induction.
Actually, both modes of reasoning transfer a property from one object to another, relying on the fact that the two objects already share another property. A property Q observed for an object of "type P" is transferred to another object of "type P", as in the standard example: this raven in front of me is black, hence the next raven I see should be black. That does not mean that reasoning by analogy is reducible to induction, but rather that the first step of any induction consists in reasoning by analogy. One-case induction is nothing else than a kind of analogical reasoning, but there are different ways to express such reasoning in ordinary language, since it can be set in different pragmatic contexts.

In one-case induction, the objects of the analogical statement are usually designated directly by the property they share according to this statement, preceded by a demonstrative or by an indication of time, place, ownership, etc.:

- This P is Q, then this other P is also Q
- These P are Q, then these other P are also Q

For instance, induction implies that since my canary is yellow, yours should be yellow too. In other kinds of reasoning by analogy, which are the most popular kinds because they seem different from one-case induction, the objects are primarily designated by their name, or by any designator independent of the properties that they are supposed to share according to the analogical statement, which are attributed to them in a sentence:

- A is P, B is P, B is Q, then A is Q
- A's are P, B's are P, B's are Q, then A's are Q

For instance, a canary has wings and is able to fly; then, since an airplane has wings too, it should be able to fly. In one-case induction, the property P shared by two objects is used as the "principal designator" (being a canary). In reasoning by analogy, the shared property is used only as a secondary qualifier; it is explicitly mentioned as the shared property of two objects which are designated through another qualifier (having wings, for a canary).

Common properties used in one-case induction are pre-existing: they often correspond to a class inside a current taxonomy. These taxonomies and these classes have been built inside the language precisely because they are well suited for maximising causal effects with other properties, hence for building over-hypotheses of type HE4. Common properties used in other analogical reasoning are usually selected at the moment when the reasoning is made. They are not used as current principal designators of any class in a usual taxonomy, and they can be fanciful. This helps to understand why one-case induction is often considered more reliable than other analogical reasoning. The basic idea is that current categories are defined in a way which takes into account precisely the relations of determination between the many properties that define each category. For example, the fact of being a canary has many other consequences; this is why the category is useful. Hypotheses of type HE4 linked to this kind of category are enough to help draw inferences from the mere fact of knowing that something is a canary.

In a nutshell, one-case induction is nothing else than reasoning by analogy, but in a normalized context where it relies on over-hypotheses that are linked to the properties used as principal designators for the objects under consideration. These over-hypotheses are well entrenched because they are linked to categories currently used inside the language.
On the other hand, the over-hypotheses used in other cases of analogical reasoning are linked to properties that are less general and hence less entrenched. Both reasoning modes are essentially of the same nature but are used in different pragmatic contexts. Of course, that does not mean that one-case induction cannot be fanciful too: since this stone is small, this other stone is small too. The fact that the designator is a current category does not imply that all the over-hypotheses that could be linked to it are relevant. Being a stone has no effect on size.

Finally, the role of principal designator can be contextual. Consider, for example, as a principal designator "being a New Yorker", to which we can attach secondary properties such as "wearing purple shorts". In general, the first will serve as a basis for classical inductive reasoning (this New Yorker is a runner, then this other one too), and interest in the second will be found only in reasoning by analogy in which the individuals concerned are designated by their names (Paul is wearing purple shorts). But if, in a basketball competition, one sees individuals wearing either purple shorts or white shorts, this property may contextually become a principal designator: as soon as it is learned that this wearer of purple shorts belongs to team A, it will be inferred that this other wearer of purple shorts also belongs to team A. These contextual situations do not contradict the preceding remarks: the particular context adds data to the situation, which makes properties that are usually subsidiary become relevant as principal designators; these properties may then enter, in this context, into meta-hypotheses of type HE4 (e.g. the team to which an athlete belongs in a competition determines the color of the shorts that he is wearing).

4.2. Case-based reasoning

Reasoning by analogy is a mode of reasoning at work in more and more applied reasoning schemes. The best example is case-based reasoning. In a given situation, an expert has to form an expectation about some outcome or to make a decision about it. In order to do so, he gathers a set of past situations which are similar to the situation at hand and observes what outcome or decision was realized. A first example concerns the effect of some innovation on a given system. Different systems are considered, differing by the characteristics of the innovation, the influence mechanisms of the innovation and the distribution of the population involved; the output is the efficiency of the innovation, measured by an aggregate index. This index is a positive one, the idea being that similar situations should lead to similar outcomes. A second example concerns the judgment associated with a judicial trial. Different cases are examined, differing by various circumstances, the mode of operation of the suspect and the personality of the suspect. The output is the penalty imposed on the suspect by the judge. This penalty is a normative one, the idea being that in similar situations the same verdict has to be applied.
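The retrieval-and-transfer step of such an expert procedure can be sketched in a few lines of code (an illustrative toy only: the feature names, the similarity measure and the cases are invented for the example, and the real systems evoked above are of course far richer):

# Toy sketch of the case-based step: retrieve the most similar past cases
# (the analogues) and transfer their outcome to the situation at hand.
# All feature names and values below are hypothetical.

def similarity(case_a, case_b, features):
    # Share of the selected features (the relevant domain of properties) on which the two cases agree.
    agreements = sum(1 for f in features if case_a.get(f) == case_b.get(f))
    return agreements / len(features)

def transfer_outcome(new_case, past_cases, features, k=1):
    # Outcome of the new case = most frequent outcome among its k nearest analogues.
    ranked = sorted(past_cases,
                    key=lambda c: similarity(new_case, c, features),
                    reverse=True)
    outcomes = [c["outcome"] for c in ranked[:k]]
    return max(set(outcomes), key=outcomes.count)

# Hypothetical judicial illustration: past cases described by circumstances,
# outcome = penalty decided by the judge.
past = [
    {"offence": "theft", "weapon": False, "recidivist": False, "outcome": "fine"},
    {"offence": "theft", "weapon": False, "recidivist": True,  "outcome": "prison"},
    {"offence": "theft", "weapon": True,  "recidivist": False, "outcome": "prison"},
]
new = {"offence": "theft", "weapon": False, "recidivist": False}
print(transfer_outcome(new, past, ["offence", "weapon", "recidivist"]))  # -> "fine"

The underlying over-hypothesis is the same as before: situations agreeing on the selected circumstances should receive the same outcome, and the value of the whole procedure depends on the degree of belief one is entitled to have in that regularity.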
More generally, the evolution of taxonomies proceeds by analogical reasoning. Analogical statements allow new objects to be added to a given class of objects. Analogical inferences allow a restructuring of the classes of objects (when all the objects of a class satisfy a new property) or the splitting of already constituted classes into new ones according to these new properties (when only some objects of the class satisfy a new property). More ambitiously, analogical reasoning is at work when scientific models are generalized in order to achieve an economy of thought. Following Walliser (1994), analogical statements allow a model to be generalized "by enlargement", i.e. by widening its domain of applicability, while analogical inferences allow a model to be generalized "by completion", i.e. by adding new variables to the model and extending its equations.

5. Some related works

5.1. Philosophical works

Reasoning by analogy has been the subject of many contributions, dating back to Aristotle. These works are well documented elsewhere, especially in Bartha (2010, 2013). The present survey relies on them in order to emphasize the resemblances and differences with our own work, expressed in our own framework. The most common intuitive theory of analogical inference states that a good analogical inference relies simply on a good analogy, associated with the fact that the two objects share many common properties. The structure of an analogical inference may then be associated with a general kind of enthymematic reasoning:

- A and B share "many" properties P1, P2, …, Pn
- B has property Q
- H0: objects sharing many properties generally share other properties
- A also has property Q

But this enthymematic reasoning is very different from ours, and it is problematic:

- It relies on an absolute notion of analogy, whose logical limits have been shown.
- Unlike HE4, H0 is a "general principle of analogy", looking like the "principle of uniformity of nature", and it has the flavor of a metaphysical assumption begging the question.
- It raises intuitive objections: the properties have at least to be "relevant", and there should be some kind of "causal" relation between one of the Pi and Q.

Several authors have tried to add structure to the analogical inference in order to make the principle H0 more precise and to list the properties which make it more robust and relevant. Hesse (1966) and Bartha (2010, 2013) are two major contributions to this job. Mary Hesse (1966) proposes an interesting tabular representation of an analogical argument, which separates the source domain and the target domain, each domain including a set of objects, properties and relations. She defines the "vertical relations" as the relations within each domain and the "horizontal relations" as the relations between the domains. Then she formulates several qualitative requirements for an analogical argument to be acceptable. Bartha (2010, 2013) proposes a synthesis and critique of her requirements, which he shows to be too restrictive in some situations. More specifically, he shows that Hesse's conditions do not depend enough on the use that will be made of the analogy in this or that specific analogical argument. Finally, Hesse's theory keeps many vague concepts of the intuitive theory of analogical reasoning, such as "causal relations in an acceptable scientific sense" or "essential properties".

Bartha himself proposes a more elaborate theory, called "the articulation model". His thesis is that, contrary to other philosophical analyses which concentrated on the horizontal relations (for instance the number of similarities), one should investigate the vertical relations more. His analysis is very rich and detailed, with many subcases, and is illustrated by several precise scientific examples. He starts by listing all "potentially relevant factors" of the analogical reasoning, without a formal definition of these factors; they may be "variables, hypotheses, conditions…".
These factors, say Fi, may be present or absent in the source and in the target domains (as such, or through "correspondences" between factors in the source and factors in the target). Q is the property, present in the source domain, that the analogical argument proposes to extend to the target domain (as such, or through a "corresponding" property Q*). The first condition for building a good analogical argument is to state a "Prior association", meaning that there is a relation between some of the Fi and Q in the source domain. Typically, it may be a "causal relationship" when the analogical argument states an empirical prediction. Bartha then proposes several further conditions for an analogical argument to be "plausible", distinguishing between a modal concept of "prima facie plausibility" and a stronger quantitative concept (strangely) called "qualitative plausibility".

Bartha's analysis could in fact be simplified and cast in our own framework in the following way:

- There are two objects (instead of vague "factors"), say the source A and the target B (this can easily be generalized to more complex situations involving t-tuples of objects). To each object one can associate properties, known or unknown, for instance P(A), Q(A), P*(B), Q*(B). P* and Q* "correspond" to P and Q in an intuitive way. The analogical argument then relies on P(A), Q(A), P*(B) and infers Q*(B).
- The "Prior association" condition states that there is a relationship between P and Q in the source domain, for instance a causal relationship known with certainty, [∀X, (P(X) → Q(X))], possibly with some non-monotonic exceptions, or with some probability Pr(Q(X) / P(X)) = x. We will focus on the last case for simplicity.
- The "Overlap" condition (required for "prima facie plausibility") states that some of the common properties have a positive effect on Q, which is the case as soon as we assume that Pr(Q(X) / P(X)) > s > ½.
- In some cases of analogical inference, P = P* and Q = Q*. This is for instance the case in the Earth / Mars / life example. For those cases, knowing that Pr(Q(X) / P(X)) > s is equivalent to knowing that Pr(Q*(X) / P*(X)) > s. This means that Bartha's criterion adds nothing in those cases.
- The only interesting cases are therefore those where P ≠ P* and Q ≠ Q*. In those cases, one does not see why [Pr(Q(X) / P(X)) > s] should imply that [Pr(Q*(X) / P*(X)) > s]. The intuition relies on the notion of "correspondence" between properties, which is not precisely defined by Bartha. As is easy to see from what has been shown in §2.3, this concept can be explained away by saying that two properties P and P* correspond if there exist one common property Π and two other ones, Z and Z*, such that Π.Z = P and Π.Z* = P*. Z and Z* may express different properties of A and B (for instance, A is a vital organ of aerial animals and B is a vital organ of aquatic animals).
- Reasoning by analogy in this context then means believing that if Pr(Ξ(X).Z(X) / Π(X).Z(X)) > s, then Pr(Ξ(X).Z*(X) / Π(X).Z*(X)) > s, where Ξ(X).Z(X) = Q(X) and Ξ(X).Z*(X) = Q*(X). Intuitively, Z and Z* should be neutral factors which do not disturb the conditional probability between Ξ(X) and Π(X). This recalls the Sure Thing Principle in decision theory. Bartha's numerous conditions for prima facie or qualitative plausibility of analogical inferences can be interpreted as intuitive conditions for accepting that neutrality.
For instance:

- The "No critical difference" condition (required for "prima facie plausibility") means that there does not exist a property within Z which would be important for the conditional probability Pr(Ξ(X).Z(X) / Π(X).Z(X)) and which is not a property within Z*.
- The "Strength of the prior association" condition states that a stronger prior association induces a stronger analogical argument. Indeed, the strength of the conclusion is higher when s is higher, but only if we find good reasons to transfer the prior association to the target domain.
- The "Counteracting causes" condition means that there exists a property R within Z with an independent negative effect on the conditional probability. The fact that Pr(Ξ(X).Z(X) / Π(X).Z(X)) > s in spite of R reinforces the intuitive confidence in the fact that Pr(Ξ(X).Z*(X) / Π(X).Z*(X)) > s, especially if R is also valid within Z* and could have lowered this confidence.

But none of those conditions can definitively ensure that the conclusion of an analogical argument is safe, with the same degree of belief as the degree of belief in the prior association. That is why, as noticed by Norton (2018), this way of warranting analogical inferences is endless: it is always possible to add a new case which would contradict the intuitive inference and which has to be covered by a new condition. The only way out of this endless process is:

- first, to reduce analogies between "corresponding properties" to simple analogies bearing on one common property, by using the method indicated in §2.3;
- second, to consider an over-hypothesis such as HE4, which encompasses all the cases in one single hypothesis while preventing any redundancy in general.

Davies & Russell (1987) proposed a clear formal analysis of analogical reasoning, which has already been commented upon, and developed the key notion of determination rules. The syntactic expression of determination rules captures many intricate intuitions of other philosophical works on analogical reasoning, and it stresses the important notion of non-redundancy. However, as has been shown, they fail to analyse precisely why there is an important difference between an over-hypothesis like HE3 and an over-hypothesis like HE4 for expressing those rules without redundancy, and to analyse the empirical process explaining how these over-hypotheses can be learnt. Bartha (2010) criticizes Davies and Russell's determination rules (supposed to reduce analogical reasoning to deduction) on another ground, with the following argument: "Scientific analogies are commonly applied to problems where we do not possess useful determination rules. In many (perhaps most) cases, researchers are not aware of all relevant factors". But this argument rests on a confusion about the role played by these rules: even when an agent does not know them, or is uncertain about them, they always play a normative role in evaluating the strength of the analogical argument. The link between the belief in these background hypotheses and the strength of the analogical argument is missing in Davies and Russell's paper, but this does not allow one to conclude that the role of these hypotheses is not universal on logical grounds.

Miller (1995) proposes a solution close to Davies and Russell's HE3, which can be translated into the present language by:

∀X, ∀Y, [O(X), P(X), Q(X), -O(Y), P(Y)] → Q(Y)

where O(X) is an extra predicate meaning that X has been observed.
Miller proves that this formula is the weakest universal proposition which entails Q(Y) in the presence of P(X), Q(X), P(Y). For him, it is the weakest form of an over-hypothesis. But the fact that it is the weakest one is not necessarily a good or required criterion of relevance, as Miller seems to believe. Indeed, if one drops O(X) and -O(Y) in Miller's proposition, it can be checked that it is logically equivalent to Davies and Russell's one: if some X satisfies both P and Q, then every object satisfying P satisfies Q, and if no X satisfies both, then every object satisfying P satisfies -Q, which is exactly the HE3 disjunction. Adding the fact that X has been observed while Y has not makes the proposition logically weaker in an uninteresting way, since the difference concerns only the cases where the premise includes -Q(B). In that case, Davies and Russell would lead to the conclusion -Q(A), while Miller would not conclude. But we are trying to explain the reasoning precisely in the cases where Q(B) holds.

In a substantial number of articles (e.g. Prade & Richard, 2011, 2012a, 2012b, 2014; Amgoud, Ouannani & Prade, 2014), Prade et al. develop a very different approach, which intends to provide a logical definition of analogy in a propositional framework. This approach focuses on the definition of 4-term formulas such as a:b :: c:d (where a, b, c, d are propositions), which are the basic structure of relational analogies. On one side, they propose a list of principles that such formulas could satisfy, not limited to those which constitute an equivalence relation. On the other side, they consider several criteria of "logical proportions", whose components are analyzed in terms of "similarity indicators" (a & b; -a & -b) and "dissimilarity indicators" (a & -b; -a & b). Their preferred formula for analogy is what they call the "Analogical Proportion", expressed by:

(AP) a:b :: c:d iff ((a & -b) ≡ (c & -d)) & ((-a & b) ≡ (-c & d))

This is supposed to mean that "a is to b what c is to d", and the authors argue that this definition satisfies the most relevant principles for representing a relational analogy in this propositional framework. This conclusion seems to conflict with the thesis of the present paper, since it may imply that, despite our previous arguments, it is possible to give a relevant logical definition of absolute analogy: no restriction to a domain is mentioned in definition (AP). However, the work of Prade et al. concerns only Boolean propositions, as very specific objects of analogy, and cannot be considered a general theory of analogy. It is presented as a transposition, in a Boolean framework, of the standard "proportional analogy":

(PA) x:y :: z:t iff x/y = z/t, where x, y, z, t are real numbers.

As relevant as this classical example of analogy may be for numbers, it cannot be taken seriously as a basis for a definition of the philosophical concept of analogy, since it cannot be applied as such to any other objects, e.g. ravens, apples or human beings. The same applies to (AP), which concerns only propositions. The misleading aspect of Prade et al.'s suggestion is that it seems that, contrary to numbers, propositions may express our beliefs "in general". But in fact, even if analogical statements are themselves propositions, they are propositions expressing beliefs about relations which hold between objects, not between propositions. It is not possible to use the definition (AP), stated for propositions, to represent this general relation between objects.
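For concreteness, the purely Boolean content of (AP) can be made explicit by enumeration; the following small script (ours, given only for illustration) lists the truth-value assignments that satisfy the proportion and shows that exactly six of the sixteen possible patterns do:

from itertools import product

def analogical_proportion(a, b, c, d):
    # (AP): a:b :: c:d  iff  (a & -b) ≡ (c & -d)  and  (-a & b) ≡ (-c & d)
    return ((a and not b) == (c and not d)) and ((not a and b) == (not c and d))

satisfying = [pattern for pattern in product([False, True], repeat=4)
              if analogical_proportion(*pattern)]
for a, b, c, d in satisfying:
    print(int(a), int(b), int(c), int(d))
# Prints the six patterns 0000, 0011, 0101, 1010, 1100, 1111:
# "a is to b what c is to d" holds only for these Boolean configurations.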
Hence, these attempts to define analogy in a propositional framework are at best interesting Boolean exercises, but they are of no help for the philosophical understanding either of the very concept of analogy or of reasoning by analogy. Polya (1945) proposed another propositional interpretation of analogy, in terms of non-monotonic reasoning. He identified some patterns of plausible reasoning, among which this one:

(PR) If a and b are analogous and a is true, then the truth of b is more credible.

In terms of non-monotonic reasoning, the implied definition, suggested in Prade & Richard (2011), is:

(NM) a ~ b iff |~ (a ≡ b)

It is easy to check that (PR) is satisfied if the inference relation |~ in (NM) is a preferential non-monotonic consequence relation, as defined in Kraus, Lehmann and Magidor (1990). However, this definition raises three problems, which show that this analysis is of little help for a philosophical analysis of analogy:

- It concerns, again, only propositions, and it is unclear how it may apply to the very general notion of analogy between objects, such as "an apple is like a pear".
- It introduces an absolute definition of analogy, (NM), which is too restrictive: a and b are analogous if and only if they are true in exactly the same "normal" possible worlds. This almost reduces analogy to identity, with the only exception of the most implausible worlds.
- It does not explain how an object A could be similar to an object B from one point of view and not from another, since the normal worlds are a fixed subset of the possible worlds.

Conclusion

We develop an analytic scheme of reasoning by analogy which is decomposed into two steps. The analogical assessment states that some objects are similar as concerns a fixed set of properties. The analogical inference states that a new property possessed by these similar objects can be transferred to another object already similar to the considered ones. But our conceptual scheme differs from existing works in several respects.

Our analysis considers that an analogical assessment is not true or false, good or bad, in an absolute way, but is relative to a point of view expressed by a domain of properties. If debates are raised about it and make it defeasible, they concern rival analogical inferences and not analogical assertions by themselves.

Our analysis expresses that analogy is an inference such that the degree of belief in the conclusion is defined coherently with the degree of belief in a background over-hypothesis supporting it. The value of this kind of reasoning cannot be established on a purely syntactical basis but is linked to externally defined beliefs, which should be made explicit within disputes about analogical reasoning. Hence, our analysis reconciles the opposite ideas that analogical reasoning is a useful method, especially in science, and that it is a fanciful mode of reasoning, especially in everyday reasoning. It may be both, in science or in everyday reasoning, while always using the same inference scheme: the plausibility of the conclusion depends only on the plausibility of the background over-hypothesis used in this scheme. This analysis departs from those which state that performing good or bad analogical inferences depends on whether or not one follows a good reasoning method, or relies on good analogies.

The background over-hypothesis used in an analogical reasoning is a meta-level hypothesis, in that it concerns upper-level beliefs about properties.
Consideration of this upper level avoids trivialising the reasoning into a redundant one, and it explains in a coherent way the possible, though multiple, origins of those background beliefs, especially their inductive origin. Our analysis finally departs from the usual analyses which consider analogical reasoning as a specific reasoning mode, different from deduction, induction and abduction. On the contrary, it relies on the idea that it is a category which includes one-case induction, the two differing only by their pragmatic contexts, one of them corresponding to the more traditional examples of one-case induction.

Further work can be done in several theoretical directions. First, one could try to build quantitative similarity indices which could be useful for establishing a similarity assessment. Second, one could analyse more closely the different ways of empirically building the kind of belief inherent in the over-hypothesis. Third, one could examine how the usual paradoxes of induction, such as Hempel's and Goodman's, apply to general analogical reasoning.

Further investigations can also be conducted in empirical domains. On the one hand, one could make precise, on historical examples, how analogy is used in the process of science, in combination with other reasoning modes, especially in the "context of discovery" as opposed to the "context of proof". On the other hand, one could examine how scientific or popular analogies are revised through time, under the pressure of new information and heated debates, and become widely accepted or definitively discarded.

References

Alchourron, C.E., Gärdenfors, P., Makinson, D. (1985): On the logic of theory change: partial meet contraction and revision functions, Journal of Symbolic Logic, 50, 510-530.
Amgoud, L., Ouannani, Y., Prade, H. (2014): Arguing by analogy - Towards a formal view, in ECAI 2014.
Aragones, E., Gilboa, I., Postlewaite, A., Schmeidler, D. (2014): Rhetoric and analogies, Research in Economics, 68, 1-10.
Bartha, P. (2010): By Parallel Reasoning: The Construction and Evaluation of Analogical Arguments, Oxford University Press.
Bartha, P. (2013): Analogy and Analogical Reasoning, in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy.
Bicchieri, C. (1988): Should a scientist abstain from metaphors?, in Klamer, M., McCloskey, D., Solow, M. (eds.): The Consequences of Economic Rhetoric, Cambridge University Press.
Bird, A., Tobin, E. (2015): Natural Kinds, in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy.
Bouveresse, J. (1999): Prodiges et vertiges de l'analogie, Raisons d'agir.
Boyer, A. (1995): Cela va sans le dire, éloge de l'enthymème, Hermès, La Revue, 15.
Davies, T.R., Russell, S.J. (1987): A logical approach to reasoning by analogy, in IJCAI-87, 264-270, Morgan Kaufmann.
Dupin, J.J., Joshua, S. (1994): Analogies et enseignement des sciences: une analogie thermique pour l'électricité, INRP.
Dupré, J. (1993): The Disorder of Things: Metaphysical Foundations of the Disunity of Science, Harvard University Press.
Gentner, D. (1983): Structure mapping: a theoretical framework for analogy, Cognitive Science.
Gentner, D., Holyoak, K.J., Kokinov, B.N. (eds.) (2001): The Analogical Mind, MIT Press.
Goodman, N. (1947): Fact, Fiction, and Forecast (Fourth Edition), Harvard University Press, 1983. First published in The Journal of Philosophy, 44, 113-128.
Guarini, M. et al. (2009): Resources for research on analogy, Informal Logic, 29(2), 84-197.
Hempel, C.G. (1965a): Aspects of scientific explanation, in Aspects of Scientific Explanation and Other Essays in the Philosophy of Science, Free Press, 331-496.
Hempel, C.G. (1965b): Inductive-statistical explanation, in Aspects of Scientific Explanation and Other Essays in the Philosophy of Science, Free Press, 331-496.
Hesse, M. (1966): Models and Analogies in Science, University of Notre Dame Press.
Hofstadter, D., Sander, E. (2013): Surfaces and Essences: Analogy as the Fuel and Fire of Thinking; French translation: L'analogie, cœur de la pensée, Odile Jacob.
Jackson, F. (1991): Conditionals, Oxford University Press.
Jarvis Thomson, J. (1971): A defense of abortion, Philosophy and Public Affairs, 1, 47-66.
Juthe, A. (2005): Argument by Analogy, Argumentation, 19, 1-27.
Keynes, J.M. (1921): A Treatise on Probability, Macmillan.
Kraus, S., Lehmann, D., Magidor, M. (1990): Nonmonotonic reasoning, preferential models and cumulative logics, Artificial Intelligence, 44, 167-208.
Lichnerowicz, A., Perroux, F., Gadoffre, G. (1980-81): Analogie et connaissance, Maloine.
Miller, D. (1995): How Little Uniformity Need an Inductive Inference Presuppose?, Festschrift in honour of J. Agassi, in Jarvie and Laor (eds.), Boston Studies in the Philosophy of Science, Kluwer.
Musgrave, A. (1989): Deductivism versus Psychologism, in Perspectives on Psychologism.
Norton, J.D. (2005): A little survey of induction, in P. Achinstein (ed.), Scientific Evidence, Johns Hopkins University Press, 9-34.
Norton, J.D. (2014): A material dissolution of the problem of induction, Synthese, 191, 671-690.
Norton, J.D. (2018): The Material Theory of Induction, in preparation.
Pearl, J. (1988): Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann.
Polya, G. (1945): How to Solve It, Princeton University Press.
Prade, H., Richard, G. (2011): Cataloguing/analogizing: a non-monotonic view, International Journal of Intelligent Systems, 26, 1176-1195.
Prade, H., Richard, G. (2012a): Homogeneous logical proportions: their uniqueness and their role in similarity-based prediction, in 13th International Conference on the Principles of Knowledge Representation and Reasoning (KR 2012), 402-412.
Prade, H., Richard, G. (2012b): Logical proportions: further investigations, in International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU 2012): Advances in Computational Intelligence, 208-218.
Prade, H., Richard, G. (2014): From analogical proportions to logical proportions, Logica Universalis, 7, 441-505.
Quine, W.V. (1969): Ontological Relativity and Other Essays, Columbia University Press.
Shaw, W.H., Ashley, L.R. (1983): Analogy and inference, Dialogue: Canadian Philosophical Review, 22, 415-432.
Voskoglou, M.Gr., Salem, A.M. (2014): Analogy-based and case-based reasoning: two sides of the same coin, International Journal of Applications of Fuzzy Sets and Artificial Intelligence, 4, 5-51.
Walliser, B. (1994): Three generalization processes for economic models, in B. Hamminga, N. de Marchi (eds.): Idealization VI: Idealization in Economics, Poznan Studies in the Philosophy of the Sciences and the Humanities, Rodopi.
Walliser, B., Zwirn, D., Zwirn, H. (2005): Abductive logic in a belief revision framework, Journal of Logic, Language and Information, 14(1).
Walton, D. (2010): Similarity, precedent and argument from analogy, Artificial Intelligence and Law, 18(3), 217-246.
Zwirn, D., Zwirn, H. (1996): Metaconfirmation, Theory and Decision, 41, 195-228.
{"url":"https://www.researchgate.net/publication/330824896_REASONING_BY_ANALOGY","timestamp":"2024-11-03T19:35:15Z","content_type":"text/html","content_length":"723678","record_id":"<urn:uuid:988dc9cb-cb71-42bb-afa4-a3081ea1d9ba>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00208.warc.gz"}
Journal of Lie Theory 32 (2022), No. 2, 383-412
Copyright Heldermann Verlag 2022

Gradings for Nilpotent Lie Algebras

Eero Hakavuori, SISSA, Trieste, Italy
Ville Kivioja, Faculty of Mathematics and Science, University of Jyväskylä, Finland
Terhi Moisala, Faculty of Mathematics and Science, University of Jyväskylä, Finland
Francesca Tripaldi, Faculty of Science, University of Bern, Switzerland

We present a constructive approach to torsion-free gradings of Lie algebras. Our main result is the computation of a maximal grading. Given a Lie algebra, using its maximal grading we enumerate all of its torsion-free gradings as well as its positive gradings. As applications, we classify gradings in low dimension, we consider the enumeration of Heintze groups, and we give methods to find bounds for non-vanishing l^{q,p} cohomology.

Keywords: Nilpotent Lie algebras, gradings, maximal gradings, positive gradings, stratifications, Carnot groups, classifications, large scale geometry, Heintze groups, l^{q,p} cohomology.

MSC: 17B70, 22E25, 17B40, 20F65, 20G20.

[Fulltext-pdf (212 KB)] for subscribers only.
{"url":"https://www.heldermann.de/JLT/JLT32/JLT322/jlt32019.htm","timestamp":"2024-11-07T17:20:24Z","content_type":"text/html","content_length":"3868","record_id":"<urn:uuid:c8d7450a-91cd-4ad6-9f8b-e56cc12d6538>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00136.warc.gz"}
Metabelian Lie powers of the natural module for a general linear group

Consider a free metabelian Lie algebra M of finite rank r over an infinite field K of prime characteristic p. Given a free generating set, M acquires a grading; its group of graded automorphisms is the general linear group GL_r(K), so each homogeneous component M_d is a finite-dimensional GL_r(K)-module. The homogeneous component M_1 of degree 1 is the natural module, and the other M_d are the metabelian Lie powers of the title.

This paper investigates the submodule structure of the M_d. In the main result, a composition series is constructed in each M_d and the isomorphism types of the composition factors are identified, both in terms of highest weights and in terms of Steinberg's twisted tensor product theorem; their dimensions are also given. It turns out that the composition factors are pairwise non-isomorphic, from which it follows that the submodule lattice is finite and distributive. By the Birkhoff representation theorem, any such lattice is explicitly recognizable from the poset of its join-irreducible elements. The poset relevant for M_d is then determined by exploiting a 1975 paper of Yu.A. Bakhturin on identical relations in metabelian Lie algebras.

Keywords: Dual Weyl module; Free metabelian Lie algebra; Infinite general linear group; Submodule lattice
{"url":"https://openresearch-repository.anu.edu.au/items/79e30d3d-3b00-4ccf-960d-1ac83443d602","timestamp":"2024-11-05T19:15:46Z","content_type":"text/html","content_length":"455707","record_id":"<urn:uuid:c8f2b50e-b0bc-4e69-b34b-b523f8fa31be>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00076.warc.gz"}
The numbers in the circles add up to the numbers in the rectangles between them. Type in the missing numbers according to the rule. You can earn a Transum Trophy for correctly completing eight Arithmagons at any one level.

You are currently working on Level 1: the basic Arithmagon, with three single-digit numbers given in the circles. The numbers in the rectangles are the sum of the two adjacent circle numbers. You have earned a trophy for this level but there are more levels for you to try!

By working through these challenges you will discover the hidden secrets of Arithmagons. You will find the connections between the numbers in the rectangles and the numbers in the circles, and in doing so develop strategies for solving the more difficult Arithmagon puzzles. This activity is suitable for pupils of a wide range of abilities. It provides purposeful numeracy practice, and the levels that are a multiple of four (Level 4, Level 8, Level 12, etc.) encourage pupils to devise efficient solving strategies. The subtraction or difference Arithmagons with only the rectangle numbers given (Levels 12, 24, 36 and 48) have an infinite number of correct solutions, and the computer will accept any one of these correct solutions.

In his pdf eBook Rich Starting Points for A Level Core Mathematics (www.risps.co.uk), Jonny Griffiths says that "there is no better way to present ideas of doing and undoing than arithmogons. They have been around a long time: 1975 is the first reference I have for them, but I daresay they have been around longer than that. One of their many marvellous aspects is that they require few words. John Mason and Sue Johnston-Wilder (Designing and Using Mathematical Tasks) give a long and fascinating study of how simple arithmogons can be used in a variety of ways."

If your pupils do not have access to computers there is a printable set of Arithmagon worksheets here:

If you like Arithmagons you might also like these activities:

- Fractionagons: same structure as Arithmagons but designed to help pupils develop their fraction arithmetic. The short web address is Transum.org/go/?to=frgons
- Algebragons: same structure as Arithmagons but designed to help pupils develop their algebra skills. The short web address is Transum.org/go/?to=algons
- Pentadd Quiz: find the five numbers which, when added or multiplied together in pairs, produce the given sums or products. The short web address is Transum.org/go/?to=pentadd

There are many more puzzles on the Transum Puzzle page.

Monday, June 2, 2014
"Don't be content with only completing the first level of this challenge, click the 'More Levels' tab to select more difficult puzzles. Try the 'Two digit numbers', 'Addition', 'Three rectangle values only' combination to produce some interesting arithmagons."

Year 4, St Olave's
Wednesday, November 14, 2018

Ben Winter, Winterteach
Wednesday, May 19, 2021
"Hi John, thanks again for all of the amazing activities on your website. I use them with my students and am really enjoying the arithmagons. They are a great way of allowing students to show their understanding of number facts and the relationship between addition and subtraction. [Transum: Great to hear that Ben. Thanks so much for the feedback.]"

Do you have any comments? It is always useful to receive feedback and helps make this free resource even more useful for those learning Mathematics anywhere in the world.

Do you know the origin of the title 'Arithmagons'?
This activity has been used in schools for many years, but who is the person we should thank for dreaming up the idea?

Still looking for a challenge? Try one of these activities:

- Suko Sujiko: interactive number-based logic puzzles similar to those featuring in daily newspapers. The short web address is Transum.org/go/?to=suko
- Satisfaction: quite a challenging number-grouping puzzle requiring a knowledge of prime, square and triangular numbers. The short web address is Transum.org/go/?to=satisfaction
- Magic Square Puzzle: find all of the possible ways of making the magic total from the numbers in this four by four magic square. The short web address is Transum.org/go/?to=magicsquarepuzzle

There are many more puzzles on the Transum Puzzle page. The solutions to this and other Transum puzzles, exercises and activities are available when you are signed in to your Transum subscription account.

Levels 1 to 48 are available, as well as a 'Choose Your Own Options' setting.

The first Arithmagon you will see is a triangle with three circles at its vertices and rectangles on its sides. The idea is to add up the two numbers in the circles at either end of a side of the triangle and type the answer into the rectangle in the middle of that side. Click the check button when you have filled in all three rectangles. If you are correct you will see one slice of a pie chart showing your progress so far. Complete eight Arithmagons to earn a Transum Trophy. This first level you should find very easy. It gets more difficult when you are not given the three circle numbers but are given the rectangle numbers instead. That's when you need to come up with a strategy.
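For readers who want a head start on those strategies, here is the algebra behind the addition Arithmagons (a spoiler, so look away if you would rather discover it yourself). If the three rectangle totals are p, q and r, where p sits between circles x and y, q between y and z, and r between z and x, then p + q + r = 2(x + y + z), and each circle value can be recovered directly:

x = (p + r - q) / 2,  y = (p + q - r) / 2,  z = (q + r - p) / 2

For example, rectangle totals 9, 11 and 12 give circles x = (9 + 12 - 11) / 2 = 5, y = (9 + 11 - 12) / 2 = 4 and z = (11 + 12 - 9) / 2 = 7.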
{"url":"https://www.transum.org/Software/Arithmagons/Default.asp?Level=1","timestamp":"2024-11-04T17:31:41Z","content_type":"text/html","content_length":"41659","record_id":"<urn:uuid:dcbcec8b-f6e9-45cb-8258-d8df429a7f8d>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00824.warc.gz"}
Mixing Hot and Cold Water Final Temperature Calculator - GEGCalculators

When mixing hot and cold water, the final temperature depends on the initial temperatures and proportions. Assuming typical specific heat capacities for water, mixing equal amounts of 70°C hot water and 40°C cold water will result in a final temperature of approximately 55°C.

Mixing Hot and Cold Water Temperature Calculator
Inputs: Mass of Hot Water (g), Initial Temperature of Hot Water (°C), Mass of Cold Water (g), Initial Temperature of Cold Water (°C). Output: Final Temperature (°C).

How do you calculate the temperature of water when mixed?
The final temperature of a mixture of water can be calculated using the principle of conservation of energy, specifically the equation for heat transfer:
Q = m * c * ΔT
• Q is the heat energy transferred.
• m is the mass of the water.
• c is the specific heat capacity of water.
• ΔT is the change in temperature.

How do you find the final temperature of two waters?
To find the final temperature when mixing two waters, you can use the heat transfer equation mentioned above. The final temperature will depend on the masses and initial temperatures of the two waters.

What happens to the temperature when you mix hot and cold water?
When you mix hot and cold water, the resulting temperature will be somewhere between the initial temperatures of the hot and cold water. It will lie closer to the initial temperature of whichever portion has the greater mass (more precisely, the greater heat capacity).

What happens when you mix water of two different temperatures?
When you mix water of two different temperatures, heat will transfer from the hotter water to the colder water until they reach a common final temperature.

What is the formula for the final temperature?
The formula for the final temperature when mixing two substances with different temperatures can be derived from the heat transfer equation:
Tf = (m1 * c1 * T1 + m2 * c2 * T2) / (m1 * c1 + m2 * c2)
• Tf is the final temperature.
• m1 and m2 are the masses of the substances.
• c1 and c2 are their respective specific heat capacities.
• T1 and T2 are their initial temperatures.

What is the temperature of a mix formula?
The formula for the final temperature of a mixture depends on the specific heat capacities and initial temperatures of the substances being mixed, as shown in the previous answer.

What should final hot water temperature be?
The final hot water temperature in a mixing scenario will depend on the initial temperatures and proportions of hot and cold water being mixed. There is no fixed "should be" temperature; it varies based on the intended use and personal preference.

What would be the final temperature of a mixture of 100 g of water at 90°C and 600 g of water at 20°C?
Assuming both waters have a specific heat capacity of approximately 4.18 J/g°C, you can use the heat transfer equation:
Tf = (m1 * c1 * T1 + m2 * c2 * T2) / (m1 * c1 + m2 * c2)
Tf = [(100 g * 4.18 J/g°C * 90°C) + (600 g * 4.18 J/g°C * 20°C)] / (100 g * 4.18 J/g°C + 600 g * 4.18 J/g°C)
Tf = (37,620 J + 50,160 J) / (418 J/°C + 2,508 J/°C)
Tf = 87,780 J / 2,926 J/°C
Tf = 30°C
So, the final temperature of the mixture would be 30°C.

What is equivalent temperature of a mixture?
The equivalent temperature of a mixture is the single temperature at which two or more substances in a mixture collectively reach thermal equilibrium after mixing.

What is a blended water temperature?
Blended water temperature is the temperature resulting from mixing hot and cold water to achieve a desired temperature for purposes such as bathing or washing.

What temperature change is expected during the mixing of two liquids?
The temperature change during the mixing of two liquids depends on their initial temperatures and specific heat capacities. The temperature may increase or decrease, or it may remain relatively constant, depending on the properties of the liquids and their proportions.

How do you find the final temperature of two objects?
To find the final temperature when mixing two objects, you can use the heat transfer equation mentioned earlier, considering the masses, specific heat capacities, and initial temperatures of the objects.

What is the temperature of a mixture of ice and water?
A mixture of ice and liquid water in equilibrium sits at the melting point of ice, 0°C (32°F), until all the ice has melted.

How do you find the final temperature using Charles's Law?
Charles's Law deals with the relationship between the volume and temperature of a gas at constant pressure. It doesn't directly apply to finding the final temperature when mixing substances. The heat transfer equation is more appropriate for such calculations.

Does mixing affect temperature?
Yes, mixing substances with different temperatures can affect the temperature of the resulting mixture. Heat will transfer from the hotter substance to the colder one until thermal equilibrium is reached.

How do you calculate mixing?
Mixing can involve various calculations depending on the substances, temperatures, and proportions involved. Generally, you calculate mixing by considering the conservation of energy and using equations related to heat transfer and specific heat capacity.

How do you make formula with hot and cold water?
To make a formula with hot and cold water, you mix the two in specific proportions to achieve the desired temperature for the formula. The exact proportions will depend on the required formula temperature and the initial temperatures of the hot and cold water.

Does turning up water heater make hot water last longer?
Turning up the water heater temperature setting will not make hot water last longer. It will increase the temperature of the hot water, but the total amount of hot water available remains the same. To increase the duration of hot water, you may need a larger water heater or more efficient insulation.

What are the OSHA commercial hot water temperature regulations?
OSHA (Occupational Safety and Health Administration) does not have specific regulations regarding commercial hot water temperature. However, hot water in commercial settings should be maintained at a safe temperature to prevent scalding or burns. Typically, a safe range is around 100 to 120°F (37 to 49°C).

Is 140 degrees too hot for a water heater?
A water heater set to 140 degrees Fahrenheit (60 degrees Celsius) is generally considered too hot for most domestic uses. It can cause scalding and burns. The recommended safe temperature for water heaters is around 120°F (49°C) to prevent such injuries.

What would be the final temperature of a mixture of 50 g of water at 20 degrees Celsius?
The final temperature of a mixture of 50 g of water at 20°C would depend on what it's mixed with. If you're mixing it with a substance at a different temperature, you would use the heat transfer equation to calculate the final temperature.
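As a worked illustration of that equation, here is a short script (a sketch only; it assumes both samples are liquid water with c ≈ 4.18 J/g°C and that no heat is lost to the surroundings). It reproduces the 100 g at 90°C plus 600 g at 20°C example given above:

def final_temperature(m_hot, t_hot, m_cold, t_cold, c=4.18):
    # Conservation of heat energy: m1*c*T1 + m2*c*T2 = (m1*c + m2*c) * Tf
    return (m_hot * c * t_hot + m_cold * c * t_cold) / (m_hot * c + m_cold * c)

print(final_temperature(100, 90, 600, 20))  # 30.0 °C, as in the worked example
print(final_temperature(50, 40, 50, 20))    # 30.0 °C: equal masses simply average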
What would be the final temperature of a mixture of 50 g of water at 20°C and 50 g of water at 40°C?
Assuming both water samples have a specific heat capacity of approximately 4.18 J/g°C, you can use the heat transfer equation as described earlier; with equal masses of the same liquid the result is simply the average of the two temperatures, namely 30°C.

What is the final temperature when mixing two liquids?
The final temperature when mixing two liquids depends on their masses, specific heat capacities, and initial temperatures. You can calculate it using the heat transfer equation mentioned earlier.

What is the final temperature of the mixture if 10 g of ice at 10°C is mixed with 10 g of water at 10°C?
Ice cannot actually be at +10°C, since it melts at 0°C. Taking the ice to be at 0°C, cooling 10 g of water from 10°C to 0°C releases only about 418 J, far less than the roughly 3,340 J needed to melt all 10 g of ice, so only a small part of the ice melts and the mixture settles at 0°C with ice remaining.

How much energy is needed to vaporize 10 grams of water at 100°C?
To vaporize 10 grams of water at 100°C, you need the heat energy required for the phase change from liquid to vapor. The heat of vaporization for water is approximately 2,260 J/g, so:
Energy = mass * heat of vaporization = 10 g * 2,260 J/g = 22,600 J

How much heat energy is transferred when 10.0 grams of water at 50°C cools to 25°C?
To calculate the heat energy transferred, you can use the heat transfer equation:
Q = m * c * ΔT
• Q is the heat energy transferred.
• m is the mass of water (10.0 g).
• c is the specific heat capacity of water (approximately 4.18 J/g°C).
• ΔT is the change in temperature (final temperature minus initial temperature).
ΔT = 25°C - 50°C = -25°C
Q = 10.0 g * 4.18 J/g°C * (-25°C) = -1,045 J
So, 1,045 J of heat energy is transferred out of the water as it cools from 50°C to 25°C.

What are the three formulas for temperature conversions?
The three commonly used temperature conversion formulas are:
1. Celsius to Fahrenheit: °F = (°C * 9/5) + 32
2. Fahrenheit to Celsius: °C = (°F - 32) * 5/9
3. Celsius to Kelvin: K = °C + 273.15

What is the formula for temperature difference?
The formula for temperature difference is simply the subtraction of one temperature from another:
ΔT = T2 - T1
• ΔT is the temperature difference.
• T2 is the final temperature.
• T1 is the initial temperature.

What is it called when two objects with different temperatures mix and eventually reach the same temperature?
When two objects with different temperatures mix and eventually reach the same temperature, it is called thermal equilibrium. At thermal equilibrium, there is no net heat flow between the objects, and they have the same final temperature.

Is it possible for the temperature of two items to change by different degrees of temperature but change by the same amount of heat energy?
Yes, it is possible for two items to change by different degrees of temperature but change by the same amount of heat energy. This can happen when the specific heat capacities of the two items are different. Specific heat capacity determines how much heat energy is required to change the temperature of a substance by a certain amount. Items with lower specific heat capacities will experience larger temperature changes for the same amount of heat energy transfer compared to items with higher specific heat capacities. Is it OK to swim in 80 degree water? Swimming in 80-degree water can be comfortable for many people, especially in warm weather. However, individual preferences and tolerance to water temperature vary. It’s essential to be aware of your own comfort and safety while swimming, and always exercise caution in colder water, as it can lead to hypothermia if you stay in for an extended period. How long can you survive in 85-degree water? Survival time in 85-degree water depends on various factors, including the individual’s physical condition, clothing, and activity level. In general, a person can survive for several hours in water at this temperature, but exhaustion and hypothermia can set in if not properly dressed or if exposed for an extended period. GEG Calculators is a comprehensive online platform that offers a wide range of calculators to cater to various needs. With over 300 calculators covering finance, health, science, mathematics, and more, GEG Calculators provides users with accurate and convenient tools for everyday calculations. The website’s user-friendly interface ensures easy navigation and accessibility, making it suitable for people from all walks of life. Whether it’s financial planning, health assessments, or educational purposes, GEG Calculators has a calculator to suit every requirement. With its reliable and up-to-date calculations, GEG Calculators has become a go-to resource for individuals, professionals, and students seeking quick and precise results for their calculations. Leave a Comment
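As a quick check of the worked numbers and conversion formulas given above, here is a small illustrative script; the function names are not from the page, and the specific heat of water is taken as 4.18 J/g°C.

    # Heat transferred when 10.0 g of water cools from 50°C to 25°C (Q = m*c*ΔT)
    m, c = 10.0, 4.18
    Q = m * c * (25 - 50)
    print(Q)                      # -1045.0 J (heat flows out of the water)

    # Common temperature conversions
    def c_to_f(c): return c * 9 / 5 + 32
    def f_to_c(f): return (f - 32) * 5 / 9
    def c_to_k(c): return c + 273.15

    print(c_to_f(49))             # about 120°F, the recommended water heater setting
    print(f_to_c(140))            # 60.0°C
    print(c_to_k(25))             # 298.15 K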
{"url":"https://gegcalculators.com/mixing-hot-and-cold-water-final-temperature-calculator/","timestamp":"2024-11-08T21:10:52Z","content_type":"text/html","content_length":"187026","record_id":"<urn:uuid:232b6eb3-4cd8-4bc2-a5da-d7c0f27ed656>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00507.warc.gz"}
Worksheet Position-time & Velocity-time Graphs Answer Key - Graphworksheets.com Worksheet Position-time & Velocity-time Graphs Answer Key Worksheet Position-time & Velocity-time Graphs Answer Key – In many areas, reading graphs can be a useful skill. They allow people to quickly compare and contrast large quantities of information. A graph of temperature data might show, for example, the time at which the temperature reached a certain temperature. Good graphs have a title at the top and properly labeled axes. They are clean and make good use of space. Graphing functions High school students can use graphing functions worksheets to help them with a variety of topics. These include identifying and evaluating function, composing, graphing and transforming functions. These exercises also include finding domains, performing arithmetic operations on functions, and identifying inverse functions. These worksheets include function tables and finding the range for a function. Many have worksheets that allow you to combine two or more functions. Functions are a special type of mathematical relationship that describes the relationship between inputs and outputs. It is useful in predicting what the future will hold and can also help to predict how things might change. Some functions can even be built on seemingly random inputs. Students must be able recognize, create, draw, and graph functions in order to use this knowledge for future Students must determine the x-intercept (or y-intercept) when graphing a linear function. An input-output table with the correct format is also required. The student can plot the graph once the input-output tables are completed. Graphing line graphs A line graph is a chart that has two axes. One axis represents the independent variable, and the other axis represents the dependent variable. The data points on x-axis can be referred to as x-axis point, while the points on y-axis can be referred to as y-axis point. You can plot the axes side-by-side or inverted as in a bargraph. In the third grade, line graphs are introduced. By the fourth grade students can move on to more complicated graphs. These graphs require more analysis and have a variable vertical scale. These graphs may also contain real-life data. They may begin at zero on the vertical scale. To create an effective graph, students will need to analyze and answer questions. The data to be plotted will require students to label the axis. They should also label the axis with appropriate increments. For example, a line graph may show the stock price changes over two weeks. The x-axis would represent the number of days, and the y-axis would represent the price of the stock over that period. Graphing bar graphs Graphing bar graphs worksheet answers provide the student with the necessary information to draw a chart. These charts are used to analyze information and make decisions. For this reason, students should familiarize themselves with the various kinds of graphs. Bar graphs can be used to illustrate changes over time. A bar graph can also be used to compare two sets of data. A double bar graph, for example, can be used to compare data from two bakeries. The data is presented on a graph with discrete values on a scale of 10. The graph is presented on a graph with discrete values on a scale of 10. Mrs. Saunders must be assisted by the student to interpret it. Bar graph worksheets contain a series of questions that students can use to practice understanding and reading data. 
These questions may include counting objects, reading and understanding bar graphs, as well as questions about how to count them. A grade three bar graph worksheet will include questions about labeling x and y axes and reading the graph. Word problems will be included in a grade four bar graph worksheet. Graphing grids Graphing grids worksheets help students understand the concept of coordinate geometry. Students can plot points in each quadrant using a coordinate grid. They can also plot functions on a grid. Graphing grids worksheet answers are provided in printable pdf files. These worksheets are a good way to practice comparing and relating coordinate pairs. Graphing worksheets usually have a single and a four-quadrant grid. Each point is connected to the previous point using a line segment. Students can use these grids to help them visualize the relationship between the points and the lines. After plotting, students can use the coordinate grid to solve equations involving more than one quadrant. These graphing worksheets are a great resource for elementary and middle school students. These graphing worksheets can be generated by a graph paper generator. Generators can produce standard graph paper that has a single quadrant coordinate grid and two single quadrant graphs. There are also four single-quadrant graphs per sheet. Gallery of Worksheet Position-time & Velocity-time Graphs Answer Key Position Time Graph To Velocity Time Graph Worksheet Velocity Time Graph Worksheet 2 5 Answer Key Worksheet 31 Position Vs Time And Velocity Vs Time Graphs Worksheet Answers Leave a Comment
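To make the graphing steps described above concrete, here is a small illustrative example of building an input-output table for a linear function and reading off its intercepts; the function y = 2x - 4 is just an assumed example, not taken from the worksheets.

    # Input-output table and intercepts for the illustrative linear function y = 2x - 4
    def f(x):
        return 2 * x - 4

    for x in range(-2, 5):
        print(f"x = {x:2d}  ->  y = {f(x):3d}")

    print("y-intercept:", f(0))      # where the line crosses the y-axis
    print("x-intercept:", 4 / 2)     # solve 2x - 4 = 0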
{"url":"https://www.graphworksheets.com/worksheet-position-time-velocity-time-graphs-answer-key/","timestamp":"2024-11-02T22:04:34Z","content_type":"text/html","content_length":"64775","record_id":"<urn:uuid:4b88e354-2ea0-4eb9-9f23-6a883863f3fb>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00818.warc.gz"}
PyTorch VAE | What is PyTorch VAE? | Examples | Definition Updated April 7, 2023 Introduction to PyTorch VAE In PyTorch, we have different types of functionality for the user, in which that vae is one of the functions that we can implement in deep learning. The vae means variational autoencoder, by using vae we can implement two different neural networks, encoder, and decoder as per our requirement. Just imagine we have a large set of the dataset and inside that each and every image consists of hundreds of pixels that mean it has hundreds of dimensions. So in deep learning sometimes we need to reduce the dimension of an image so at that time we can use vae to increase the high dimensional What is PyTorch vae? Envision that we have a huge, high-dimensional dataset. For instance, envision we have a dataset comprising thousands of pictures. Each picture consists of many pixels, so every information point has many aspects. The complex speculation expresses that genuine high-dimensional information really comprises low-dimensional information that is installed in the high-dimensional space. This implies that, while the genuine information itself may have many aspects, the hidden design of the information can be adequately depicted utilizing a couple of aspects. This is the inspiration driving dimensionality decrease methods, which attempt to take high-dimensional information and undertake it onto a lower-dimensional surface. For people who picture most things in 2D (or at times 3D), this typically implies extending the information onto a 2D surface. Instances of dimensionality decrease methods incorporate head part investigation (PCA) and t-SNE. Neural organizations are regularly utilized in the regulated learning setting, where the information comprises sets (A, B) furthermore the organization learns a capacity f: A→B. This setting applies to both relapses (where B is a ceaseless capacity of A) furthermore characterization (where B is a discrete mark for A). Notwithstanding, neural organizations have shown extensive power in the solo learning setting, where information simply consists of focuses A. There are no “objectives” or “marks” B. All things considered, the objective is to learn and comprehend the construction of the information. On account of dimensionality decrease, the objective is to track down a low-dimensional portrayal of the information. PyTorch vae The standard autoencoder can have an issue, by the way, that the dormant space can be sporadic. This implies that nearby focuses in the dormant space can create unique and inane examples over noticeable units. One answer for this issue is the presentation of the Variational Autoencoder. As the autoencoder, it is made out of two neural organization designs, encoder, and decoder. Be that as it may, there is an alteration of the encoding-unraveling process. We encode the contribution as a circulation over the dormant space, rather than thinking about it as a solitary point. This encoded conveyance is picked to be ordinary so that the encoder can be prepared to return the mean network and the covariance framework. In the subsequent advance, we test a point from that encoded circulation. After, we can decipher the inspected point and work out the remaking blunder We backpropagate the recreation mistake through the organization. Since the testing system is a discrete cycle, so it’s not persistent, we want to apply a reparameterization stunt to make the backpropagation work. In vae we also need to consider the loss function as follows. 
The loss for the VAE comprises two terms. The first is the reconstruction term, which compares the input with the corresponding reconstruction. The second is a regularization term, also called the Kullback-Leibler divergence between the distribution returned by the encoder and the standard normal distribution. This term acts as a regularizer in the latent space: it keeps the distributions returned by the encoder close to a standard normal distribution. Now let's see an example for better understanding. For implementation purposes, we need to follow these steps. Step 1: First we need to import all the required packages and modules. We will use the torch.optim and torch.nn modules from the torch package, and the datasets and transforms from the torchvision package. Step 2: After that, we need to load the dataset into a loader with the help of the DataLoader module, applying an image transformation to the downloaded dataset. Using the DataLoader module, the tensors are batched and ready to be used. Step 3: Now we need to create the autoencoder class, with the nodes and layers chosen to suit the problem statement. Step 4: Finally, we need to initialize the model, train it, and print the output. (The listing below is a plain autoencoder; the variational pieces described above are illustrated in the sketch after this article.)

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import MNIST

# Step 1 and 2: image transformation and dataset
img_tran = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.6], [0.6])])
datainfo = MNIST('./data', transform=img_tran, download=True)

# Step 3: a fully connected autoencoder for the 28 x 28 MNIST images
class autoencoder_l(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder_fun = nn.Sequential(
            nn.Linear(28 * 28, 124), nn.ReLU(),
            nn.Linear(124, 32), nn.ReLU(),
            nn.Linear(32, 10), nn.ReLU(),
            nn.Linear(10, 2))
        self.decoder_fun = nn.Sequential(
            nn.Linear(2, 10), nn.ReLU(),
            nn.Linear(10, 32), nn.ReLU(),
            nn.Linear(32, 124), nn.ReLU(),
            nn.Linear(124, 28 * 28))

    def forward(self, A):
        lat = self.encoder_fun(A)        # encode to the 2-dimensional latent space
        A = self.decoder_fun(lat)        # decode back to pixel space
        return A, lat

# Step 4: initialize the model and train it
n_ep = 8
b_s = 124
l_r = 2e-2
dataloader = DataLoader(datainfo, batch_size=b_s, shuffle=True)
model = autoencoder_l()
crit = nn.MSELoss()
opti = torch.optim.AdamW(model.parameters(), lr=l_r)

for ep in range(n_ep):
    for image, label_info in dataloader:
        image = image.view(image.size(0), -1)    # flatten each image to 784 values
        result, lat = model(image)
        loss = crit(result, image)               # reconstruction error
        opti.zero_grad()
        loss.backward()
        opti.step()
    print(f'epoch [{ep + 1}/{n_ep}], loss: {loss.item():.4f}')

In the example we first import all the required packages, then download the MNIST dataset and apply the transformation to it. After that, we write the training loop as shown; at the end of each epoch it prints the reconstruction loss. We hope from this article you learn more about the PyTorch vae. From the above article, we have learned the basic concept as well as the syntax of the PyTorch vae, and we also saw an example of it. From this article, we learned how and when we use the PyTorch vae. Recommended Articles We hope that this EDUCBA information on "PyTorch VAE" was beneficial to you. You can view EDUCBA's recommended articles for more information.
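The listing above is a plain autoencoder. As a separate illustration of the variational pieces described earlier (an encoder that returns a mean and a variance, sampling with the reparameterization trick, and a reconstruction-plus-KL loss), here is a minimal sketch; the layer sizes and names are assumptions, not part of the original article.

import torch
from torch import nn

class VAE(nn.Module):
    def __init__(self, in_dim=28 * 28, hidden=128, latent=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent)        # mean of q(z|x)
        self.to_logvar = nn.Linear(hidden, latent)    # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(), nn.Linear(hidden, in_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)          # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # reconstruction term + Kullback-Leibler term (closed form for a diagonal Gaussian)
    recon_term = nn.functional.mse_loss(recon, x, reduction='sum')
    kl_term = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + kl_term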
{"url":"https://www.educba.com/pytorch-vae/","timestamp":"2024-11-06T04:29:34Z","content_type":"text/html","content_length":"308946","record_id":"<urn:uuid:95db295c-80eb-4082-ba0d-476376094742>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00376.warc.gz"}
Quantum Information for Mathematics, Economics, and Statistics • IMSI Back to top There are many practical and theoretical challenges in the emerging area of quantum information processing, which seeks to optimally use the information embedded in the state of a quantum system to solve previously intractable computational problems and revolutionize simulation. The engineering goal is to develop scalable quantum hardware that circumvents the physical limits on the computational power of existing technologies, which are ultimately constrained by energy dissipation as the physical size of the components is reduced to the nanometer scale. In parallel with such “practical” difficulties, new theory is required to understand the limitations of quantum media and capitalize on the advantages of quantum superposition and entanglement. This includes the creation of new quantum algorithms that are targeted toward real-world problems, e.g., in finance, chemistry, and medicine; a study of the required resources to achieve a particular outcome, as well as methods to efficiently characterize such resources; and the development of novel protocols for secure quantum-enhanced communication, as well as classical ‘post-quantum encryption’ methods that are immune to quantum hacking. For all of these, quantum information theory relies on and draws inspiration from many different aspects of mathematics and theoretical computer science, including geometry, group theory, functional analysis, number theory, operator theory, probability theory, topology, complexity theory, and learning theory. Furthermore, the recent resolution of Connes’ embedding conjecture using quantum information-theoretic methods shows that ideas and results from quantum information theory can also influence research in pure mathematics. Back to top S A Scott Aaronson Computer Scienc University of Texas at Austin D A David Awschalom Prizker School of Molecular Engineering University of Chicago B D Brian DeMarco Physics University of Illinois at Urbana_Champaign P K Paul Kwiat Physics University of Illinois at Urbana_Champaign U V Umesh Vazirani EECS University of California, Berkeley Back to top G C Giuseppe Carleo EPFL (Ecole Polytechnique Fédérale de Lausanne) U C Ulysse Chabau Caltech A C Andrew Childs University of Maryland B C Bryan Clark University of Illinois at Urbana-Champaign W F William Fefferman University of Chicago M K Maria Kieferova University of Technology Sydney D M Damian Markham Centre National de la Recherche Scientifique (CNRS) R O Roman Orus Donostia International Physics Center M P Marco Pistoia JPMorgan Chase E S Edgar Solomonik University of Illinois at Urbana-Champaign J W Joel Wallman University of Waterloo A W Andreas Winter Universitat Autònoma de Barcelona H Y Henry Yuen Columbia University Back to top Monday, May 24, 2021 9:30-10:15 CDT The Power of Random Quantum Circuits Speaker: Bill Fefferman (University of Chicago) 11:00-11:45 CDT Applying quantum computing to solve problems in finance Speaker: Roman Orus (Donostia International Physics Center) 13:30-14:15 CDT Quantum state certification with phase-space measurements Speaker: Ulysse Chabaud (Institute for Quantum Information and Matter, California Institute of Technology) The use of quantum information promises many interesting applications, but these expectations can only be met with stringent levels of control of quantum devices. 
Efficient methods to ensure the correct functioning of these devices are therefore crucial for the development of quantum technologies. In this talk, I will discuss the certification of the output of quantum devices in the context of quantum information processing with infinite-dimensional Hilbert spaces, which stands as an interesting alternative to the use of finite-dimensional Hilbert spaces, notably for entanglement generation and error-correction. In that setting, quantum states are equivalently described by—and may be probed via—their phase-space representations. After introducing the beautiful mathematical formalism underlying the phase-space formulation of quantum mechanics, I will present efficient and robust methods for characterizing quantum states using phase-space measurements, a striking application being the efficient verification of Boson Sampling experiments. Tuesday, May 25, 2021 9:30-10:15 CDT Loophole-free Contextuality Inequalities Speaker: Damian Markham (Centre National de la Recherche Scientifique (CNRS)) (joint work with Shane Mansfield)As it is usually presented, the physical consequence of contextuality is that it forces one to give upon either determinism or parameter-independence. The latter ensures that measurement responsesare independent of context, i.e. any other compatible measurements that are performed jointly.These are assumptions that apply to hidden variable models, which are supposed to encompassany reasonable physical description that could underlie the empirical data. In realistic, noisyexperiments however, the validity of those assumptions can be called into question. This introducesloopholes to the physical conclusions one can draw from instance of contextuality. A number ofprevious works have addressed one or other of these loopholes. We build on this to provide a unifiedapproach to closing both loopholes by introducing appropriate correction terms to the contextualfraction measure of contextuality. 11:00-11:45 CDT Quantum Computing for Financial Use Cases Speaker: Marco Pistoia (JPMorgan Chase) Finance is widely considered to be among the first industrial sectors to benefit from Quantum Computing. This is because quantum computers are likely to be able to address many financial use cases more efficiently and accurately than classical computers. This presentation will describe some applications of Quantum Computing to finance, focusing in particular on Monte Carlo methods for Option Pricing and Risk Analysis, quantum optimization algorithms for Portfolio and Tax Optimization, and quantum algorithms for finance-related Machine Learning use cases. This presentation while also discuss the estimated quantum speedup of the algorithms identified so far, along with the quantum resources necessary to achieve usable solutions. Wednesday, May 26, 2021 9:30-10:15 CDT Quantum Advantage in Games of Incomplete Information Speaker: Andreas Winter (Universitat Autònoma de Barcelona) Competitive games of complete information famously always have Nash equilibria, but it is well-known that correlation (“advice”) can yield new equilibria, sometimes with preferable collective properties (social welfare, fairness, …). While it is known that quantum correlations in the form of entanglement do not imply further correlated equilibria, this situation changes when going to so-called Bayesian games of incomplete information, where each player has to react to a privately known type, while being ignorant about the type of the other players. 
In fact, when all players in the game share the same payoff function, this reproduces the “nonlocal games” studied in quantum mechanics since Bell, in which case the CHSH inequality separates classical from quantum correlation and quantum from no-signalling. In the talk, I will review, how classical correlations, shared quantum states and no-signalling correlations can create a hierarchy of sets of classically correlated, quantum and belief-invariant equilibria of conflict-of-interest games, all containing the set of Nash equilibria. I will show a simple and easy-to-analyze construction of such games based on quantum pseudo-telepathy games, which show the quantum advantage of larger social welfare and of fairer distribution of payoff. Based on work with V. Auletta, D. Ferraioli, A. Rai and G. Scarpa (arXiv:1605.07896), and with M. Cerda (BSc thesis, UAB 2021). 11:00-11:45 CDT The proofs perspective on MIP* = RE Speaker: Henry Yuen (Columbia University) The recently established equality of the complexity classes MIP* and RE has surprising consequences for complexity theory, mathematical physics, and functional analysis. In this talk I’ll discuss this result from the point of view of proof systems, including how interactive proofs and probabilistically checkable proofs play a central role. I’ll also discuss how MIP* = RE points to an interesting set of questions that can be categorized as “noncommutative property testing”. 13:30-14:15 CDT Improved Optimization of Variational Quantum Algorithms Speaker: Bryan Clark (University of Illinois at Urbana-Champaign) Thursday, May 27, 2021 9:30-10:15 CDT Quantum barren plateaus and a possible way out Speaker: Maria Kieferova (University of Technology Sydney) In recent years the prospects of quantum machine learning and quantum deep neural network have gained notoriety in the scientific community. By combining ideas from quantum computing with machine learning methodology, quantum neural networks promise new ways to interpret classical and quantum data sets. However, many of the proposed quantum neural network architectures exhibit a concentration of measure leading to barren plateau phenomena. In this talk, I will show that, with high probability, entanglement between the visible and hidden units can lead to exponentially vanishing gradients. To overcome the gradient decay, our work introduces a new step in the process which we call quantum generative pre-training. This is a joint work with Carlos Ortiz Marrero and Nathan 11:00-11:45 CDT Efficient quantum algorithm for dissipative nonlinear differential equations Speaker: Andrew Childs (University of Maryland) While there has been extensive previous work on efficient quantum algorithms for linear differential equations, analogous progress for nonlinear differential equations has been severely limited due to the linearity of quantum mechanics. Despite this obstacle, we develop a quantum algorithm for initial value problems described by dissipative quadratic n-dimensional ordinary differential equations. Assuming R<1, where R is a parameter characterizing the ratio of the nonlinearity to the linear dissipation, this algorithm has complexity T² poly(log T, log n, log(1/ϵ))/ϵ, where T is the evolution time and ϵ is the allowed error in the output quantum state. This is an exponential improvement over the best previous quantum algorithms, whose complexity is exponential in T. We achieve this improvement using the method of Carleman linearization, for which we give a novel convergence theorem. 
This method maps a system of nonlinear differential equations to an infinite-dimensional system of linear differential equations, which we discretize, truncate, and solve using the forward Euler method and the quantum linear system algorithm. We also provide a lower bound on the worst-case complexity of quantum algorithms for general quadratic differential equations, showing that the problem is intractable for R≥√2. Finally, we discuss potential applications of this approach to problems arising in biology as well as in fluid and plasma dynamics. Based on joint work with Jin-Peng Liu, Herman Kolden, Hari Krovi, Nuno Loureiro, and Konstantina Trivisa. 13:30-14:15 CDT Running useful algorithms on near-term quantum hardware Speaker: Joel Wallman (University of Waterloo) Quantum computers have the potential to solve problems that are beyond the reach of conventional computers. However, near-term quantum computers are plagued by chaotic errors that render the output unreliable. In this talk, I will outline how quantum algorithms can be structured and executed with a predictable error profile, allowing users to determine what level of performance is required to run a specific algorithm with a specific error tolerance. Friday, May 28, 2021 9:30-10:15 CDT Efficient quantum algorithms for variational state preparation Speaker: Giuseppe Carleo (EPFL (Ecole Polytechnique Fédérale de Lausanne)) In this seminar I will discuss efficient quantum algorithms to prepare many-qubit states using highly expressive parameterized quantum circuits. This task is central in several applications, ranging from the simulation of physical and chemical systems to general tasks in machine learning and optimization. I will start by discussing the concept of Quantum Natural Gradient [1] and its efficient implementation [2] using the Simultaneous Perturbation Stochastic Approximation. This concept is a building block of several quantum algorithms for high-dimensional optimization, quantum machine learning, and variational imaginary-time evolution. In the context of simulating the real-time evolution of interacting quantum systems, I will also discuss an efficient variational quantum algorithm named “projected – Variational Quantum Dynamics” (p-VQD) realizing an iterative, global projection of the exact time evolution onto the parameterized manifold [3]. I will conclude highlighting the deep connection of these approaches with similarly motivated classical algorithms in variational Monte Carlo literature and will also highlight possible future improvements. [1] James Stokes, Josh Izaac, Nathan Killoran, and Giuseppe Carleo, Quantum 4, 269 (2020) [2] Julien Gacon, Christa Zoufal, Giuseppe Carleo, and Stefan Woerner, arXiv:2103.09232 (2021) [3] Stefano Barison, Filippo Vicentini, and Giuseppe Carleo, arXiv:2101.04579 (2021) 11:00-11:45 CDT Tensor Optimization Algorithms and Libraries for Quantum Simulation Speaker: Edgar Solomonik (University of Illinois at Urbana-Champaign) Tensor networks and tensor decompositions enable efficient simulation of quantum systems. We describe advances in methods and software for these problems and their application to approximate modelling of quantum circuits and electronic structure in quantum chemistry. On the algorithms side, we propose schemes that use perturbative approximation and randomization to accelerate solution of quadratic optimization subproblems in common alternating optimization algorithms. 
On the software side, we introduce two Python libraries, Koala and AutoHOOT, which achieve distributed-memory parallelism via the Cyclops tensor library. Koala uses 2D tensor network states (projected entangled pair states) to approximately simulate quantum circuits or perform general time-evolution. AutoHOOT provides efficient high-order automatic differentiation for tensor optimization problems. Back to top Efficient quantum algorithm for dissipative nonlinear differential equations Andrew Childs May 27, 2021
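As a purely classical illustration of the problem class in the Childs abstract above (a dissipative quadratic ODE integrated with the forward Euler method), here is a small sketch. The equation, the parameter values, and the identification of R with a*u0 are assumptions for display only; this is of course not the quantum algorithm itself.

    import numpy as np

    # Dissipative quadratic ODE du/dt = -u + a*u**2, forward Euler discretization.
    # Here R ~ a*u0 plays the role of the nonlinearity-to-dissipation ratio.
    a, u0, dt, steps = 0.5, 1.0, 0.01, 500
    u = u0
    for _ in range(steps):
        u = u + dt * (-u + a * u**2)
    print(f"u(t={steps*dt:.1f}) = {u:.4f}")   # the solution decays toward 0 when R < 1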
{"url":"https://www.imsi.institute/activities/quantum-information-mathematics-economics-statistics/","timestamp":"2024-11-04T07:37:57Z","content_type":"text/html","content_length":"200686","record_id":"<urn:uuid:630df7ba-7d5a-4068-bc8f-11393258d416>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00460.warc.gz"}
On Bayesian Estimation of Loss and Risk Functions
{"url":"http://sjams.org/article/10.11648/j.sjams.20210903.11","timestamp":"2024-11-09T19:07:58Z","content_type":"text/html","content_length":"77046","record_id":"<urn:uuid:d405dec7-d251-409a-8ab3-768d4da41683>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00130.warc.gz"}
PEMDAS—Do you understand math? Understanding PEMDAS: The Order of Operations in Mathematics Mathematics is a field that thrives on rules and structures. One such set of rules that almost every student learns early on is the order of operations, often remembered by the acronym PEMDAS. It is a vital guide to ensure that mathematical expressions are evaluated in a consistent manner. In this blog post, we will delve into the meaning and significance of PEMDAS, examine examples, and explore how this rule enables us to maintain uniformity in mathematical calculations. What is PEMDAS? PEMDAS stands for Parentheses, Exponents, Multiplication and Division (from left to right), and Addition and Subtraction (from left to right). This mnemonic helps us remember the sequence in which to perform operations when evaluating mathematical expressions. - Parentheses: Perform all calculations inside parentheses first. - Exponents: Next, handle all exponents (or powers). - Multiplication and Division: Multiply and divide in the order that they appear, from left to right. - Addition and Subtraction: Finally, add and subtract in the order that they appear, from left to right. Importance of PEMDAS
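One way to see why the convention matters is to evaluate the same expression with and without it. A minimal sketch (the expression is just an example):

    # Evaluate 3 + 4 * (2 + 1) ** 2 step by step, following PEMDAS.
    expression = "3 + 4 * (2 + 1) ** 2"

    # Parentheses first: (2 + 1) -> 3
    # Exponents next:    3 ** 2  -> 9
    # Multiplication:    4 * 9   -> 36
    # Addition:          3 + 36  -> 39
    print(eval(expression))   # Python follows the same precedence rules: 39

    # Evaluating strictly left to right instead would give
    # ((3 + 4) * (2 + 1)) ** 2 = 441, a very different answer,
    # which is exactly why a shared order of operations is needed.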
{"url":"https://k-andrew.medium.com/pemdas-do-you-understand-math-ba009a5adb4e?source=author_recirc-----88baf4cee068----3---------------------c8b22006_4e51_4dde_8de5_87200d76183a-------","timestamp":"2024-11-03T16:29:36Z","content_type":"text/html","content_length":"89316","record_id":"<urn:uuid:6866e9d2-fd1e-4c75-8b08-0cb0da4c1af6>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00210.warc.gz"}
Gradians to Degrees Converter ⇅ Switch toDegrees to Gradians Converter How to use this Gradians to Degrees Converter 🤔 Follow these steps to convert given angle from the units of Gradians to the units of Degrees. 1. Enter the input Gradians value in the text field. 2. The calculator converts the given Gradians into Degrees in realtime ⌚ using the conversion formula, and displays under the Degrees label. You do not need to click any button. If the input changes, Degrees value is re-calculated, just like that. 3. You may copy the resulting Degrees value using the Copy button. 4. To view a detailed step by step calculation of the conversion, click on the View Calculation button. 5. You can also reset the input by clicking on button present below the input field. What is the Formula to convert Gradians to Degrees? The formula to convert given angle from Gradians to Degrees is: Angle[(Degrees)] = Angle[(Gradians)] × 9 / 10 Substitute the given value of angle in gradians, i.e., Angle[(Gradians)] in the above formula and simplify the right-hand side value. The resulting value is the angle in degrees, i.e., Angle Calculation will be done after you enter a valid input. Consider that a precision engineering tool adjusts by 100 gradians. Convert this angle from gradians to Degrees. The angle in gradians is: Angle[(Gradians)] = 100 The formula to convert angle from gradians to degrees is: Angle[(Degrees)] = Angle[(Gradians)] × 9 / 10 Substitute given weight Angle[(Gradians)] = 100 in the above formula. Angle[(Degrees)] = 100 × 9 / 10 Angle[(Degrees)] = 90 Final Answer: Therefore, 100 gon is equal to 90 °. The angle is 90 °, in degrees. Consider that a civil engineer designs a slope with an angle of 90 gradians. Convert this angle from gradians to Degrees. The angle in gradians is: Angle[(Gradians)] = 90 The formula to convert angle from gradians to degrees is: Angle[(Degrees)] = Angle[(Gradians)] × 9 / 10 Substitute given weight Angle[(Gradians)] = 90 in the above formula. Angle[(Degrees)] = 90 × 9 / 10 Angle[(Degrees)] = 81 Final Answer: Therefore, 90 gon is equal to 81 °. The angle is 81 °, in degrees. Gradians to Degrees Conversion Table The following table gives some of the most used conversions from Gradians to Degrees. Gradians (gon) Degrees (°) 0 gon 0 ° 1 gon 0.9 ° 10 gon 9 ° 45 gon 40.5 ° 90 gon 81 ° 180 gon 162 ° 360 gon 324 ° 1000 gon 900 ° Gradians, also known as grads or gon, are a unit of angular measurement where a full circle is divided into 400 gradians. This unit is particularly useful in fields such as surveying and civil engineering, especially in some European countries. One gradian is equivalent to 0.9 degrees, making it convenient for calculating right angles and dividing circles into decimal fractions. Degrees are a widely used unit of angular measurement, especially in geometry, trigonometry, and everyday applications. A full circle is divided into 360 degrees, with each degree further divided into 60 minutes and each minute into 60 seconds. Degrees offer an intuitive way to express angles, and they are prevalent in fields ranging from navigation to astronomy, as well as in common day-to-day measurements. Frequently Asked Questions (FAQs) 1. How do I convert gradians to degrees? To convert gradians to degrees, multiply the number of gradians by 0.9, since one gradian equals 0.9 degrees. For example, 200 gradians multiplied by 0.9 equals 180 degrees. 2. What is the formula for converting gradians to degrees? The formula is: \( \text{degrees} = \text{gradians} \times 0.9 \). 3. 
How many degrees are in a gradian? There are 0.9 degrees in one gradian. 4. Is 100 gradians equal to 90 degrees? Yes, 100 gradians is equal to 90 degrees because 100 × 0.9 = 90. 5. How do I convert degrees to gradians? To convert degrees to gradians, divide the number of degrees by 0.9. For example, 180 degrees divided by 0.9 equals 200 gradians. 6. Why do we multiply by 0.9 to convert gradians to degrees? Because a full circle is 400 gradians or 360 degrees, so each gradian is equivalent to 0.9 degrees (360° ÷ 400 = 0.9° per gradian). 7. How many degrees are there in 50 gradians? 50 gradians multiplied by 0.9 equals 45 degrees. 8. What is the difference between degrees and gradians? Degrees and gradians are both units for measuring angles. A full circle is 360 degrees or 400 gradians. Therefore, gradians divide a circle into 400 equal parts, making calculations involving right angles and percentages simpler.
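The conversion formulas above are easy to wrap in small helper functions. This is only an illustrative sketch; the function names are not part of the converter page.

    def gradians_to_degrees(gon):
        return gon * 0.9          # 360 degrees / 400 gradians = 0.9 degrees per gradian

    def degrees_to_gradians(deg):
        return deg / 0.9

    print(gradians_to_degrees(100))   # 90.0
    print(gradians_to_degrees(90))    # 81.0
    print(degrees_to_gradians(180))   # 200.0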
{"url":"https://convertonline.org/unit/?convert=gradians-degrees","timestamp":"2024-11-09T21:11:01Z","content_type":"text/html","content_length":"81258","record_id":"<urn:uuid:c01c851a-68ab-4e65-8a99-9e65088f3516>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00726.warc.gz"}
Return a value based on multiple cells I want to check a series of cells and return a value of one cell based upon the others not being blank. If I have columns: Original Current Updated Actual Most Accurate I want to populate "Most Accurate" with the value in Actual. If Actual is blank then I want to pull "Updated" cell value If Updated is blank then pull value in CURRENT into "Most Accurate" If Current is blank then pull value in ORIGINAL into "Most Accurate" If Original is blank the return "NA" The end goal copy the value that is in "Most Accurate" and link it to another sheet for a dashboard reporting. Best Answer • Try this: =IF(Actual@row <> "", Actual@row, IF(Current@row <> "", Current@row, IF(Original@row <> "", Original@row, "N/A"))) • Hello @MelissaSan This would be simple to use a nested IF statement. The function depends on where the data is coming from; whether you can reference cells directly or use a lookup function to pull the values from another sheet. Also, the return value will depend on the order in which we analyze each situation. Following your post, I assume the order in which you want each situation analyzed would be: 1. Actual, 2. Updated, 3. Current, 4. Original • Thanks! I have a nested IF but I can't get it by the 2nd lookup: =IF(ISBLANK(Actual@row), Current@row, IF(ISBLANK(Current@row), Original@row, IF(ISBLANK(Original@row), "NA", Actual@row))) I should have shown my actual sheet and formula in my first post, sorry! So in the last case I would want it to show "Melissa Test Value" And if THAT is blank (so all Original, Current, Actual are blank), then I would show NA. • Try this: =IF(Actual@row <> "", Actual@row, IF(Current@row <> "", Current@row, IF(Original@row <> "", Original@row, "N/A"))) Help Article Resources
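The accepted formula implements a simple "first non-blank wins" rule with the precedence Actual, then Current, then Original. For readers more comfortable with code than with Smartsheet functions, the same logic looks like this in Python (purely illustrative; an extra column such as Updated can be added to the tuple in the same order):

    def most_accurate(actual, current, original):
        # Return the first non-blank value, mirroring the nested IF above.
        for value in (actual, current, original):
            if value not in (None, ""):
                return value
        return "N/A"

    print(most_accurate("", "", "Melissa Test Value"))   # -> Melissa Test Value
    print(most_accurate("", "", ""))                     # -> N/A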
{"url":"https://community.smartsheet.com/discussion/110165/return-a-value-based-on-multiple-cells","timestamp":"2024-11-05T02:59:44Z","content_type":"text/html","content_length":"428624","record_id":"<urn:uuid:537fa2b6-1e56-4432-becd-f48837eabffb>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00124.warc.gz"}
Fractions with the Same Numerator | sofatutor.com Fractions with the Same Numerator Fractions with the Same Numerator Basics on the topic Fractions with the Same Numerator Comparing Fractions with the Same Numerator Fractions with the same numerator have the same number of shaded parts, but may have different denominators. As such, we will need to compare them closely. When the whole is the same, a smaller number in the denominator means the pieces will be larger since there is less to divide between. Comparing Fractions with the Same Numerator – Example Let’s explore how to compare fractions with the same numerator. We can use fraction bars to compare the values of fractions with the same numerator. Let's use the fractions one-half and one-third as examples. Both fractions come from an equal-sized whole. They also both have the same numerator: one. However, their denominators are different. Now we can explain how to compare fractions with the same numerator but different denominators. Draw a fraction bar to represent the first fraction. Next, shade the bar to represent one-half. Then, draw a fraction bar of identical size to represent the second fraction. Now, shade the bar to represent one-third. Finally, compare the shaded parts of both models. One-half has a denominator that is smaller than one-third, but as we can see the whole is divided into just two parts so the pieces are actually larger. When comparing fractions with the same numerator and the same whole, the one with the smaller denominator is actually larger. Comparing Fractions with the Same Numerator – Summary of Steps How Can I Compare Two Fractions with the Same Numerator? To compare the size of two fractions with the same whole and the same numerator, you need to follow the steps listed in the chart below. Step # What to do 1 Draw and shade a fraction bar to represent the first fraction. 2 Draw and shade a fraction bar of identical size to represent the second fraction. 3 Compare the shaded parts of both bars. If two fractions with the same whole have the same numerator, 4 but different denominators, the fraction with the smaller denominator is actually larger. Comparing Fractions with the Same Numerator – Activities Have you practiced yet? At the end of the video, you can also find comparing fractions with the same numerator worksheets and exercises for third grade. Transcript Fractions with the Same Numerator Axel and Tank are training Tank’s new dogfish, Sparky. Using the same-sized treats, they each break it into equal parts and feed him the same number of parts, but Sparky always goes to Axel instead of Tank! They think they are feeding Sparky the same amount, but they will need to learn about "Fractions with the Same Numerator" to determine what is going on! Fractions with the same numerator have the same number of shaded parts, but may have different denominators, so we need to compare them closely. When the whole is the same, a smaller number in the denominator means the pieces will be larger since there is less to divide between. To prove this, we can use fraction bars to compare values. Let's use the fractions one half and one third as examples. The whole for both is the same size. They both have the same numerator: one. However, their denominators are different. To compare, draw a fraction bar to represent the first fraction. Next, shade the bar to represent one-half. Then, draw a fraction bar of identical size to represent the second fraction. Now, shade the bar to represent one-third. 
Finally, compare the shaded parts of both models. One-half has a denominator that is smaller than one-third, but the whole is divided into just two parts so the pieces are actually larger. When comparing fractions with the same numerator and whole, the one with the smaller denominator is actually larger. Let's try this again using Axel and Tank's dogfish treats. Axel has two-fourths of a treat for Sparky. Tank has two sixths. Let's compare by drawing a fraction bar to represent Axel's fraction. Next, shade the bar to show two-fourths. Then, draw a fraction bar of identical size to represent Tank's fraction. Now, shade the bar to show two-sixths. Finally, compare the shaded parts of both bars. Two-fourths has a denominator that is smaller than two-sixths, but the whole is divided into just four parts so the pieces are actually larger. Again, when comparing fractions with the same numerator and whole, the one with the smaller denominator is actually larger so Sparky swims to Axel! Let's try it once more! This time Axel has three-thirds of a treat for Sparky. Tank has three-fifths. Pause the video and predict who has the larger fraction to give Sparky. Let's check our work! First, draw and shade a fraction bar to represent Axel's fraction, three-thirds. Then, draw and shade a fraction bar to represent Tank's fraction, three-fifths. Finally, compare the shaded parts of both models. The shaded value of three-thirds is greater than three-fifths, so Sparky swims to Axel again! Now we know why Sparky always swims to Axel instead of Tank! Before they finish training for the day, let's remember: To compare the size of two fractions with the same whole and the same numerator: first, draw and shade a fraction bar to represent the first fraction. Then, draw and shade a fraction bar of identical size to represent the second fraction. Finally, compare the shaded parts of both bars. If two fractions with the same whole have the same numerator but different denominators, the fraction with the smaller denominator is actually larger. "I'm making certain that Sparkly swims to me this time!" Fractions with the Same Numerator exercise Would you like to apply the knowledge you’ve learned? You can review and practice it with the tasks for the video Fractions with the Same Numerator. • Which fraction is bigger? Look at the image of Axel and Tank's fish treat. Who has the smaller pieces? This treat has been broken into six pieces, the denominator is 6. Is each piece of the treat that is $\frac1 6$, larger or smaller than the pieces that are in thirds? The top number in a fraction is the numerator and the bottom number is the denominator. When the whole is the same, a smaller number in the denominator means the fraction will be larger. For example, even though both $\frac1 2$ and $\frac1 3$ have the same numerator, the smaller fraction is $\mathbf{\frac{1}{3}}$. This is because the larger the denominator, the smaller the fraction. • Treats for Sparky. Look at the denominators. Does the larger fraction have a bigger or a smaller denominator when the numerators are the same? Look at the size of these fish treats and the size of the denominator. What do you notice? Try drawing fraction bars of identical size to compare the fractions. The biggest piece will be from the food that is cut into the fewest pieces. Therefore the pieces that have been cut into three parts, where each one is $\mathbf{\frac{1}{3}}$, will be the • Comparing fractions of sandwiches. 
Look at how many parts the sandwich has been cut into - this is the denominator. The larger the denominator, the smaller the piece. Look at these two sandwiches. One has been cut into thirds, one has been cut into fifths. Compare a piece of each sandwich. Which would be larger? The order of the parts of the sandwiches from largest denominator to smallest: □ The sandwich cut into eight parts: $\mathbf{\frac{1}{8}}$ □ The sandwich cut into six parts: $\mathbf{\frac{1}{6}}$ □ The sandwich cut into four parts: $\mathbf{\frac{1}{4}}$ □ The sandwich cut into three parts: $\mathbf{\frac{1}{3}}$ • Compare the fractions with the same numerator. Look at the denominator. What does a larger denominator mean when the numerators are the same? When the numerators are the same, the larger the denominator, the smaller the part. Begin by finding the smallest fraction for Monday. This will be the fraction with the largest denominator. On Friday Sparky gets the most food, so this will be the fraction with the smallest denominator. These are the correct pairs: Monday = $\frac3 9$ Tuesday = $\frac3 7$ Wednesday = $\frac3 5$ Thursday = $\frac3 4$ Friday = $\frac3 3$ Each day Sparky always gets three parts (the numerator), since the numerator is the same each time, we only need to compare the denominators. The larger the denominator, the smaller the fraction. So we order the pieces from Monday to Friday, with the fraction that has the largest denominator first, and the fraction with the smallest denominator last, as this piece will be the biggest. • Help Sparky choose the bigger piece. Look at the denominator. When the numerators are the same, the larger the denominator, the smaller the fraction. In these two fractions: $\frac1 3$ and $\frac1 2$, which is the bigger fraction? Look at its denominator. $\frac1 4$ is larger than $\frac1 6$. The symbol to show this is $\frac1 4$ > $\frac1 6$. When both pieces of food are broken up, the one that is broken into more pieces, means that each piece ends up smaller. • Compare the fractions. Try drawing fraction bars to compare the two fractions. Remember that when the numerators are the same, we only need to look at the denominators to compare the fractions. When the numerators are the same, the larger the denominator, the smaller the fraction. □ $\frac2 8$ > $\mathbf{\frac{2}{9}}$ □ $\frac{4}{11}$ > $\frac{4}{12}$ □ $\frac{7}{10}$ > $\mathbf{\frac{7}{12}}$ More videos in this topic Comparing Fractions
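The rule from the lesson (with equal numerators, the smaller denominator gives the larger fraction) can also be checked numerically, for example with Python's fractions module; the pairs below are the ones used in the video.

    from fractions import Fraction

    pairs = [(Fraction(1, 2), Fraction(1, 3)),
             (Fraction(2, 4), Fraction(2, 6)),
             (Fraction(3, 3), Fraction(3, 5))]

    for a, b in pairs:
        bigger = a if a > b else b
        print(f"{a} vs {b}: the larger fraction is {bigger}")
    # With equal numerators, the fraction with the smaller denominator is larger.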
{"url":"https://us.sofatutor.com/math/videos/fractions-with-the-same-numerator","timestamp":"2024-11-12T22:14:43Z","content_type":"text/html","content_length":"155901","record_id":"<urn:uuid:1e7bb6ad-aec2-454b-86a7-15807031cbc7>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00888.warc.gz"}
Quantum Geometry, Exclusion Statistics, and the Geometry of "Flux Attachment" in 2D Landau levels Duncan Haldane talks about Quantum Geometry, Exclusion Statistics, and the Geometry of "Flux Attachment" in 2D Landau levels. The degenerate partially-filled 2D Landau level is a remarkable environment in which kinetic energy is replaced by "quantum geometry" (or an uncertainty principle) that quantises the space occupied by the electrons quite differently from the atomic-scale quantisation by a periodic arrangement of atoms. In this arena, when the short-range part of the Coulomb interaction dominates, it can lead to "flux attachment", where a particle (or cluster of particles) exclusively occupies a quantised region of space. This principle underlies both the incompressible fractional quantum Hall fluids and the composite fermion Fermi liquid states that occur in such systems.
{"url":"https://www.podcasts.ox.ac.uk/quantum-geometry-exclusion-statistics-and-geometry-flux-attachment-2d-landau-levels?video=1","timestamp":"2024-11-05T07:38:36Z","content_type":"text/html","content_length":"31617","record_id":"<urn:uuid:82c26713-f236-4eec-a94f-49e5a89f30d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00781.warc.gz"}
±è°ü¼® 2024-09-15 08:35:16, Á¶È¸¼ö : 211 - Download #1 : BC_3b.jpg (316.2 KB), Download : 0 3.1.4 Cosmic Neutrino Background The most weekly interacting particles of the Standard Model are neutrinos. We therefore expect them to decouple first from the thermal plasma. How this produce the cosmic neutrino background (𝐶𝜈𝛣) will be shown in the following. Neutrino decoupling Neutrinos were coupled to the thermal bath through weak interaction process like (3.57) 𝜈[𝑒] + 𝜈̄[𝑒] ⟷ 𝑒^+ + 𝑒^-, 𝑒^- + 𝜈̄[𝑒] ⟷ 𝑒^- + 𝜈̄[𝑒]. The interaction rate(/particle) is 𝛤 ¡Õ 𝑛𝜎∣𝑣∣, where 𝑛 is the number density of the target particles, 𝜎 is the cross section, and 𝑣 is the relative velocity (which is approximately 𝑣 ≈ 𝑐 = 1). By dimensional analysis, we infer that the cross section for weak scale interaction is 𝜎 ≈ 𝐺[𝐹]^2𝛵^2, where 𝐺[𝐹] ≈ 1.2 ¡¿ 10^-5 GeV^-2 is Fermi's constant. Taking the number density to be 𝑛 ≈ 𝛵^3, the interaction rate becomes (3.58) 𝛤 = 𝑛𝜎∣𝑣∣ ≈ 𝐺[𝐹]^2𝛵^5 As the temperature decrease, the interaction rate drops much more rapidly than the Hubble rate 𝐻 ≈ 𝛵^2/𝑀[𝑃𝐼]: (3.59) 𝛤/𝐻 ≈ (𝛵/1 MeV)^3. We conclude that neutrinos decouple around 1 MeV. (more accurately 0.8 MeV) After decoupling, the neutrinos more freely along geodesics and preserve the relativistic Fermi-Dirac distribution (even after they become non-relativistic at later time). In Section 2.2.1, we showed that the physical momentum of free-streaming particles scales as 𝑝 ¡ð𝑎^-1. It's convenient to define the time-independent combination 𝑞 ¡Õ 𝑎𝑝, so that the neutrino number density is (3.60) 𝑛[𝜈] ¡ð𝑎^3 ¡ò 𝑑^3𝑞 1/[exp(𝑞/𝑎𝛵[𝜈]) + 1]. After decoupling, particle number conservation requires 𝑛[𝜈] ¡ð𝑎^3, which is only consistent with (3.60) if the neutrino temperature evolves as 𝛵[𝜈] ¡ð𝑎^-1. As long as the photon temperature 𝛵[𝛾] scales in the same way, we still have 𝛵[𝜈] = 𝛵[𝛾]. However, particle annihilations will cause a deviation from 𝛵[𝛾] ¡ð𝑎^-1 in the photon temperature. Electron-positron annihilation Shortly after the neutrinos decouple, the temperature drops below the electron mass, so that electrons and positrons can annihilate into photons. 𝑒^+ + 𝑒^- ¡æ 𝛾 + 𝛾. The energy density and entropy of electrons and positrons are transferred to the photons, but not to the decoupled neutrinos. The photons are thus "heated" (the photon temperature decreases more slowly) relative to the neutrinos (see Fig. 3.3). To quantify this effect, we consider the change in the effective number of degree of freedom in entropy. If we neglect neutrinos and other decoupled species,^8 then we have (3.61) 𝑔[*𝑆] = { 2 + 7/8 ¡¿ 4 = 11/2 𝛵 ≳ 𝑚[𝑒]; 2 𝛵 < 𝑚[𝑒]. The annihilation of electrons and positrons occurs on a timescale of 𝛼^2/𝑚[𝑒] ~ 10^-18 s (where 𝛼 is the fine-structure constant), which is much less than the age of the universe (~1 s) at the time. This means that the 𝑒^¡¾-𝛾 plasma evolves quasi-adiabatically into the 𝛾-only plasma. Entropy is therefore conserved during the process. Taking 𝑔[*𝑆](𝑎𝛵[𝛾])^3 to remain constant, we find that 𝑎𝛵[𝛾] increases after electron-positron annihilation by a factor (11/4)^1/3, while 𝑎𝛵[𝜈] remains the same. This means that after 𝑒^+𝑒^- annihilation, the neutrino temperature is slightly lower than the photon temperature. (3.62) 𝛵[𝜈] = (4/11)^1/3 𝛵[𝛾]. 
[T[ν] ≈ 0.714 T[γ]]

For T ≪ m[e], the effective numbers of relativistic species (in energy density and entropy) therefore are

(3.63) g[*] = 2 + 7/8 × 2 N[eff] (4/11)^4/3 = 3.36,
(3.64) g[*S] = 2 + 7/8 × 2 N[eff] (4/11) = 3.94,

where N[eff] is the effective number of neutrino species in the universe. If neutrino decoupling had been instantaneous, we would simply have N[eff] = 3. However, neutrino decoupling was not quite complete when e^+e^- annihilation began, so some of the energy and entropy did leak to the neutrinos. Taking this into account^9 raises the effective number of neutrinos to N[eff] = 3.046.^10 Using this value in (3.63) explains the final value of g[*](T) in Fig. 3.2.

Cosmic neutrino background
The relation (3.62) holds until the present. The cosmic neutrino background therefore has a slightly lower temperature, T[ν,0] = 1.95 K, compared with the CMB temperature T[γ,0] = 2.73 K. The number density of neutrinos (per flavor) is

(3.65) n[ν] ≈ 3/4 × 4/11 n[γ].

Using (3.24) we see that this corresponds to 112 neutrinos cm^-3 per flavor. The present energy density of neutrinos depends on whether the neutrinos are relativistic or non-relativistic today. It used to be believed that neutrinos were massless, in which case

(3.66) ρ[ν] = 7/8 N[eff] (4/11)^4/3 ρ[γ]  ⇒  Ω[ν]h^2 ≈ 1.7 × 10^-5  (m[ν] = 0),  [h = H[0]/(100 km s^-1 Mpc^-1)]

However, neutrino oscillation experiments have shown that neutrinos do have a mass. The minimum sum of the neutrino masses is Σ m[ν,i] > 0.06 eV. Massive neutrinos behave as radiation-like particles in the early universe (for m[ν] < 0.2 eV, neutrinos are relativistic at recombination) and as matter-like particles in the late universe (see Fig. 3.4). In Problem 3.4, it will be shown that the energy density of massive neutrinos, ρ[ν] = Σ m[ν,i] n[ν,i], corresponds to

(3.67) Ω[ν]h^2 ≈ Σ m[ν,i]/(94 eV).  [h = H[0]/(100 km s^-1 Mpc^-1)]

By demanding that Ω[ν] < 1, a cosmological upper bound can be placed on the sum of the neutrino masses, Σ m[ν,i] < 15 eV (using h ≈ 0.7). Massive neutrinos also affect the late-time expansion rate, which is constrained by CMB and BAO (baryon acoustic oscillation) measurements. The current Planck constraint is Σ m[ν,i] < 0.13 eV, which implies Ω[ν] < 0.003. Future observations promise to be sensitive enough to measure the neutrino masses.

3.1.5 Cosmic Microwave Background

An important event in the history of the early universe is the formation of the first atoms and the associated decoupling of photons (see Fig. 3.5). At temperatures above about 1 eV the universe still consisted of a plasma of free electrons and nuclei. Photons were tightly coupled to the electrons via Thomson scattering, and the electrons in turn interacted strongly with protons via Coulomb scattering. There was very little neutral hydrogen. Below about 0.3 eV the electrons and nuclei combined to form neutral atoms and the density of free electrons decreased sharply. The photon mean free path grew rapidly and became longer than the Hubble length, H^-1. Around 0.25 eV, the photons decoupled from the matter and the universe became transparent. Today, these photons are observed as the cosmic microwave background. Key events in the formation of the CMB are summarized in Table 3.3. We will derive these facts below.

Chemical equilibrium
During recombination, the number of each particle species wasn't fixed: hydrogen atoms were formed, while the number of free electrons and protons decreased. In thermodynamics we describe such a situation with a chemical potential. Consider the generic reaction 1 + 2 ⟷ 3 + 4.
Each particle species has a chemical potential μ. The second law of thermodynamics implies that particles flow to the side of the reaction where the total chemical potential is lower. Chemical equilibrium is reached when the sum of the chemical potentials on each side is equal, in which case the rates of the forward and reverse reactions are equal. So

(3.68) μ[1] + μ[2] = μ[3] + μ[4].

• There is no chemical potential for photons, because photon number is not conserved (e.g. double Compton scattering, e^- + γ ⟷ e^- + γ + γ, happens in equilibrium at high temperatures). Sometimes this is expressed as (3.69) μ[γ] = 0, but, more accurately, the concept of a chemical potential for photons doesn't exist.

• If the chemical potential of a particle X is μ[X], then the chemical potential of the corresponding antiparticle X̄ is (3.70) μ[X̄] = -μ[X]. To see this, just consider particle-antiparticle annihilation, X + X̄ ⟷ γ + γ, and use that μ[γ] = 0.

The equilibrium assumption will be sufficient to describe the onset of recombination, but will not capture the correct dynamics shortly thereafter (such as the freeze-out of electrons). We will revisit these non-equilibrium aspects of recombination in Section 3.2.5.

Hydrogen recombination
Recombination proceeds in two stages: the formation of helium atoms is followed by that of hydrogen atoms. We will assume that the universe was filled only with free electrons, protons and photons. Over 90% (by number) of the nuclei are protons, so this is a reasonable approximation to reality. Moreover, helium recombination is completed before hydrogen recombination, so that the two events can be treated separately. The formation of hydrogen atoms occurs via the reaction

e^- + p^+ ⟷ H + γ.

Initially, this reaction keeps the particles in equilibrium, and since T < m[i], i = {e, p, H}, we have the following equilibrium abundances

(3.71) n[i]^eq = g[i] (m[i]T/2π)^3/2 exp[(μ[i] - m[i])/T],

where μ[p] + μ[e] = μ[H] (recall that μ[γ] = 0). To remove the dependence on the chemical potentials, we consider the following ratio

(3.72) (n[H]/n[e]n[p])[eq] = g[H]/(g[e]g[p]) (m[H]/(m[e]m[p]) 2π/T)^3/2 e^{(m[p] + m[e] - m[H])/T}.

In the prefactor, we can use m[H] ≈ m[p], but in the exponential the small difference is crucial: it is the ionization energy of hydrogen

(3.73) E[I] ≡ m[p] + m[e] - m[H] = 13.6 eV.

The numbers of internal degrees of freedom are g[p] = g[e] = 2 and g[H] = 4.^11 As far as we know, the universe is not electrically charged, so we have n[e] = n[p]. Equation (3.72) then becomes

(3.74) (n[H]/n[e]^2)[eq] = (2π/(m[e]T))^3/2 e^{E[I]/T}.

It is convenient to describe the process of recombination in terms of the free electron fraction

(3.75) X[e] ≡ n[e]/(n[p] + n[H]) = n[e]/(n[e] + n[H]).

A fully ionized universe corresponds to X[e] = 1, while a universe of only neutral atoms has X[e] = 0. Our goal is to understand how X[e] evolves. If we neglect the small number of helium atoms, then the denominator in (3.75) can be approximated by the baryon density

(3.76) n[b] = η n[γ] = η × 2ζ(3)/π^2 T^3,  [cf. (3.24): n[γ,0] = 2ζ(3)/π^2 T[0]^3]

where η is the baryon-to-photon ratio. We can then write

(3.77) (1 - X[e])/X[e]^2 = (n[H]/n[e]^2) n[b],

and substituting (3.74), we arrive at the so-called Saha equation

(3.78) [(1 - X[e])/X[e]^2][eq] = 2ζ(3)/π^2 η (2πT/m[e])^3/2 e^{E[I]/T}.

The solution to this equation is

(3.79) X[e] = [-1 ± √(1 + 4f)]/(2f),*  with  f(T,η) = 2ζ(3)/π^2 η (2πT/m[e])^3/2 e^{E[I]/T},

*[correction: from [-1 + √(1 + 4f)]/2f to [-1 ± √(1 + 4f)]/2f]

which is shown in Fig.
3.6 as a function of temperature (or, equivalently, redshift). Let us define the recombination temperature T[rec] as the temperature at which X[e] = 0.5 in (3.78).^12 For η ≈ 6 × 10^-10, we get

(3.80) T[rec] ≈ 0.32 eV ≈ 3760 K.

The reason why the recombination temperature is significantly below the binding energy of hydrogen, T[rec] ≪ E[I] = 13.6 eV, is that there are many photons for each hydrogen atom. Even when T < E[I], the high-energy tail of the photon distribution contains photons with energy E[γ] > E[I], which can ionize the hydrogen atoms. Concretely, although the mean photon energy is <E[γ]> ≈ 2.7 T, one in 500 photons has E[γ] > 10 T, one in 3 × 10^6 has E[γ] > 20 T, and one in 3 × 10^10 has E[γ] > 30 T. Since there are over 10^9 photons per baryon, rare high-energy photons are still present in sufficient numbers unless the temperature drops far below the binding energy. Using T[rec] = T[0](1 + z[rec]), with T[0] = 2.73 K, gives z[rec] ≈ 1380.^13 However, we will see in Section 3.2.5 that the details of recombination are more complex, and recombination is delayed relative to the Saha prediction, with X[e] = 0.5 only being reached at

(3.81) z[rec] ≈ 1270,  t[rec] ≈ 290 000 yrs.

Since matter-radiation equality is at z[eq] ≈ 3400, we conclude that recombination occurred in the matter-dominated era. Of course, it was not an instantaneous process, as seen from Fig. 3.6. It took about Δt ≈ 70 000 yrs (or Δz ≈ 180*) for the ionization fraction to drop from X[e] = 0.9 to X[e] = 0.1. *[correction: from Δz ≈ 80 to Δz ≈ 180]

Photon decoupling
At early times, photons are strongly coupled to the primordial plasma through their interaction with the free electrons, e^- + γ ⟷ e^- + γ, with the interaction rate given by Γ[γ] ≈ n[e] σ[T], where σ[T] ≈ 2 × 10^-3 MeV^-2 is the Thomson cross section. At a = 10^-5 (prior to matter-radiation equality), the rate of photon scattering is Γ[γ] ≈ 5.0 × 10^-6 s^-1, or about three times per week. This interaction rate was much larger than the expansion rate at that time (H ≈ 2 × 10^-10 s^-1), so that electrons and photons were in equilibrium. Since Γ[γ] ∝ n[e], the interaction rate decreases as the density of free electrons drops during recombination. At some point, this rate becomes smaller than the expansion rate and the photons decouple. We define the approximate moment of photon decoupling by Γ[γ](T[dec]) ≈ H(T[dec]). Using

(3.82) Γ[γ](T[dec]) = n[b] X[e](T[dec]) σ[T] = 2ζ(3)/π^2 η X[e](T[dec]) σ[T] T[dec]^3,
(3.83) H(T[dec]) = H[0] √Ω[m] (T[dec]/T[0])^3/2,

we get

(3.84) X[e](T[dec]) T[dec]^3/2 ≈ π^2/(2ζ(3)) H[0]√Ω[m]/(η σ[T] T[0]^3/2).

Using the Saha equation for X[e](T[dec]) we find T[dec] ≈ 0.27 eV. In the more precise treatment of Section 3.2.5, we find that decoupling occurs at a slightly lower temperature,

(3.85-86) T[dec] ≈ 0.25 eV ≈ 2970 K,  z[dec] ≈ 1090,  t[dec] ≈ 370 000 yrs.

After decoupling, the photons stream freely through the universe. Note that the ionization fraction decreases significantly between recombination and decoupling: X[e](T[rec]) ≈ 0.5 → X[e](T[dec]) ≈ 0.001. This shows that a large degree of neutrality is necessary before the universe becomes transparent to photons.

Exercise 3.4
Imagine that recombination did not occur, so that X[e] = 1. At what redshift would the CMB photons now decouple?

[Solution] From the definition of photon decoupling, Γ[γ](T[dec]) ≈ H(T[dec]), we have (3.84)

(a) X[e](T[dec]) T[dec]^3/2 ≈ π^2/(2ζ(3)) H[0]√Ω[m]/(η σ[T] T[0]^3/2),
and with X[e] ≡ 1,

(b) T[dec] = [π^2/(2ζ(3)) H[0]√Ω[m]/(η σ[T] T[0]^3/2)]^2/3.

To evaluate this we first write all quantities in natural units, using

(c) c = 3 × 10^8 m s^-1 ≡ 1,  ℏc = 2 × 10^-7 eV m ≡ 1.

For the Hubble constant we use

H[0] ≈ 70 km s^-1 Mpc^-1 = (70/3) × 10^-5 × (3.1 × 10^22 m)^-1 = (70/3) × 10^-5 × 2 × 10^-7/(3.1 × 10^22) eV ≈ 1.5 × 10^-33 eV.

The photon temperature today is T[0] ≈ 2.725 K, or 2.348 × 10^-4 eV. The Thomson cross section is [cf. Wikipedia, Thomson scattering]

σ[T] = 6.65 × 10^-29 m^2 = 6.65 × 10^-29/(2 × 10^-7)^2 eV^-2 = 1.66 × 10^-15 eV^-2.

Substituting these results, together with π^2/(2ζ(3)) ≈ 4.105 and Ω[m] = 0.315 [cf. Wikipedia, Lambda-CDM model], into the equation gives

(d) T[dec] ≈ [4.105 × (1.5 × 10^-33 × √0.315)/{6 × 10^-10 × (1.66 × 10^-15) × (2.3 × 10^-4)^3/2}]^2/3 eV ≈ 9.76 × 10^-3 eV.

The redshift of decoupling is, using T[dec] = T[0](1 + z[dec]) [cf. Wikipedia, Cosmic microwave background: T = 2.725 K × (1 + z)],

(e) z[dec] = T[dec]/T[0] - 1 = 9.76 × 10^-3/(2.348 × 10^-4) - 1 ≈ 41. ▮

The scattering of photons off electrons essentially stops at photon decoupling. To define the precise moment of last-scattering, we have to consider the probability of photon scattering. Let dt be a small time interval around the time t. The probability that a photon will scatter during this time is Γ[γ](t) dt, and the integrated probability between the time t and t[0] > t is

(3.87) τ(t) = ∫[t]^[t0] Γ[γ](t') dt'.

This probability is also called the optical depth. Taking t[0] to be the present time, the moment of last-scattering is defined by τ(t[*]) ≡ 1. To a good approximation, last-scattering coincides with photon decoupling, t[*] ≈ t[dec]. However, the optical depth is sensitive to the evolution of the free electron density at the end of recombination, which isn't captured well by the equilibrium treatment of this section. A precise evaluation of t[*] must therefore await our non-equilibrium analysis of recombination in Section 3.2.5. When we observe the CMB, we are detecting photons from this surface of last-scattering (see Fig. 3.7). Given the age of the universe, and taking into account the expansion of the universe, the distance between us and the spherical last-scattering surface today is 42 billion light-years. Of course, last-scattering is a probabilistic concept (not all photons experienced their last scattering event at the same time), so there is some thickness to the last-scattering surface.

Blackbody spectrum
The CMB is often presented as key evidence that the early universe began in a state of thermal equilibrium. Before decoupling, the number density of photons with frequency between f and f + df was

(3.88) n(f,T) df = 2/c^3 4πf^2/(e^{hf/k[B]T} - 1) df,

where factors of c and k[B] were restored for clarity. This frequency distribution is called the blackbody spectrum and is characteristic of objects in thermal equilibrium. After decoupling, the photons propagate freely, with their frequencies redshifting as f(t) ∝ a(t)^-1 and their number density decreasing as a(t)^-3. The spectrum therefore maintains its blackbody form as long as we take the temperature to scale as T ∝ a(t)^-1. The relic radiation thus encodes the early equilibrium phase of the hot Big Bang. CMB experiments observe the so-called spectral radiation intensity, I[f], which is the flux of energy per unit area per unit frequency. Let us see how this is related to the spectrum in (3.88). We first pick a specific direction and consider photons traveling in a solid angle δΩ around this direction.
In a given time interval δt, these photons move through a volume δV = (cδt)^3 δΩ and cross a cap of area δA = (cδt)^2 δΩ. The number of photons in this volume is

(3.89) δN = n(f)df/(4π) δV = 2/c^3 f^2 df/(e^{hf/k[B]T} - 1) (cδt)^3 δΩ,

and the number of photons crossing the surface per unit area and per unit time is

(3.90) δN/(δA δt) = 2/c^2 f^2 df/(e^{hf/k[B]T} - 1).

Since each photon has energy hf, the flux of energy across the surface (per unit frequency) is

(3.91) I[f] = 2h/c^2 f^3/(e^{hf/k[B]T} - 1).

Figure 3.8 shows a measurement of the CMB frequency spectrum by the FIRAS instrument on the COBE satellite. What you are seeing is the most perfect blackbody ever observed in nature, proving that the early universe indeed started in a state of thermal equilibrium.

^8 Obviously, entropy is separately conserved for the thermal bath and for the decoupled species.
^9 For the precise value, we should consider that the neutrino spectrum after decoupling deviates slightly from the Fermi-Dirac distribution. The spectral distortion arises because the energy dependence of the weak interaction causes neutrinos in the high-energy tail to interact more strongly.
^10 The Planck constraint is N[eff] = 2.99 ± 0.17 (compared with 3.046), so this leaves room for discovering new physics beyond the Standard Model.
^11 The spins of the electron and proton in a hydrogen atom can be aligned or anti-aligned, giving one singlet state and one triplet state, so g[H] = 1 + 3 = 4.
^12 The choice of X[e] = 0.5 looks arbitrary. However, since X[e] is exponentially sensitive to T, the result does not change much if we change this criterion.
^13 It is useful to compare this to the case of helium recombination, which proceeds in two stages. First, He^2+ captures one e^- to create He^+; this process occurs in equilibrium around z ≈ 6000. Then He^+ captures a second e^- to become neutral helium, He; this is slower than predicted by Saha equilibrium and occurs around z ≈ 2000. This means that helium recombination doesn't have a big effect on hydrogen recombination or on the predictions for the CMB, since the universe was still optically thick after the completion of helium recombination.

Problem 3.4 Massive neutrinos
At least two of the three neutrino species in the Standard Model must have small masses. We will explore the cosmological consequences of this neutrino mass.

1. Let us assume that the neutrino mass is small enough that the neutrinos are relativistic at decoupling. Show that the energy density after decoupling is

(a) ρ[ν] = T[ν]^4/π^2 ∫[0]^∞ dξ ξ^2 √(ξ^2 + m[ν]^2/T[ν]^2)/(e^ξ + 1),

where T[ν] is the neutrino temperature.

[Solution] The contribution of a neutrino flavor with mass m[ν] to the energy density is

(b) ρ[ν] = g/(2π)^3 ∫ d^3p f(p,T) E(p), with g[ν] = 2:
ρ[ν] = 2/(2π^2) ∫[0]^∞ dp p^2 √(p^2 + m^2)/[e^{√(p^2 + m^2)/T} + 1]  (+ sign for fermions).

Since the neutrinos are relativistic when they decouple, m[ν]/T[ν] ≈ 0 in the frozen-in distribution function, so

(c) ρ[ν] = 1/π^2 ∫[0]^∞ dp p^2 √(p^2 + m^2)/(e^{p/T[ν]} + 1).

With ξ ≡ p/T[ν]:

ρ[ν] = 1/π^2 ∫[0]^∞ d(ξT[ν]) (ξT[ν])^2 √((ξT[ν])^2 + m^2)/(e^ξ + 1)  ⇒

(d) ρ[ν] = T[ν]^4/π^2 ∫[0]^∞ dξ ξ^2 √(ξ^2 + m[ν]^2/T[ν]^2)/(e^ξ + 1). ▮

2. By considering a series expansion for small m[ν]/T[ν], show that

(e) ρ[ν] ≈ ρ[ν0] (1 + 5/(7π^2) m[ν]^2/T[ν]^2),

where ρ[ν0] is the energy density of massless neutrinos.

[Solution] For massless neutrinos, the energy density formula derived above with m[ν] = 0 gives

(f) ρ[ν0] = T[ν]^4/π^2 ∫[0]^∞ dξ ξ^3/(e^ξ + 1) = T[ν]^4/π^2 × 7π^4/120 = 7π^2/120 T[ν]^4.
Since m[ν]/T[ν] is small, we use the Taylor expansion √(ξ^2 + m[ν]^2/T[ν]^2) ≈ ξ + 1/(2ξ) (m[ν]/T[ν])^2:

(g) ρ[ν] ≈ T[ν]^4/π^2 ∫[0]^∞ dξ ξ^2/(e^ξ + 1) [ξ + 1/(2ξ) (m[ν]/T[ν])^2] = ρ[ν0] + T[ν]^2 m[ν]^2/(2π^2) ∫[0]^∞ dξ ξ/(e^ξ + 1) = ρ[ν0] + T[ν]^2 m[ν]^2/(2π^2) × π^2/12,

(h) ρ[ν] ≈ ρ[ν0] + T[ν]^2 m[ν]^2/24 = ρ[ν0](1 + 5/(7π^2) m[ν]^2/T[ν]^2). ▮

3. If ρ[ν] is significantly larger than ρ[ν0] at recombination, then the mass of the neutrinos affects the CMB anisotropies. What is the smallest neutrino mass that is observable in the CMB?

[Solution] This requires 5/(7π^2) m[ν]^2/T[ν]^2 ≳ 1, so

(i) m[ν] > √(7π^2/5) T[ν,rec] = √(7π^2/5) (4/11)^1/3 T[γ,rec],

where we have used the relation between the neutrino and photon temperatures after electron-positron annihilation. With T[γ,rec] ≈ 0.32 eV,

(j) m[ν] > √(7π^2/5) (4/11)^1/3 × 0.32 eV ≈ 0.85 eV. ▮

If the neutrino mass is larger than the present photon temperature, T[γ,0] ≈ 0.235 meV, then these neutrinos will be non-relativistic today.

4. Estimate the redshift at which the neutrinos become non-relativistic.

[Solution] Neutrinos become non-relativistic when their temperature falls below their mass, at T[ν,n-r] ~ m[ν]. Recall that after decoupling their temperature evolves as T[ν] ∝ a^-1 and 1 + z = a^-1, so

(k) T[ν,n-r]/T[ν,0] = 1 + z[n-r]  ⇒  z[n-r] = T[ν,n-r]/T[ν,0] - 1 ≈ T[ν,n-r]/(1.95 K) - 1 ≈ m[ν]/[(4/11)^1/3 T[γ,0]] - 1 ≈ m[ν]/(0.714 × 0.2348 meV) - 1 ≈ m[ν]/(0.17 meV) - 1. ▮

5. Compute the number density of these neutrinos today.

[Solution] When the neutrinos decoupled, they were in the relativistic limit, so we can use the equation n = 3/4 ζ(3)/π^2 g T^3 (for fermions); therefore

(l) n[ν] = 3ζ(3)/(4π^2) g[ν] T[dec]^3.

Since the neutrinos are decoupled, their temperature redshifts simply as T[ν] ∝ a^-1 (see Fig. 3.3), so we can write

(m) n[ν,0] = 3ζ(3)/(4π^2) g[ν] T[ν,0]^3,

where we let the neutrinos evolve freely along geodesics. The photon number density today is given by relativistic thermodynamics,

(n) n[γ,0] = ζ(3)/π^2 g[γ] T[γ,0]^3.

Now we can compute the ratio of the two number densities using T[ν] = (4/11)^1/3 T[γ]:

(o) n[ν,0]/n[γ,0] = 3/4 g[ν]/g[γ] (T[ν,0]/T[γ,0])^3 = 3/4 × 2/2 × 4/11 = 3/11. ▮

6. Show that their contribution to the energy density in the universe today is

(p) Ω[ν]h^2 ≈ m[ν]/(94 eV).

Use the lower bound on the sum of the neutrino masses from oscillation experiments, Σ m[ν] > 0.06 eV, to derive a bound on the total neutrino density. How does this compare to the cosmological bound Ω[ν]h^2 < 0.001?

[Solution] Since the neutrinos are non-relativistic today, their energy density is the same as their mass density, so

(q) ρ[ν,0] = m[ν] n[ν,0] = 3/11 m[ν] n[γ,0].

The fractional energy density in such a neutrino is, using T[γ,0] ≈ 0.2348 meV and Ω[γ]h^2 ≈ 2.47 × 10^-5,

(r) Ω[ν]h^2 = ρ[ν,0]/ρ[γ,0] Ω[γ]h^2 = 3/11 m[ν] n[γ,0]/ρ[γ,0] Ω[γ]h^2 = 3/11 m[ν] (2ζ(3)/π^2 T[γ,0]^3)/(π^2/15 T[γ,0]^4) Ω[γ]h^2 = 90ζ(3)/(11π^4) × m[ν]/(0.2348 meV) × 2.47 × 10^-5 ≈ m[ν]/(94 eV). ▮

If Σ m[ν] > 0.06 eV, we get Ω[ν]h^2 > 0.0006, which is not far from the cosmological upper bound, Ω[ν]h^2 < 0.001. ▮

7. Discuss qualitatively why a much larger mass would still be compatible with the standard cosmology.

[Solution] If m[ν] ≫ T[dec], then the neutrinos were already non-relativistic at decoupling, and the relativistic estimate used above for their relic abundance no longer applies. As we can see in Fig. 3.4, at late times the energy density of massive neutrinos starts to dominate over that of photons; in this regime the neutrino relic density has to be computed differently, so a much larger mass is not automatically excluded by the bound derived above. ▮
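As a quick numerical cross-check of the 94 eV figure in (p) and (r), the prefactor can be evaluated in a few lines of R; this is only a sketch, and the constants are the ones quoted in the text above:

zeta3 <- 1.20206                        # Riemann zeta(3)
T_gamma0 <- 2.348e-4                    # photon temperature today, in eV
Omega_gamma_h2 <- 2.47e-5               # photon density parameter times h^2
prefactor <- 90 * zeta3 / (11 * pi^4)   # coefficient 90*zeta(3)/(11*pi^4) from (r)
T_gamma0 / (prefactor * Omega_gamma_h2) # ~ 94, i.e. Omega_nu h^2 ~ sum(m_nu) / (94 eV)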
{"url":"http://www.architectnetwork.co.kr/bbs/zboard.php?id=AT_cosmos&page=1&sn1=&divpage=1&sn=off&ss=on&sc=on&select_arrange=headnum&desc=asc&no=166","timestamp":"2024-11-11T23:21:08Z","content_type":"text/html","content_length":"226367","record_id":"<urn:uuid:1631e403-285f-422b-b91b-77cc156c3d02>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00737.warc.gz"}
A composite circle criterion

The paper deals with the classical problem of absolute stability in a SISO Lur'e system. The circle criterion is applied to different overlapping sectors and then, for all the functions in the 'union' sector, a set of constraints is defined such that they make the closed-loop system globally asymptotically stable. The conclusion is that, in every sector that can be covered by subsectors to which the circle criterion can be applied, it is possible to define a class of nonlinearities which solves the absolute stability problem.

Publication series: Proceedings of the IEEE Conference on Decision and Control, ISSN (Print) 0743-1546, ISSN (Electronic) 2576-2370.
Event: 46th IEEE Conference on Decision and Control 2007 (CDC), New Orleans, LA, United States, 12/12/07 to 12/14/07.
{"url":"https://experts.umn.edu/en/publications/a-composite-circle-criterion","timestamp":"2024-11-05T13:13:38Z","content_type":"text/html","content_length":"50707","record_id":"<urn:uuid:97db594c-bcc8-4383-88b6-641b7f4b3a92>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00770.warc.gz"}
How do you do a Bonferroni test?
Applying the Bonferroni correction, you'd divide P = 0.05 by the number of tests (25) to get the Bonferroni critical value, so a test would have to have P < 0.002 (that is, 0.05/25) to be significant. Under that criterion, only the test for total calories is significant.

What does the Bonferroni procedure test?
The Bonferroni test is a statistical test used to reduce the chance of a false positive. In particular, Bonferroni designed an adjustment to prevent data from incorrectly appearing to be statistically significant.

What is the proper way to apply the multiple comparison test?
The classic approach to solving a multiple comparison problem involves controlling the FWER. A threshold value of less than 0.05, which is conventionally used, can be set. If H0 is true for all tests, the probability of obtaining a significant result from this new, lower critical value is 0.05.

How do you report a hypothesis test result?
Every statistical test that you report should relate directly to a hypothesis. Begin the results section by restating each hypothesis, then state whether your results supported it, then give the data and statistics that allowed you to draw this conclusion.

How do you report an F test?
First report the between-groups degrees of freedom, then report the within-groups degrees of freedom (separated by a comma). After that, report the F statistic (rounded off to two decimal places) and the significance level. For example: There was a significant main effect for treatment, F(1, 145) = 5.43, p = .

How do you report descriptive statistics?
• Add a table of the raw data in the appendix.
• Include a table with the appropriate descriptive statistics, e.g. the mean, mode, median, and standard deviation.
• Identify the level of the data.
• Include a graph.
• Give an explanation of your statistic in a short paragraph.

What are the four types of descriptive statistics?
There are four major types of descriptive statistics:
• Measures of frequency: count, percent, frequency.
• Measures of central tendency: mean, median, and mode.
• Measures of dispersion or variation: range, variance, standard deviation.
• Measures of position: percentile ranks, quartile ranks.

What should be included in descriptive statistics?
Descriptive statistics are broken down into measures of central tendency and measures of variability (spread). Measures of central tendency include the mean, median and mode, while measures of variability include standard deviation, variance, minimum and maximum variables, and kurtosis and skewness.

How do you summarize descriptive statistics?
Interpret the key results for descriptive statistics:
• Step 1: Describe the size of your sample.
• Step 2: Describe the center of your data.
• Step 3: Describe the spread of your data.
• Step 4: Assess the shape and spread of your data distribution.
• Then compare data from different groups.

What is an example of descriptive statistics in a research study?
Each descriptive statistic reduces lots of data into a simpler summary. For instance, consider a simple number used to summarize how well a batter is performing in baseball, the batting average. This single number is simply the number of hits divided by the number of times at bat (reported to three significant digits).

How do you interpret skewness?
The rule of thumb seems to be:
• If the skewness is between -0.5 and 0.5, the data are fairly symmetrical.
• If the skewness is between -1 and -0.5 or between 0.5 and 1, the data are moderately skewed.
• If the skewness is less than -1 or greater than 1, the data are highly skewed.

How do you interpret mean and standard deviation?
More precisely, it is a measure of the average distance between the values of the data in the set and the mean. A low standard deviation indicates that the data points tend to be very close to the mean; a high standard deviation indicates that the data points are spread out over a large range of values.

What is the relation between mean and standard deviation?
Standard deviation and mean are both terms used in statistics. The standard deviation is a statistic that measures the distance of the data from the mean; it is calculated as the square root of the variance, which is determined from the deviation of each data point relative to the mean. The standard deviation is a widely used tool for measuring volatility.

How do you compare mean and standard deviation?
Standard deviation is an important measure of spread or dispersion. It tells us how far, on average, the results are from the mean. Therefore if the standard deviation is small, this tells us that the results are close to the mean, whereas if the standard deviation is large, the results are more spread out.

What does the mean and standard deviation tell us about data?
Standard deviation tells you how spread out the data is. It is a measure of how far each observed value is from the mean. In a normal distribution, about 95% of values will be within 2 standard deviations of the mean.

How do you explain normal distribution?
Normal distribution, also known as the Gaussian distribution, is a probability distribution that is symmetric about the mean, showing that data near the mean are more frequent in occurrence than data far from the mean. In graph form, a normal distribution appears as a bell curve.

What does the mean tell you about a data set?
The mean (average) of a data set is found by adding all numbers in the data set and then dividing by the number of values in the set. The median is the middle value when a data set is ordered from least to greatest. The mode is the number that occurs most often in a data set.

How do you know if the standard deviation is high or low?
A low standard deviation means the data are clustered around the mean, and a high standard deviation indicates the data are more spread out. A standard deviation close to zero indicates that the data points are close to the mean, whereas a large standard deviation indicates that they are spread far from the mean.

What does a standard deviation of 1 mean?
A normal distribution with a mean of 0 and a standard deviation of 1 is called a standard normal distribution. Areas of the normal distribution are often represented by tables of the standard normal distribution.

What number is a low standard deviation?
For an approximate answer, please estimate your coefficient of variation (CV = standard deviation / mean). As a rule of thumb, a CV >= 1 indicates a relatively high variation, while a CV below 1 can be considered low.
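To make the Bonferroni recipe described at the top of this page concrete, here is a small R illustration; the p-values are invented for the example, and p.adjust is the base R helper that performs the same adjustment:

p_raw <- c(0.001, 0.02, 0.04, 0.30)      # hypothetical raw p-values from four tests
alpha <- 0.05
p_raw < alpha / length(p_raw)            # compare each p-value to the Bonferroni critical value 0.05/4
p.adjust(p_raw, method = "bonferroni")   # equivalently, multiply the p-values by the number of tests (capped at 1)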
{"url":"https://www.bloodraynebetrayal.com/suzanna-escobar/how-to-write-better/how-do-you-do-a-bonferroni-test/","timestamp":"2024-11-02T14:41:01Z","content_type":"text/html","content_length":"102732","record_id":"<urn:uuid:d6164417-2388-463e-9263-77dd33e45371>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00114.warc.gz"}
Generalizing about Populations from Random Samples In previous grades, you learned how to use random samples to make inferences about a population. For example, suppose there are 325 eighth grade students at Sam Houston Middle School. A total of 50 eighth grade students were surveyed at random and 20 of those students said that they supported using clear backpacks for increased school safety. You can use this information to set up and solve a proportion to estimate how many of the 325 total eighth grade students would be likely to support using clear backpacks. Before continuing in this lesson, it is important to understand two key terms about sampling. As the Venn diagram shows, the population is the entire group that is being studied. The most accurate way to describe the entire population is to study each member of the population, such as the way the United States census counts the nation’s population every 10 years. However, that is very expensive and takes a lot of time. Statisticians have shown that you can usually get good results by studying a random sample that is selected from within the population. But how can you make sure that the sample really is random? How well does a randomly selected sample represent the population? In this lesson, you will use simulations to investigate these questions. Comparing Random Samples from a Population In previous grades, you used data from a random sample to make inferences and draw conclusions about a population. In this section, you will investigate different simulations that could be used to generate a random sample. Consider the following problem. There are 100 pieces of fruit in Sidney’s Fruit Stand. Of those, there are 20 packages of blueberries, 40 pears, 25 apples, and 15 pineapples. You can use color tiles to represent the fruit at Sidney’s Fruit Stand. • Use 20 blue tiles to represent the blueberries. • Use 40 green tiles to represent the pears. • Use 25 red tiles to represent the apples. • Use 15 yellow tiles to represent the pineapples. In the color tile simulation, you can place 100 color tiles in a bag and randomly select 20 of them. Each color tile selected would represent one fruit, and you could record the results. Another simulation could use a random number generator to select 20 random numbers between one and one hundred. • Let 1 to 20 (20 total numbers) represent the blueberries. • Let 21 to 60 (40 total numbers) represent the pears. • Let 61 to 85 (25 total numbers) represent the apples. • Let 86 to 100 (15 total numbers) represent the pineapples. Click the image below to use a random number generator to simulate the random sample. Enter the appropriate numbers in the boxes, then click Randomize Now! to generate the set of 20 random numbers. • How many sets of numbers do you want to generate? 1 • How many numbers per set? 20 • Number range: From 1 to 100 • Do you wish each number in a set to remain unique? Yes • Do you wish to sort the numbers that are generated? Yes: Least to Greatest 1. For your set of random numbers, match each number with the fruit that it represents. Remember the following: • Let 1 to 20 (20 total numbers) represent the blueberries. • Let 21 to 60 (40 total numbers) represent the pears. • Let 61 to 85 (25 total numbers) represent the apples. • Let 86 to 100 (15 total numbers) represent the pineapples. 2. For your set of random numbers, how many of each type of fruit are represented? 3. For your set of random numbers, what percentage of the sample is each fruit? 
How does that compare to the known distribution of fruit in the population (the entire fruit stand)? Pause and Reflect 1. How do the characteristics of the random sample compare to the characteristics of the population? 2. If you increased the size of the sample, how do you think that would affect the relationship between the characteristics of the random sample and the characteristics of the population? Describe a simulation that could be used for each of the following situations. 1. A coin collection has 175 coins – 50 pennies, 75 nickels, 20 dimes, and 30 quarters. Create a random sample of 40 coins. 2. A volleyball league has 90 jerseys. Twenty-five of the jerseys are blue, 15 are purple, 30 are black, and 20 are red. Create a random sample of 25 jerseys. Comparing Samples from a Population In the last section, you used color tiles and a random number generator to simulate creating a random sample from a population. In this section, you will compare more than one random sample that is taken from the same population in order to look for patterns. Revisit the problem from the previous section. As before, use a random number generator to create a simulation. However, this time, generate three sets of random numbers. • Let 1 to 20 (20 total numbers) represent the blueberries. • Let 21 to 60 (40 total numbers) represent the pears. • Let 61 to 85 (25 total numbers) represent the apples. • Let 86 to 100 (15 total numbers) represent the pineapples. Click the image below to use a random number generator to simulate the random sample. Enter the appropriate numbers in the boxes, then click Randomize Now! to generate the set of 20 random numbers. • How many sets of numbers do you want to generate? 3 • How many numbers per set? 20 • Number range: From 1 to 100 • Do you wish each number in a set to remain unique? Yes • Do you wish to sort the numbers that are generated? Yes: Least to Greatest 1. For your set of random numbers, how many of each type of fruit are represented in each set? Remember the following: • Let 1 to 20 (20 total numbers) represent the blueberries. • Let 21 to 60 (40 total numbers) represent the pears. • Let 61 to 85 (25 total numbers) represent the apples. • Let 86 to 100 (15 total numbers) represent the pineapples. 2. For your set of random numbers, what percentage of each sample is each fruit? How does that compare to the known distribution of fruit in the population (the entire fruit stand)? Pause and Reflect You generated several random samples of the same size from a population with known characteristics. In general, how does the random sample compare to the population from which it was selected? 1. A coin collection has 250 coins – 75 pennies, 100 nickels, 25 dimes, and 50 quarters. What would you expect a random sample of 50 coins from this collection to look like? 2. In a recent election, Candidate A received 110 votes, Candidate B received 70 votes, and Candidate C received 120 votes. A pollster interviewed 50 randomly selected voters as they were leaving the polls. What would you expect the pollster’s results to be? In this lesson, you used simulations to generate random samples of the same size from a population with known characteristics. With several random samples from the same population, you observed that a random sample is typically representative of the population from which it was selected. Random samples are useful in predicting election results or describing how particular groups of people voted in an election. 
Random samples are useful in describing characteristics or beliefs of a large population. If the population is sufficiently large, like the population of Texas, it is too expensive and takes too much time to ask every Texan what they think about a particular issue. Instead, you can use a random sample, which is representative of all Texans, and make generalizations and predictions from the sample.
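For readers who would rather script the simulation than use the online random number generator, the same fruit-stand sampling can be done in a few lines of R; this is only a sketch of the idea, and set.seed is there just to make the example reproducible:

set.seed(1)
fruit <- rep(c("blueberries", "pear", "apple", "pineapple"),
             times = c(20, 40, 25, 15))   # the population: 100 pieces of fruit
draw <- sample(fruit, size = 20)          # one random sample of 20 fruits, without replacement
table(draw)                               # counts of each fruit in the sample
round(prop.table(table(draw)), 2)         # sample proportions, to compare with 20%, 40%, 25%, 15%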
{"url":"https://texasgateway.org/resource/generalizing-about-populations-random-samples","timestamp":"2024-11-07T18:52:37Z","content_type":"text/html","content_length":"78916","record_id":"<urn:uuid:57de7e8d-bea4-4890-b816-6c2a39cfc6ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00391.warc.gz"}
Cramer's Rule Linear Equations Solver

This online calculator takes a system of linear equations and applies Cramer's rule to solve it, showing all intermediate steps in the process.

The calculator for solving systems of linear equations using Cramer's rule takes as input a system of linear equations with the number of equations equal to the number of variables. It then applies Cramer's rule to solve the system, which involves calculating the determinant of the matrix of coefficients for the original system and of several modified systems (the formulas can be found below the calculator). The calculator shows the detailed steps for calculating each determinant and then applies Cramer's rule to find the solution for each variable.

Cramer's Rule

You can use Cramer's rule for systems of linear equations where the number of equations is equal to the number of unknown variables and the coefficient matrix's determinant is not zero (otherwise, the system of equations does not have a unique solution: either it has no solution at all or it has an infinite, parametric family of solutions, and you have to use other methods to find it). A system of linear equations in matrix form looks like $AX = B$, where $A$ is the matrix of coefficients, $X$ is the column of unknowns, and $B$ is the column of constants. If a system of linear equations satisfies the above conditions, it has a unique solution $(x_{1}, x_{2}, ... , x_{n})$, which can be expressed using the formula $x_{i} = \frac{\Delta_{A_i}}{\Delta_A}$, where $\Delta_{A_i}$ is the determinant of the matrix formed by replacing the i-th column of matrix $A$ with the column of constants ($B$) values, and $\Delta_A$ is the determinant of the original matrix $A$. In fact, this is a handy way to solve for just one of the variables without having to solve the whole system of equations.
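The rule is also easy to script. Below is a minimal R sketch (not the calculator's own code) that applies Cramer's rule to a small example system; the result can be compared with R's built-in solver:

cramer <- function(A, b) {
  dA <- det(A)
  if (abs(dA) < .Machine$double.eps) stop("Determinant is zero: no unique solution.")
  sapply(seq_len(ncol(A)), function(i) {
    Ai <- A
    Ai[, i] <- b          # replace the i-th column of A with the column of constants B
    det(Ai) / dA
  })
}

A <- matrix(c(2, 1,
              1, 3), nrow = 2, byrow = TRUE)
b <- c(3, 5)
cramer(A, b)              # 0.8 1.4, identical to solve(A, b)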
{"url":"https://embed.planetcalc.com/6007/","timestamp":"2024-11-12T20:33:52Z","content_type":"text/html","content_length":"38581","record_id":"<urn:uuid:8450ca52-7ace-4554-988c-1eb5bbc15cbd>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00312.warc.gz"}
help on probability - binomial

11-20-2017, 10:35 PM, Post #3
primer (Posts: 135, Member, Joined: Sep 2015)

RE: help on probability - binomial

(11-20-2017 10:07 PM) salvomic Wrote: I think you should use BINOMIAL_CDF(n, p, k, [k2]), which "returns the probability of k or fewer successes out of n trials..."

Yes, thank you... it was helpful. I was totally lost in the Inference app, it was a nightmare! It's not so easy to follow this probability stuff.
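For anyone checking the numbers away from the HP Prime, the same cumulative probability is available in R; the values below are only an illustration, not taken from the thread, and the reading of the optional k2 argument is our assumption:

n <- 10; p <- 0.3; k <- 4
pbinom(k, size = n, prob = p)          # P(X <= k), analogous to BINOMIAL_CDF(n, p, k)
pbinom(7, n, p) - pbinom(k - 1, n, p)  # P(k <= X <= 7), presumably the role of the optional k2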
{"url":"https://www.hpmuseum.org/forum/showthread.php?mode=threaded&tid=9540&pid=83743","timestamp":"2024-11-05T03:29:18Z","content_type":"application/xhtml+xml","content_length":"17208","record_id":"<urn:uuid:62fc8193-25f1-4e66-af04-4ef36d099cea>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00652.warc.gz"}
You Lost Your Watch?

So here you are, in a strange place and you've lost your watch. You stop a stranger to ask what time it is and he turns out to be even stranger than the place. His answer goes like this: "The big hand and the little hand are midway between 1 and 2, lying on top of each other." So, now you need to figure out what time it is.

6 comments:

1. Anonymous 9/06/2007 11:37 AM
It could be 12 noon, which would put the two hands on top of each other in between a 1 and a 2 (12). Did I get the previous two days correct? Nobody else answered.

2. I agreed with both of your previous answers, anon. This one was cute, too. The way I would have done it is to set up an equation to determine at exactly what time the hour and minute hands would pass each other. The fraction of the minutes divided by 60 has to be the same fraction that the hour hand is between 1 and 2. Below, x represents the number of minutes, so 1 plus the fraction of minutes in that hour equals the number of minutes divided by 5 (to account for the fact that the minute hand sees each hash mark as 5 minutes while the hour hand sees it as 1 hour).
1 + x / 60 = x / 5
x = 5.45454545
So the time is 1:05 and 27.2727 seconds exactly.

3. Wow, Abe. You would think that would be easier to do. I was looking for the hands lying between the 1 and 2 at 12 noon. See today's post for an explanation of why the answers haven't been verified. Being blocked at work has been a real pain in the behind for me, and I apologize for any stress I caused on your end.

4. Anonymous 9/10/2007 10:47 PM
I was thinking what Abe was thinking, except I didn't actually do the calculations.

5. I was thinking it would be more like 1:07, no? See, if you look at a clock, and if the big hand is midway between the 1 and the 2, and the small one is right there as well, that would indicate 1:07. Unless that's a trick question. Then that's not cool. Haha, just kidding.

6. Anonymous 8/13/2011 4:17 PM
Couldn't it also be 7:37? You can be between the numbers on both sides as they lie on the clock's circular face.
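For anyone who wants to verify Abe's arithmetic, the same equation can be checked in a couple of lines of R (just a quick sketch):

x <- 60 / 11                                                     # solving 1 + x/60 = x/5 gives x = 60/11 minutes past 1:00
c(minutes = floor(x), seconds = round((x - floor(x)) * 60, 2))   # about 5 min 27.27 s, i.e. 1:05:27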
{"url":"http://www.questionotd.com/2007/09/you-lost-your-watch.html","timestamp":"2024-11-11T04:34:11Z","content_type":"text/html","content_length":"124211","record_id":"<urn:uuid:416ed846-9992-4809-85cf-fd55b5075836>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00692.warc.gz"}
Software for Exploratory Data Analysis and Statistical Modelling

Programming with R – Checking Data Types
November 13th, 2010

There are a number of useful functions in R that test the variable type or convert between different variable types. These can be used to validate function input, to ensure that sensible answers are returned from a function, or to ensure that the function doesn't fail. Following on from a previous post on a simple function to calculate the volume of a cylinder, we can include a test with the is.numeric function. The usage of this function is best shown with a couple of examples:

> is.numeric(6)
[1] TRUE
> is.numeric("test")
[1] FALSE

The function returns either TRUE or FALSE depending on whether the value is numeric. If a vector is specified to this function then a single TRUE or FALSE is returned for the vector as a whole. We can add two statements to our volume calculation function to test that the height and radius specified by the user are indeed numeric values:

if (!is.numeric(height)) stop("Height should be numeric.")
if (!is.numeric(radius)) stop("Radius should be numeric.")

We add these tests after checking whether the height and radius have been specified and before the test for whether they are positive values. The function now becomes:

cylinder.volume.5 = function(height, radius)
{
  if (missing(height)) stop("Need to specify height of cylinder for calculations.")
  if (missing(radius)) stop("Need to specify radius of cylinder for calculations.")
  if (!is.numeric(height)) stop("Height should be numeric.")
  if (!is.numeric(radius)) stop("Radius should be numeric.")
  if (height < 0) stop("Negative height specified.")
  if (radius < 0) stop("Negative radius specified.")
  volume = pi * radius * radius * height
  list(Height = height, Radius = radius, Volume = volume)
}

A couple of examples show that the function works as expected:

> cylinder.volume.5(20, 4)
$Height
[1] 20

$Radius
[1] 4

$Volume
[1] 1005.31

> cylinder.volume.5(20, "a")
Error in cylinder.volume.5(20, "a") : Radius should be numeric.

These various validation checks can be combined in different ways to ensure that a user does not try to use a function in a way that was not intended, and should lead to greater confidence in the output from the function. This is one approach to checking function arguments and there are likely other slicker ways of doing things. Other useful resources are provided on the Supplementary Material page.

We talked about how to check for an integer here: http://stackoverflow.com/questions/3476782/how-to-check-if-the-number-is-integer

Thanks for providing a link to that discussion. Having not used the as.integer function I wasn't aware of the issues, so it was useful to have that highlighted.
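As a small addendum (not part of the original post), a few related base R type checks and conversions behave as follows; the last call reuses the cylinder.volume.5 function defined above:

is.numeric(c(1, 2, 3))                   # TRUE for a numeric vector
is.character("20")                       # TRUE: a quoted number is character, not numeric
as.numeric("20")                         # 20: explicit conversion succeeds here
as.numeric("a")                          # NA, with a warning: the conversion fails
cylinder.volume.5(20, as.numeric("4"))   # converting first lets the call pass the is.numeric test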
{"url":"https://www.wekaleamstudios.co.uk/posts/programming-with-r-checking-data-types/","timestamp":"2024-11-08T06:13:44Z","content_type":"application/xhtml+xml","content_length":"43323","record_id":"<urn:uuid:a99109c4-1fc4-4055-aae8-ada6e77d1029>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00476.warc.gz"}
IMT Institutional Repository: No conditions. Results ordered -Date Deposited. Le figure professionaliIl comportamento degli attori aziendaliBeni culturaliBeni culturaliRivelazione delle Preferenze, Determinanti Sociali dell'Utilità e Rilevanza dei Dati SoggettiviL'attività normativa del governo nel periodo maggio-novembre 2006L'attività normativa del governo nel periodo dicembre 2006-maggio 2007L'organizzazioneArchivi e biblioteche: memorie del passato dall'incerto futuroL'inchiesta pubblica. Analisi comparataNeural networks: A reviewExperimental set-up and optimization of a gamma-ray spectrometer for measurement of cosmogenic radionuclides in meteorites A large cavity gamma ray spectrometer for measurement of cosmogenic radionuclides in astromaterials by whole rock countingLa spesa farmaceutica territoriale convenzionata: il modello FarmaRegio per l'analisi della variabilità regionaleThe Shadow of PositivismDeviazioni perfettamente ragionevoli dalle vie battuteAccessibility in commuting systems network based performance indicatorsThe structure of interurban traffic: a weighted network analysisComplex networks theory for policy making and planning: a research agendaGrouping complex systems: a weighted network comparative analysisModeling commuters dynamics as complex network: the influence of the spaceInspecting the influence of space on a real complex networkStretching Homopolymers[rezension von] Marica Tolomelli: Terrorismo e società. Il pubblico dibattito in Italia e in Germania negli anni Settanta[recensione a] Nicholas Stargardt, La guerra dei bambini. Infanzia e vita quotidiana durante il nazismoThe ‘Primitive Faces’ of Giorgio de Chirico’s mannequins, 1914-15Roughness of fracture surfacesReply to the Comment by H. Tephany and J. Nahmias on “Percolation in real wildfires” by G. 
Caldarelli et al.Trading strategies in the Italian interbank marketEnsemble approach to the analysis of weighted networksInterplay between topology and dynamics in the World Trade WebUncovering the topology of configuration space networksSelf-organized network evolution coupled to extremal dynamicsThe Italian interbank network: statistical properties and a simple modelSpectral methods cluster words of the same class in a syntactic dependency networkInvasion percolation and critical transient in the Barabási Model of human dynamicsScale-free networks : complex webs in nature and technology(edited by) Large scale structure and dynamics of complex networks: from information technology to finance and natural scienceRadical innovation and network evolutionSur l'interversion de l'ordre entre deux opérations sur les tribusA strong form of stable convergenceLa fondazione de «La Critica d’Arte» nelle carte di Carlo Ludovico Ragghianti : Parte II: 1936-37Dossier sui sistemi museali in Toscana Viaggio in patria : Arezzo e il suo territorio nel Settecento Il gusto della critica : Francesco Tommaso Bernardi e Giacomo SardiniLa provincia di PisaDa Raffaello a Maratti : artisti e committenti in Valdinievole(a cura di) Praeterita facta : scritti in onore di Amleto SpiccianiComunicazione verbale e non verbale nel simposio grecoSimposio o convivioVisioni d'arpaIntroduzione (a cura di ) Il patrimonio culturale in FranciaRisk aversion, intertemporal substitution, and the aggregate investment-uncertainty relationshipAn economic theory of constitutional choiceKriging metamodels in design optimization: an automotive engineering applicationModeling and simulation study for the design of controlled IPMC actuatorsModels for the design and optimization of CNG injection systemsOptimization issues in modeling IPMC devicesPerformance evaluation of the evolution control in design optimization assisted by Kriging surrogatesEnhanced evolutionary algorithms for multidisciplinary design optimization: a control engineering perspectiveOn the role of zealotry in the voter modelIl settore farmaceutico tra barriere alla concorrenza e regolazione sul lato del consumoSpesa sanitaria, Demografia, IstituzioniThe Growth of Business Firms: Facts and TheoryBetweenness centrality of fractal and nonfractal scale-free model networks and tests on real networksInnovation and industrial leadership: lessons from pharmaceuticalsHealth Services in an Open Transatlantic Market: A European PerspectiveA generalized preferential attachment model for business firms growth rates: II. Mathematical treatmentA generalized preferential attachment model for business firms growth rates: I. Empirical evidenceTranslation: Unione: le tentazioni sbagliate, by Giuliano AmatoTranslation: Microsoft e il flusso di informazioni. Note (comparatistiche) dal fronte antitrust/proprietà intellettuale, by Rudolph J.R. Peritz,Le telecomunicazioni nell’era della convergenza tra nuove regole e apertura del mercatoIntroduzione. Liberalizzazioni e concorrenza in Italia(a cura di) Politiche di liberalizzazione e concorrenza in Italia : proposte di riforma e linee di intervento settorialiIl caso LeeginIl matrimonio Telecom-Telefonica, tra reti alternative e scenari di separazioneIl caso WanadooRaymond Aron face au processus de la décolonisation française sous la IVème RépubliqueIl ritorno al potere di de Gaulle e i trattati di RomaLe triangle du pouvoir: Jacques Chaban-Delmas, Georges Pompidou et le mouvement gaulliste, 1969-1972Between Natural Law and Evolutionism. 
{"url":"http://eprints.imtlucca.it/cgi/exportview/divisions/EIC/2007/Atom/EIC_2007.xml","timestamp":"2024-11-08T22:02:14Z","content_type":"application/atom+xml","content_length":"181061","record_id":"<urn:uuid:d1edca9a-0708-4475-bd6e-22e5fc55f0fc>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00341.warc.gz"}
A note on the use of outlier criteria in Ontario laboratory quality control schemes

Objectives: This paper examines the pitfalls that arise when an outlier is assessed using a criterion based on a fixed multiple of the standard deviation rather than an established statistical test. Although the former approach is statistically invalid, it is the favored method for identifying outliers in Ontario laboratory quality control protocols.

Design and methods: Computer simulations are used to calculate the probability of a false positive result (classifying a valid observation as an outlier) when outlier criteria based on fixed multiples of the standard deviation are applied to samples containing no outliers.

Results: The estimated probability of a false positive result is tabulated over various sample sizes. Outlier criteria based on fixed multiples of the standard deviation are shown to be highly inefficient.

Conclusions: This work presents arguments for discontinuing the widespread practice of using outlier criteria based on fixed multiples of the standard deviation to identify outliers in univariate samples.

Keywords:
• Boxplot
• Dixon test
• Grubbs test
• Robust methods
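The simulation the abstract describes is straightforward to reproduce. Below is a minimal sketch, assuming outlier-free Gaussian samples and a "more than k sample standard deviations from the mean" rule with k = 2; the sample sizes, the value of k, and the function name are illustrative choices, not taken from the paper.

```python
import numpy as np

def false_positive_rate(n, k, trials=50_000, seed=0):
    """Estimate how often a sample of size n, drawn with NO outliers,
    has at least one point flagged by the 'mean +/- k * sample SD' rule."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((trials, n))             # clean Gaussian samples
    m = x.mean(axis=1, keepdims=True)                # sample means
    s = x.std(axis=1, ddof=1, keepdims=True)         # sample standard deviations
    flagged = np.any(np.abs(x - m) > k * s, axis=1)  # rule fires somewhere in the sample
    return flagged.mean()

for n in (10, 20, 50, 100):
    print(f"n = {n:3d}   estimated P(false positive) = {false_positive_rate(n, k=2):.3f}")
```

Because the cutoff does not adapt to the sample size, the chance of flagging at least one valid observation grows with n, whereas established procedures such as the Grubbs or Dixon tests adjust their critical values to the sample size.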
{"url":"https://pure.ul.ie/en/publications/a-note-on-the-use-of-outlier-criteria-in-ontario-laboratory-quali","timestamp":"2024-11-08T15:04:17Z","content_type":"text/html","content_length":"52122","record_id":"<urn:uuid:80bb3395-6174-4550-9158-486df0aaee17>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00157.warc.gz"}
Cryptography Academy: the affine cipher

Alice wants to send Bob an encrypted message consisting only of whitespace and the letters A-Z. Alice first chooses the values \( a \) and \( b \) at random such that \( \gcd(a, 26)=1 \), and then she computes the inverse \( a^{-1} \) of \( a \) with the extended Euclidean algorithm. She then sends the values \( a \), \( a^{-1} \), and \( b \) through a secure channel to Bob.
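A minimal sketch of this key setup and the resulting affine-cipher encryption, assuming the usual scheme over a 26-letter alphabet with \( E(x) = (ax + b) \bmod 26 \) and \( D(y) = a^{-1}(y - b) \bmod 26 \); the function names and the decision to pass whitespace through unchanged are illustrative choices, not taken from the page.

```python
import random
import string

def egcd(a, b):
    """Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def make_key(m=26):
    """Pick a and b with gcd(a, m) = 1 and compute a_inv, as Alice does."""
    while True:
        a = random.randrange(1, m)
        g, x, _ = egcd(a, m)
        if g == 1:                                # a is invertible mod m
            return a, x % m, random.randrange(m)  # (a, a_inv, b)

def encrypt(plain, a, b, m=26):
    """Apply E(x) = (a*x + b) mod m letter by letter; whitespace passes through."""
    out = []
    for ch in plain.upper():
        if ch in string.ascii_uppercase:
            out.append(chr((a * (ord(ch) - ord('A')) + b) % m + ord('A')))
        else:
            out.append(ch)
    return ''.join(out)

def decrypt(cipher, a_inv, b, m=26):
    """Apply D(y) = a_inv * (y - b) mod m letter by letter."""
    out = []
    for ch in cipher:
        if ch in string.ascii_uppercase:
            out.append(chr((a_inv * (ord(ch) - ord('A') - b)) % m + ord('A')))
        else:
            out.append(ch)
    return ''.join(out)

a, a_inv, b = make_key()
message = "MEET ME AT NOON"
assert decrypt(encrypt(message, a, b), a_inv, b) == message
```

Note that sending \( a^{-1} \) is a convenience: Bob could equally well recompute it from \( a \) with the same extended Euclidean algorithm.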
{"url":"https://www.cryptographyacademy.com/substitution-ciphers/protocol/affine-cipher.php","timestamp":"2024-11-07T04:39:30Z","content_type":"text/html","content_length":"38805","record_id":"<urn:uuid:a196dcd6-f8a4-4a60-8de1-5bb664d74fdb>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00628.warc.gz"}
Overview and Abstracts

DIMACS Series in Discrete Mathematics and Theoretical Computer Science
VOLUME Thirty Six
TITLE: "Discrete Mathematics in the Schools"
EDITORS: Joseph G. Rosenstein, Deborah S. Franzblau and Fred S. Roberts.
Published by the American Mathematical Society and the National Council of Teachers of Mathematics

As noted in the Preface, this volume makes the case that discrete mathematics should be included in K--12 classrooms and curricula, and provides practical assistance and guidance on how this can be accomplished. The organization of this volume parallels these two goals. After the Introduction the articles are arranged in the following eight clusters:

• Section 1. The Value of Discrete Mathematics: Views from the Classroom
• Section 2. The Value of Discrete Mathematics: Achieving Broader Goals
• Section 3. What is Discrete Mathematics: Two Perspectives
• Section 4. Integrating Discrete Mathematics into Existing Mathematics Curricula, Grades K--8
• Section 5. Integrating Discrete Mathematics into Existing Mathematics Curricula, Grades 9--12
• Section 6. High School Courses on Discrete Mathematics
• Section 7. Discrete Mathematics and Computer Science
• Section 8. Resources for Teachers

Everyone's first question is of course, ``What is discrete mathematics?'' Everyone's second question is, ``Why should I use discrete mathematics?'' Explicit discussion of the first question is delayed until Section 3, and the focus of the Introduction and Sections 1--2 is the second question. These sections make the case for discrete mathematics --- from the perspective of teachers in the classroom, and from the perspective of researchers involved in improving mathematics education. These articles encompass a variety of agendas --- implementing the four NCTM process standards (problem-solving, reasoning, communicating mathematical ideas, and making connections), improving the public's perception of mathematics, conveying the usefulness of mathematics, and providing a new start for students, teachers, and curricula.

Everyone's third question is, ``How can I use discrete mathematics in my classroom?'' This question is addressed in Sections 4--7. One set of responses involves incorporating discrete mathematics into existing curricula; these responses appear in Sections 4 and 5, arranged by grade level. Another set of responses involves introducing new courses, typically at the high school level, and these are addressed in Section 6. Section 7 addresses the role of computer science in the high school curriculum, as well as the role of discrete mathematics in the teaching of computer science. Section 8 describes resources available to teachers who decide to enrich their classrooms with discrete mathematics.

Following are abstracts of the articles in this volume, prepared by the editors. The abstracts are arranged by section, and within each section are presented alphabetically, as are the articles in the volume.

Joseph G. Rosenstein's article Discrete Mathematics in the Schools: An Opportunity to Revitalize School Mathematics serves as an introduction to this volume and describes why discrete mathematics can be a useful vehicle for improving mathematics education and revitalizing school mathematics.
He provides rationales for introducing discrete mathematics in the schools, noting that discrete mathematics is applicable, accessible, attractive, and appropriate, and argues that discrete mathematics offers a ``new start'' in mathematics for students. This article is based on a concept document distributed to participants prior to the October 1992 conference, and on the opening presentation of the conference. Section 1. The Value of Discrete Mathematics: Views from the Classroom Bro. Patrick Carney's article The Impact of Discrete Mathematics in My Classroom describes anecdotally how the author aroused in his students an interest in mathematics, and developed in his students a more ``positive attitude toward mathematics and their ability to do it''. Nancy Casey's article Three for the Money: An Hour in the Classroom describes the excitement generated in a class of high school students, participating in a special summer program, when they are presented with an unsolved mathematical problem, and the mathematical journeys that they take to learn what the problem is and to try to solve it. It also provides a vivid description of how the teacher's role in the classroom changes when the class embarks on an uncharted adventure of mathematical discovery. Janice C. Kowalczyk's article Fibonacci Reflections: It's Elementary! is an account of her experiences giving a workshop on the Fibonacci sequence (1, 1, 2, 3, 5, 8, ...) to a fourth-grade class. She gives a detailed description of the workshop activities, including student investigations of the classical rabbit population problem that leads to the sequence, and spiral-counting in pinecones, sunflowers, shells, and other objects whose growth patterns exhibit the sequence. The article illustrates how using a topic with a strong visual appeal, along with a focus on student exploration, can bring out the strengths in many students who have had difficulties in the traditional elementary mathematics curriculum. Susan H. Picker's article Using Discrete Mathematics to Give Remedial Students a Second Chance is an account of her experiences introducing discrete mathematics to a class of remedial tenth-grade students in Manhattan, and their success in solving complex graph-coloring problems. More than that, it is an account of the impact that this course had on the students' perceptions of mathematics and their own abilities, as well as on their subsequent school careers. The author learned from this experience the extent to which students' dislike of arithmetic serves as an obstacle to their progress and success in mathematics. Reuben J. Settergren's article ``What We've Got Here is a Failure to Cooperate'' describes a cooperative game, based on the classical Prisoner's Dilemma, that the author played with twelve-year-old students in a summer program. The game gave students insight into why individuals are sometimes motivated to behave in a way that harms the larger community, providing an opportunity to discuss moral and social issues in a mathematics class. Section 2. The Value of Discrete Mathematics: Achieving Broader Goals Nancy Casey and Michael R. Fellows' article Implementing the Standards: Let's Focus on the First Four argues that in order to properly address the NCTM process standards --- reasoning, problem-solving, communications, and connections --- in the elementary school classroom, new content must be introduced into the K--4 mathematics curriculum. 
The authors show by example how elementary versions of problem situations that arise in computer science and discrete mathematics make it possible to realize the goals of the process standards. They describe their approach to teaching mathematics as parallel to the ``whole language'' approach to teaching reading. Margaret B. Cozzens' article Discrete Mathematics: A Vehicle for Problem Solving and Excitement provides examples of discrete mathematics activities from several curriculum development projects funded by the NSF division that the author heads. The author argues that discrete mathematics can motivate students to think mathematically, to become better problem solvers, and to increase their interest in mathematics. Susanna S. Epp's article Logic and Discrete Mathematics in the Schools argues that logical reasoning should be a component of the discrete mathematics that is discussed at all grade levels. Students should not have to wait until they are college students to explore the reasoning involved in ``and'', ``or'', and ``if-then'' statements, or to understand how quantifiers are used. This need not be done formally (e.g., through truth tables) but through concrete activities which ultimately will support the students' transition to abstract mathematical thinking. The author illustrates the value of explicit discussion of logic with experiences from a discrete mathematics course she has taught at DePaul University. Rochelle Leibowitz' article Writing Discrete(ly) argues that discrete mathematics serves as an excellent vehicle for teaching students to communicate mathematically. Through describing carefully simple proofs and algorithms (e.g., instructions for building a Lego model), students acquire technical writing skills that will be useful in a variety of career and life situations. Joseph Malkevitch's article Discrete Mathematics and Public Perceptions of Mathematics contrasts the kinds of problems typically discussed in high school mathematics classes, usually involving extensive manipulation of symbols, with the kinds of problems that manifest the ways in which mathematics influences daily life. Malkevitch argues that the negative perceptions that the general public has about mathematics arise in part from an unbalanced mathematical diet --- too much of the former, too little of the latter --- and notes that problems from discrete mathematics can play an important role in changing these perceptions. Henry O. Pollak's article Mathematical Modeling and Discrete Mathematics discusses mathematical modeling in general, noting that ``applied mathematics'', ``problem solving'', and ``word problems'' all start with an idealized version of a real world problem, and so normally omit the initial and final parts of the modeling process. The author notes that in discrete mathematics situations, however, it is often possible to introduce the entire mathematical modeling process into the classroom; he provides five examples of modeling situations which lead to discrete mathematics and which can be made accessible to high school students. Fred S. 
Roberts' article The Role of Applications in Teaching Discrete Mathematics notes that ``one of the major reasons for the great increase in interest in discrete mathematics is its importance in solving practical problems.'' The author introduces several ``rules of thumb'' about the role of applications in teaching discrete mathematics, and illustrates those by providing many applications of the Traveling Salesman Problem, graph coloring, and Euler paths. Section 3. What is Discrete Mathematics: Two Perspectives Stephen B. Maurer's article ``What is Discrete Mathematics?'' The Many Answers provides and discusses a variety of proposed definitions and descriptions of discrete mathematics, along with several proposed goals and benefits for including discrete mathematics in the schools. The article concludes with a set of goals and topics for discrete mathematics in the schools on which the author thinks there might be general agreement. Joseph G. Rosenstein's article A Comprehensive View of Discrete Mathematics: Chapter 14 of the New Jersey Mathematics Curriculum Framework contains a comprehensive discussion of topics of discrete mathematics appropriate for each of the K--2, 3--4, 5--6, 7--8, and 9--12 grade levels. The author spearheaded the development of the Framework in his role as Director of the New Jersey Mathematics Coalition. Grade-level overviews are accompanied by several hundred activities appropriate for the various grade levels. The material reflects the experiences of teachers in the Leadership Program in Discrete Mathematics, discussed in a separate article in Section 8. Section 4. Integrating Discrete Mathematics into Existing Mathematics Curricula, Grades K--8 Valerie A. DeBellis' article Discrete Mathematics in K--2 Classrooms describes the author's visits to several classrooms and what she learned about the reasoning and problem-solving skills exhibited by young children who are introduced to situations involving discrete mathematics. It also describes how topics in discrete mathematics can be reformulated for children at early elementary levels. Robert E. Jamison's article Rhythm and Pattern: Discrete Mathematics with an Artistic Connection for Elementary School Teachers describes the material that the author has used in programs for both inservice and preservice elementary school teachers. It focuses on how elementary school teachers can use geometric activities involving drawing polygons and planar representations of polyhedra, moving in geometric patterns, and using modular arithmetic in movement and music --- to provide their students with foundational experiences for future study of mathematics. Evan Maletsky's article Discrete Mathematics Activities in Middle School provides a wealth of activities that are appropriate at the middle school level; these involve counting (e.g., finding the triangular numbers when you count rectangles on a folded piece of paper), graphs, and iteration (e.g., generating Sierpinski triangles). The author discusses how these can be incorporated into the activities that are already taking place in the classroom. Section 5. Integrating Discrete Mathematics into Existing Mathematics Curricula, Grades 9--12 Robert L. Devaney's article Putting Chaos into Calculus Courses describes how fundamental ideas of dynamical systems, including iteration, attracting and repelling points, and chaos, can be introduced in a beginning calculus class, through an in-depth investigation of the behavior of Newton's Method, using a computer or graphing calculator. 
The author's approach integrates discrete with continuous mathematics and provides a connection from calculus to the fascinating world of fractals and chaos. John A. Dossey's article Making a Difference with Difference Equations shows how difference equations can be used to model change in a number of real-world settings. The author recommends the use of difference equations to provide a unified development of standard sequences studied in mathematics, such as arithmetic, geometric, and Fibonacci sequences. Eric W. Hart's article Discrete Mathematical Modeling in the Secondary Curriculum: Rationale and Examples from the Core-Plus Mathematics Project (CPMP) discusses the questions of what discrete mathematics belongs in the secondary curriculum, and how it should be incorporated, from the perspective of the curriculum developer. The article presents examples adapted from CPMP materials which illustrate the CPMP approach --- that discrete mathematics should be woven into an overall integrated mathematics curriculum, and that the emphasis should be on discrete mathematical modeling. Bret Hoyer's article A Discrete Mathematics Experience with General Mathematics Students describes how the author introduced topics in discrete mathematics first into intermediate algebra and geometry classes, and then, as a result of the students' positive experiences, into other classes as well --- including general mathematics and consumer mathematics courses. The article focuses on the ``Street Networks'' unit on Euler paths and circuits that was woven into these courses. Philip G. Lewis' article Algorithms, Algebra, and the Computer Lab describes how the author's high school students used the LOGO computer environment to explore and develop concepts in linear algebra. These explorations, which took place in a computer lab, enabled students to view linear algebra algorithmically and to learn how to construct and analyze algorithms. Joan Reinthaler's article Discrete Mathematics is Already in the Classroom --- But It's Hiding argues that many problems in high school courses are discussed as problems with continuous domains when a discrete perspective would be more realistic, and would lead to different investigations and solutions. Several examples are given involving standard textbook problems in algebra. James T. Sandefur's article Integrating Discrete Mathematics into the Curriculum: An Example describes how he uses the handshake problem to review with his precalculus class the notions of function, domain and range, and graphing quadratic functions. The author argues that ``this approach integrates discrete mathematics into the existing curriculum, results in deeper student understanding, and can be accomplished in about the same amount of time as is presently devoted to these topics.'' Section 6. High School Courses on Discrete Mathematics Harold F. Bailey's article The Status of Discrete Mathematics in the High Schools reports on a survey that the author did to ascertain how many high schools offer courses in discrete mathematics, what those courses contain, and the goals of the schools in offering such courses. L. Charles Biehl's article Discrete Mathematics: A Fresh Start for Secondary Students describes a project-based discrete mathematics course developed by the author for juniors and seniors of average ability. 
The students explored a variety of mathematical topics in real-world settings; moreover, since many topics in discrete mathematics have few prerequisites, these students were able to become successful problem solvers and to develop more positive attitudes to mathematics. The article includes an outline of the course. Nancy Crisler, Patience Fisher, and Gary Froelich's article A Discrete Mathematics Textbook for High Schools describes the textbook they have co-authored, providing a discussion of its origins and development. The organization and content of the book is based on the NCTM report, Discrete Mathematics and the Secondary Mathematics Curriculum; it addresses five broad areas (social decision making, graph theory, counting techniques, matrix models, and the mathematics of iteration) and interweaves six unifying themes (modeling, use of technology, algorithmic thinking, recursive thinking, decision making, and mathematical induction). The article includes summaries of and examples drawn from each chapter of the book. Section 7: Discrete Mathematics and Computer Science Peter B. Henderson's article Computer Science, Problem Solving, and Discrete Mathematics addresses the role of discrete mathematics in a first course in computer science, based on the author's experience in developing a ``Fundamentals of Computer Science'' course at SUNY Stony Brook. Although the course described was developed originally for students planning a career in computer science, it has drawn students with a wide variety of goals. The author notes that ``With its emphasis on logical reasoning and problem analysis and solution, discrete mathematics provides a catalyst for general thinking and problem-solving skills ...,'' making such a course valuable for teaching computer science to high school students as well. Viera K. Proulx' article The Role of Computer Science and Discrete Mathematics in the High School Curriculum identifies six key themes in computer science that the author argues should be taught to all high school students, and sketches activities for students to explore these themes. The ideas in the article grew out of the author's participation in the Association for Computing Machinery (ACM) Task Force on the High School Curriculum, which produced a ``Model High School Computer Science Curriculum'' in 1993. Section 8. Resources for Teachers Nathaniel Dean and Yanxi Liu's article Discrete Mathematics Software for K--12 Education describes two workshops involving teachers and software developers in which teachers solved problems using software developed for research, and shared their reflections on the features that would make such software useful in their classrooms. In the first workshop, teachers used NETPAD, written by Dean when he was at Bellcore; in the second workshop, teachers used Combinatorica, written by Steven Skiena of SUNY Stony Brook. The article also provides an annotated list of other software packages that are potentially useful to teachers. Deborah S. Franzblau and Janice C. Kowalczyk's article Recommended Resources for Teaching Discrete Mathematics identifies outstanding resources, including books, modules, periodicals, literature, Internet sites, software, and videos for the K--12 mathematics teacher or supervisor building a core resource library for teaching topics in discrete mathematics. There are extensive reviews of four popular textbooks; other resources are accompanied by briefer descriptions. 
The list of resources, which is indexed by topic and grade level, and which includes publisher information, was developed from recommendations by participants and instructors in the DIMACS Leadership Program in Discrete Mathematics. Joseph G. Rosenstein and Valerie A. DeBellis' article The Leadership Program in Discrete Mathematics describes the DIMACS-sponsored programs for K--12 teachers that have taken place for the past nine years at Rutgers University, the development and implementation of the program's goals, and how the program is serving as a continuous resource for the dissemination of discrete mathematics to K--12 teachers. Mario Vassallo and Anthony Ralston's article Computer Software for the Teaching of Discrete Mathematics in the Schools provides a number of criteria for judging the suitability of computer software for educational use, and then describes and evaluates three software systems (Mathematica/Combinatorica, GraphPack, and SetPlayer) against these criteria.
{"url":"http://dimacs.rutgers.edu/archive/Volumes/schools/overview.html","timestamp":"2024-11-02T10:48:55Z","content_type":"text/html","content_length":"22984","record_id":"<urn:uuid:f23171ce-64ab-4d1a-88ce-6092a76e9ba8>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00226.warc.gz"}
3.1. Probability Next: 3.2. Statistics Up: Stochastics Previous: Stochastics Programme leaders: F.M.Dekking, W.Th.F.den Hollander Research in this programme is carried out in a very broad range of topics, covering most of the aspects of the mathematical discipline of probability theory and its applications to other mathematical areas, other sciences, and industry and technology. The basic themes of the research comprise among others extreme value theory and applications, optimal stopping, infinitely divisible distributions, stochastic recurrence, stochastic geometry, percolation and particle systems, modelling and simulation of geological structures, branching processes, queueing theory and applications, diffusions, ergodic theory, stochastic dynamical systems, fractal geometry and coding, noncommutative probability theory, stochastic inequalities, and simulation. Probability theory and its applications develops mathematically precise models for quantitatively describing uncertain situations, and applies these models in order to arrive at optimal or nearly optimal decision procedures. Research carried out in this domain in The Netherlands, and in particular in the Stieltjes Institute, is broad-ranging. Inside of the Institute, there are common interests and collaborations with the programmes Statistics, Stochastic Operations Research, Topology and Dynamical Systems and Number Theory.
{"url":"https://stieltjes.org/archief/rep20002001/node38.html","timestamp":"2024-11-06T21:54:53Z","content_type":"text/html","content_length":"3081","record_id":"<urn:uuid:0bdbcd57-b15b-4109-837c-f2eae02a9627>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00108.warc.gz"}
Add and Subtract Rational Expressions
Learning Outcomes
• Add and subtract rational expressions
In beginning math, students usually learn how to add and subtract whole numbers before they are taught multiplication and division. However, with fractions and rational expressions, multiplication and division are sometimes taught first because these operations are easier to perform than addition and subtraction. Addition and subtraction of rational expressions are not as easy to perform as multiplication because, as with numeric fractions, the process involves finding common denominators.
Adding Rational Expressions
To find the least common denominator (LCD) of two rational expressions, we factor the expressions and multiply all of the distinct factors. For instance, consider the following rational expressions: [latex]\dfrac{6}{\left(x+3\right)\left(x+4\right)},\text{ and }\frac{9x}{\left(x+4\right)\left(x+5\right)}[/latex] The LCD would be [latex]\left(x+3\right)\left(x+4\right)\left(x+5\right)[/latex]. To find the LCD, we count the greatest number of times a factor appears in each denominator and include it in the LCD that many times. For example, in [latex]\dfrac{6}{\left(x+3\right)\left(x+4\right)}[/latex], [latex]\left(x+3\right)[/latex] is represented once and [latex]\left(x+4\right)[/latex] is represented once, so they both appear exactly once in the LCD. In [latex]\dfrac{9x}{\left(x+4\right)\left(x+5\right)}[/latex], [latex]\left(x+4\right)[/latex] appears once and [latex]\left(x+5\right)[/latex] appears once. We have already accounted for [latex]\left(x+4\right)[/latex], so the LCD just needs one factor of [latex]\left(x+5\right)[/latex] to be complete.
Once we find the LCD, we need to multiply each expression by the form of [latex]1[/latex] that will change the denominator to the LCD. What do we mean by "the form of [latex]1[/latex]"? [latex]\frac{x+5}{x+5}=1[/latex], so multiplying an expression by it will not change its value. For example, we would need to multiply the expression [latex]\dfrac{6}{\left(x+3\right)\left(x+4\right)}[/latex] by [latex]\frac{x+5}{x+5}[/latex] and the expression [latex]\frac{9x}{\left(x+4\right)\left(x+5\right)}[/latex] by [latex]\frac{x+3}{x+3}[/latex]. Hopefully this process will become clear after you practice it yourself. As you look through the examples on this page, try to identify the LCD before you look at the answers. Also, try figuring out which "form of 1" you will need to multiply each expression by so that it has the LCD.
Add the rational expressions [latex]\frac{5}{x}+\frac{6}{y}[/latex] and define the domain. State the sum in simplest form.
Here is one more example of adding rational expressions, but in this case, the expressions have denominators with multi-term polynomials. First, we will factor and then find the LCD. Note that [latex]x^2-4[/latex] is a difference of squares and can be factored using special products.
Simplify [latex]\frac{2{{x}^{2}}}{{{x}^{2}}-4}+\frac{x}{x-2}[/latex] and give the domain. State the result in simplest form.
In the video that follows, we present an example of adding two rational expressions whose denominators are binomials with no common factors.
Subtracting Rational Expressions
To subtract rational expressions, follow the same process you use to add rational expressions. You will need to be careful with signs though.
Subtract [latex]\frac{2}{t+1}-\frac{t-2}{{{t}^{2}}-t-2}[/latex] and define the domain. State the difference in simplest form.
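One way to work this last subtraction, offered here as an added sketch rather than the page's own solution: the second denominator factors as [latex]{{t}^{2}}-t-2=\left(t-2\right)\left(t+1\right)[/latex], so the LCD is [latex]\left(t-2\right)\left(t+1\right)[/latex] and
[latex]\frac{2}{t+1}-\frac{t-2}{\left(t-2\right)\left(t+1\right)}=\frac{2\left(t-2\right)-\left(t-2\right)}{\left(t-2\right)\left(t+1\right)}=\frac{t-2}{\left(t-2\right)\left(t+1\right)}=\frac{1}{t+1}[/latex]
with domain [latex]t\ne -1,\ t\ne 2[/latex].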
In the next example, we will give less instruction. See if you can find the LCD yourself before you look at the answer.
Subtract the rational expressions: [latex]\frac{6}{{x}^{2}+4x+4}-\frac{2}{{x}^{2}-4}[/latex], and define the domain. State the difference in simplest form.
In the previous example, the LCD was [latex]\left(x+2\right)^2\left(x-2\right)[/latex]. The reason we need to include [latex]\left(x+2\right)[/latex] two times is because it appears two times in the expression [latex]\frac{6}{{x}^{2}+4x+4}[/latex]. The video that follows contains an example of subtracting rational expressions.
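As an added sketch (not the page's hidden solution), that last subtraction can be carried out as follows: factor the denominators as [latex]{x}^{2}+4x+4=\left(x+2\right)^2[/latex] and [latex]{x}^{2}-4=\left(x+2\right)\left(x-2\right)[/latex], so the LCD is [latex]\left(x+2\right)^2\left(x-2\right)[/latex]. Then
[latex]\frac{6}{\left(x+2\right)^2}-\frac{2}{\left(x+2\right)\left(x-2\right)}=\frac{6\left(x-2\right)-2\left(x+2\right)}{\left(x+2\right)^2\left(x-2\right)}=\frac{4x-16}{\left(x+2\right)^2\left(x-2\right)}=\frac{4\left(x-4\right)}{\left(x+2\right)^2\left(x-2\right)}[/latex]
with domain [latex]x\ne -2,\ x\ne 2[/latex].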
{"url":"https://courses.lumenlearning.com/intermediatealgebra/chapter/read-add-and-subtract-rational-expressions-part-i/","timestamp":"2024-11-11T13:20:15Z","content_type":"text/html","content_length":"60153","record_id":"<urn:uuid:e6c191cf-cf45-4257-9301-fc57fa2b2ae6>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00636.warc.gz"}
Homepage of John Irving
Last updated September 3, 2024
Who Am I? I am an Associate Professor at Saint Mary's University in Halifax, Nova Scotia, Canada. I am currently Chair of the Department of Mathematics and Computing Science. My primary research interest is algebraic combinatorics, particularly enumerative problems underlying questions in geometry and representation theory.
Contact Information
Office Hours Tuesday 1:00–3:00 Wednesday 10:00–2:00 Thursday 12:30–2:30 Send me an e-mail if you wish to set up an appointment to see me outside of office hours, or drop by the office and see if I'm available. I am currently (Fall 2024) teaching Math 1211 (Introductory Calculus II) and Math 4420 (Abstract Algebra I).
My main research interest is enumerative combinatorics (i.e. counting), particularly as it relates to problems in algebra and geometry. To date I have mostly focused on problems involving factorizations in the symmetric group; that is, counting the number of ways a given permutation can be decomposed as an ordered product of other permutations with various conditions imposed on these factors (such as cycle type, minimality, transitivity). Questions of this type are intimately linked with the representation theory of the symmetric group (equivalently, the study of its group algebra), and also the geometry of branched coverings. Preprints of my papers can be downloaded below. My graduate work was completed under the supervision of David Jackson at the University of Waterloo (Ontario, Canada). Both my Master's and Ph.D. theses are available upon request.
• An Enumerative Problem Concerning Products of Permutations, Master's Thesis, University of Waterloo, 1998
• Combinatorial Constructions for Transitive Factorizations in the Symmetric Group, Ph.D. Thesis, University of Waterloo, 2004
• On the number of factorizations of a full cycle, J. Combin. Theory Ser. A, 113 (2006), 1549–1554
• Minimal transitive factorizations of permutations into cycles, Canad. J. Math., 61 (2009), 1092–1117
• (with A. Rattan) Minimal factorizations of permutations into star transpositions, Discrete Math., 309 (2009), 1435–1442
• (with A. Rattan) The number of lattice paths below a cyclically shifting boundary, J. Combin. Theory Ser. A, 116 (2009), 499–514
• (with G. Berkolaiko) Inequivalent factorizations of permutations, J. Combin. Theory Ser. A, 140 (2016), 1–37
• (with A. Rattan) Trees, parking functions, and factorizations of full cycles, Europ. J. Combinatorics, 93 (2021)
• (with A. Rattan) k-Factorizations of the full cycle and generalized Mahonian statistics on k-forests, Adv. Appl. Math., 151 (2023)
• (with T. Kosir and M. Mastnak) A proof of the Box Conjecture for commuting pairs of matrices (preprint)
Please contact me if you have any questions or would like further information.
{"url":"https://cs.smu.ca/~jirving/","timestamp":"2024-11-12T07:37:17Z","content_type":"text/html","content_length":"5232","record_id":"<urn:uuid:fc32ce71-55d6-470d-bc22-16e8301033f6>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00038.warc.gz"}
Issue 1: Prevalence and background of FEA
Effectiveness of FEA in the Development Process
Electromagnetic field finite element analysis (FEA) has been rapidly expanding as a tool used in the development process over the last 15 years. The application of FEA varies based on the needs of each development process, but why has FEA expanded so rapidly as a tool for development? In addition, what are the advantages of using FEA in the development process? Impact of FEA on the Design Process will introduce how FEA has affected the development process from multiple perspectives over the next year.
Before FEA
FEA is well known as an analysis method using a computation solver. In recent years, the utilization of specialized computation solvers for FEA in the development process is not unusual. Furthermore, large scale analyses exceeding a million elements or analyses that have multiple cases are being used more frequently. However, this was not the case a quarter of a century ago due to the limitations and cost of the computation solver performance. So how were magnetic designs analyzed at that time, before FEA entered the development process as a design tool? The simple answer is years of intuition and experience. However, the magnetic circuit method has been used systematically as the primary method to analyze the magnetic characteristics of electromechanical machines. The magnetic circuit method is a universal analysis method that is used widely even today, but it was the only analysis method available at the time because the computation solvers were not as sophisticated as they are today.
The magnetic circuit method estimates the magnetic flux produced in the magnetic pathways of the magnetic circuit by replacing the core, coils, and magnets making up a machine with a magnetic circuit composed of the source of magnetomotive force and magnetic resistance. An example of a magnetic circuit overlaid on a motor is indicated in Fig. 1. This method can be calculated by hand without a computation solver because large scale calculation is not required for a simple magnetic circuit. The electromagnetic attractive force between the stator and mover obtained using the simple magnetic circuit method for the solenoid valve shown in Fig. 2 is indicated in Fig. 3. In Fig. 3, the calculation results using FEA are compared to the simple magnetic circuit method to show the similarity of the results regardless of which method is used. There is also software based on the magnetic circuit method. Results can be obtained instantly because an analysis can be performed using very few calculation resources. Therefore, the magnetic circuit method was widely used in the design process before FEA became a standard tool (before FEA). Then why has analysis using FEA become necessary and widely adopted in the development process?
Why is FEA being widely adopted?
The magnetic circuit method is a convenient method that can obtain results simply and easily, but a more authentic magnetic circuit is required to obtain more accurate results. The primary magnetic pathways along which the magnetic flux flows need to be predicted, and the magnetic resistance of the magnetic circuit needs to be evaluated in advance to define the magnetic circuit. This requires the intuition and experience of designers. The magnetic circuit is also strongly dependent on the geometry and material properties. The magnetic circuit that needs to be taken into account becomes complex as the geometry becomes complicated.
A point sequence of each operating point for the physical values and magnetic resistance also becomes necessary to grasp the nonlinear properties of materials. The characteristics of the attractive force indicated in Fig. 3 for a wider operating region are indicated in Fig. 4. The error in the magnetic properties is expressed as the error of attractive force because the magnetic saturation is more severe as the gap between the mover and stator becomes thinner (linear properties are assumed in the magnetic circuit method). The geometrical dependency of the attractive force is more prominent as the gap becomes thinner because the magnetic resistance varies largely with the geometry. This means the application of the magnetic circuit method becomes more difficult with the complexity of the geometry for an analysis target that is dependent on nonlinear characteristics. The geometry often needs to be determined while comprehensively investigating the magnetic saturation and loss distribution of the primary magnetic circuit to achieve the maximum performance of machines, such as motors, during the design process. However, the magnetic circuit method cannot account for distributions such as the magnetic flux density distribution and loss density of a machine by simply replacing and calculating an equivalent circuit for an analysis target. FEA is suitable for these kinds of evaluations. The magnetic flux density distribution inside a motor can be comprehensively evaluated using FEA as indicated in Fig. 5. In addition, physical phenomena reflected in geometry, such as the torque ripple, can be examined in detail easily. FEA performs analyses by modeling the geometry as closely as possible to the actual machine and setting accurate values for the material properties without relying on the intuition and experience of the designer. Therefore, highly accurate results can be obtained by anyone even if they are not an expert in design by following the correct modeling procedures. FEA can even examine the magnetic saturation and eddy current distribution inside of machines that cannot be directly evaluated using measurements. More information can be acquired using FEA than can even be obtained through experimentation. The rapidly increasing performance of computation solvers and the technical advancements of analysis methods in recent years are drastically expanding the prevalence of FEA. This has resulted in the rapid application of FEA in the development process. This issue has provided a broad overview for the prevalence and background of FEA in the design process. The next issue will describe the technical background that has made comprehensive and highly accurate analysis possible using FEA. [JMAG Newsletter Spring, 2011]
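To make the contrast concrete, the single-loop magnetic circuit calculation described above can be sketched in a few lines of code. The following Python snippet is purely illustrative and is not taken from JMAG: the turn count, current, gap, and pole area are invented numbers, the core is treated as infinitely permeable, and the material is assumed linear, which is exactly the simplification that breaks down once saturation matters.

import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space [H/m]

def gap_force(n_turns, current, gap, area):
    # Single-loop magnetic circuit: only the air-gap reluctance is modelled,
    # and the core is assumed infinitely permeable (no saturation).
    reluctance = gap / (MU0 * area)        # air-gap reluctance [A/Wb]
    flux = n_turns * current / reluctance  # flux = magnetomotive force / reluctance [Wb]
    b = flux / area                        # flux density in the gap [T]
    return b ** 2 * area / (2 * MU0)       # attractive force on the mover [N]

# The estimated force grows steeply as the gap closes, which is also where the
# linear assumption (and hence the magnetic circuit method) loses accuracy.
for gap_mm in (2.0, 1.0, 0.5, 0.2):
    force = gap_force(n_turns=500, current=1.0, gap=gap_mm / 1000, area=1e-4)
    print(f"gap = {gap_mm} mm -> force ~ {force:.1f} N")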
{"url":"https://www.jmag-international.com/fea/01/","timestamp":"2024-11-06T07:27:41Z","content_type":"text/html","content_length":"106394","record_id":"<urn:uuid:90667e03-9403-40d7-8cf6-4f2e452b8413>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00082.warc.gz"}
How to create a list of lists of numbers in Haskell? To create a list of lists of numbers in Haskell, you can use the square bracket syntax to define the outer list, and within each element of the outer list, use another set of square brackets to define the inner list. Here's an example:

listOfLists :: [[Int]]
listOfLists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

In this example, listOfLists is a list of lists of integers. The outer list contains three elements, which are the inner lists [1, 2, 3], [4, 5, 6], and [7, 8, 9]. Each inner list contains three elements. You can also create a list of lists using list comprehension. Here's another example:

listOfLists :: [[Int]]
listOfLists = [[i * j | j <- [1..3]] | i <- [1..3]]

In this example, listOfLists contains the multiplication tables from 1 to 3; it evaluates to [[1, 2, 3], [2, 4, 6], [3, 6, 9]]. The outer list comprehension iterates over values of i from 1 to 3, and for each i, the inner list comprehension iterates over values of j from 1 to 3, and multiplies i and j to create the elements of the inner lists.
{"url":"https://devhubby.com/thread/how-to-create-a-list-of-list-of-numbers-in-haskell","timestamp":"2024-11-06T21:58:11Z","content_type":"text/html","content_length":"103104","record_id":"<urn:uuid:e0a86cf6-576a-4708-b052-20fd02e4d6c0>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00005.warc.gz"}
Hi all, I always had some difficulty understanding what removing the effect of a covariate actually means and what happens "under the hood". So I decided to make a small R example that shows what limma::removeBatchEffect does, by "implementing" the method just using base R linear models. It really helped me to understand, so I thought I would share it:

Batch correction while preserving a biological effect

library(limma)  # for removeBatchEffect

# Parameters
num_genes <- 100
num_samples <- 12

# Simulate some data: 100 genes and 12 samples
# Base expression levels
base_expression <- matrix(rnorm(num_genes * num_samples, mean = 10, sd = 3), nrow = num_genes, ncol = num_samples)
rownames(base_expression) <- paste("Gene", 1:num_genes)
colnames(base_expression) <- paste("Sample", 1:num_samples)

# Create batch factors
batch <- factor(rep(c("Batch1", "Batch2", "Batch3"), 4))

# Create a biological factor (treatment)
treatment <- factor(rep(c("Control", "Treatment"), 6))

# Apply consistent batch effects
batch_effects <- c(5, -4, 9) # Different consistent effects for each batch
batch_matrix <- matrix(batch_effects[as.integer(batch)], nrow = num_genes, ncol = num_samples, byrow = TRUE)

# Apply consistent treatment effects
treatment_effects <- c(0, 15) # No effect for control, positive effect for treatment
treatment_matrix <- matrix(treatment_effects[as.integer(treatment)], nrow = num_genes, ncol = num_samples, byrow = TRUE)

expression <- base_expression + batch_matrix + treatment_matrix

# Manual adjustment using lm(), preserving the biological condition
adjusted_expression_manual <- apply(expression, 1, function(gene_expression) {
  full_model <- lm(gene_expression ~ batch + treatment)
  reduced_model <- lm(gene_expression ~ treatment)
  # isolate the component of the gene expression predictions that is due to the batch effect.
  # It is the portion of gene expression that can be attributed solely to the batch, independent of the biological variable (treatment).
  batch_effect <- full_model$fitted.values - reduced_model$fitted.values
  adjusted_expression <- gene_expression - batch_effect # remove the batch effect
  adjusted_expression
})

# Convert adjusted expression to a matrix for easier handling
adjusted_expression_matrix_manual <- matrix(unlist(adjusted_expression_manual), nrow = num_genes, byrow = TRUE)
colnames(adjusted_expression_matrix_manual) <- colnames(expression)
rownames(adjusted_expression_matrix_manual) <- rownames(expression)

# Adjust using limma's removeBatchEffect while preserving biological effects
design_matrix <- model.matrix(~treatment)
adjusted_expression_limma <- removeBatchEffect(expression, batch = batch, design = design_matrix)

# Comparing the two approaches
comparison <- adjusted_expression_matrix_manual - adjusted_expression_limma
print("Difference between Manual and limma Approaches:")

Batch correction if you do not want to preserve anything

# Parameters
num_genes <- 100
num_samples <- 12

# Simulate some data: 100 genes and 12 samples
# Base expression levels
base_expression <- matrix(rnorm(num_genes * num_samples, mean = 10, sd = 3), nrow = num_genes, ncol = num_samples)
rownames(base_expression) <- paste("Gene", 1:num_genes)
colnames(base_expression) <- paste("Sample", 1:num_samples)

# Create batch factors
batch <- factor(rep(c("Batch1", "Batch2", "Batch3"), 4))

# Create a biological factor (treatment)
treatment <- factor(rep(c("Control", "Treatment"), 6))

# Apply consistent batch effects
batch_effects <- c(5, -4, 9) # Different consistent effects for each batch
batch_matrix <- matrix(batch_effects[as.integer(batch)], nrow = num_genes, ncol = num_samples, byrow = TRUE)

# Apply consistent treatment effects
treatment_effects <- c(0, 15) # No effect for control, positive effect for treatment
treatment_matrix <- matrix(treatment_effects[as.integer(treatment)], nrow = num_genes, ncol = num_samples, byrow = TRUE)

expression <- base_expression + batch_matrix + treatment_matrix

# Manual adjustment using lm(), without preserving the biological condition
adjusted_expression_manual <- apply(expression, 1, function(gene_expression) {
  model <- lm(gene_expression ~ batch)
  adjusted_expression <- residuals(model) + mean(gene_expression) # Adding back the mean
  adjusted_expression
})

# Convert adjusted expression to a matrix for easier handling
adjusted_expression_matrix_manual <- matrix(unlist(adjusted_expression_manual), nrow = num_genes, byrow = TRUE)
colnames(adjusted_expression_matrix_manual) <- colnames(expression)
rownames(adjusted_expression_matrix_manual) <- rownames(expression)

# Adjust using limma's removeBatchEffect without preserving biological effects
design_matrix <- model.matrix(~treatment)  # defined but not used in this version
adjusted_expression_limma <- removeBatchEffect(expression, batch = batch,
                                               design = matrix(1, ncol(expression), 1))
# design = matrix(1, ncol(expression), 1) essentially says there are no experimental conditions; all samples belong to the same category

# Print the results
print("Adjusted Expression Matrix (Manual):")
print("Adjusted Expression Matrix (limma):")

# Comparing the two approaches
comparison <- adjusted_expression_matrix_manual - adjusted_expression_limma
print("Difference between Manual and limma Approaches:")

(+1) "implementing" the method just using base R linear models: I find implementing from a basic starting point to be one of the best, if not the best, ways to learn what a procedure does.

Biological and batch effects are known effects that are based on prior knowledge, a variable in the design. If we remove both effects, we get the residuals, i.e. data minus model, the model being both effects.
There might be unexpected information in these residuals. The typical workflow of using limma is to perform an exploratory (i.e. without prior) analysis (e.g. plotMDS) before any modelling, in order to determine what the main effect is that drives the data and to verify that this main effect derives either from the biological conditions or the batch effect, which means that the experiment/data reflects the known effects. This is only a point of view from an analogy to a simple linear regression.

Thank you for the explanation. In most experiments, we are testing the biological effect, for instance, (1) what a treatment intervention is doing or (2) what drives or causes cancer progression. In these situations, we need to preserve the biological effect so that we can estimate the intervention/cancer effect. I wanted to know what the situations are where we do not care about preserving the biological effect, since the above tutorial's "Batch correction if you do not want to preserve anything" eliminates the biological effect as well. What are the cases where this strategy is applicable?

I didn't clearly get your point before. I think the given code/example cannot answer it and I have no answer to it. Batch correcting without the design matrix may produce misleading results because it may remove genuine non-batch effects in the data, if they exist. I do not see any purpose in doing that. If you have already read this thread, you'd better start a new question.
{"url":"https://www.biostars.org/p/9594320/#9604861","timestamp":"2024-11-13T08:27:06Z","content_type":"text/html","content_length":"32706","record_id":"<urn:uuid:e914d71e-ed4e-4354-b11b-760e18f4d4de>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00208.warc.gz"}
How can I find reliable MATLAB programming experts? | Programming Assignment Help How can I find reliable MATLAB programming experts? Post navigation About Me Hi. I’m about to graduate from my ‘work experience’ degree and have been involved with many programming consulting courses since I was 2. I am an experienced InnoDB trainer and have written at least 30+ books about database design and knowledge of IAM/IOMDB. With over 2 decades experience in MySQL and DBM systems I have given training in different subjects like SQL and programming. I have in memory an ongoing blog with interesting discussion about ‘the value of MySQL in my life’ and also how different in different areas of programming I choose which I like. Thanks in advance. Thank you. Why should a company outsource a big data platform for processing a tiny amount of data that is stored at a specific time and frequency? My experience in this case was quite poor and has led me to believe that the majority of these companies (and others) do anything which is not intended for small data flows. I have no knowledge of which should be used for data flow processing. How much is it then relevant in all the cases(s)? Why do you choose it when you are doing work on a large dataset and want to move a small part of it around? Nowadays some companies that have got a better understanding are saying they prefer MySQL and other databases just for this single – single purpose and need best techniques to solve the problems(s). What do you think? What is the role of designing data from scratch for a business Read IT Data from scratch, if it does not feel good to try again, or better yet, keep your head down. I don’t recommend applying for these types of companies, that all companies should consider doing small business but usually they are looking for a source of good data flow that don’t take into account server side costs (e.g. your data needs to be stored up properly.) Tell me why are your companies picking them for this platform? It may not be as good as you think but you can make a difference if you do. This is on its own – would you like to recommend other similar companies or we can discuss your recommendations? I can tell you because what it is, is so many points that I don’t have made with any knowledge. Many of these companies have their database data uploaded to a server, but could you prove a few basic things about them? If you have a database, what level of security does it need to keep in place, what is the data integrity? Are you setting sure that the data you create is correct, and after moving it you should keep the corrected data. And what can you make that fit your needs in the way you want? For example, what can you do to change data that you plan to change while go to this site are running it? I don’t know, I don’t much care if the source should be placed in MySQL as a data store, orHow can I find reliable MATLAB programming experts? I’d like to help, maybe you could put together a call to MATLAB that they can use and/or perhaps code to improve their code. If you missed it, we have so many forum posts on MATLAB that we are looking to get away from. There are only a handful of experts on here, and that’s mainly because they haven’t really been doing a good job of it. Great Teacher Introductions On The Syllabus There are also the folks who seem to be well-versed in MATLAB which I’ve been blessed with several times already from working with VB, and others who have been busy learning some programming skills. 
For the past two weeks I was trying to figure out a way to get a little more out of the background of creating an “old school” example that I already had, when I was on the road for a few years. The work I was doing was about fixing things that wasn’t so stable for older versions of MATLAB, and it was very challenging to sort out some basics. The very basics I required of what’s actually required of all of the examples I’m building I was working with are the ones that get me started, and usually I’ll be working on something that’s all in a little bit more new. I was able to figure out which step I needed to run to get these “old school examples” out of the way and when and that’s got us working with my version of VB, and I’ve been rather bad at this. So I’m going to start by looking at some past connections and ask what we did to try to get my original MATLAB example and keep stuff like this the original. I found out that some of my initial ideas are pretty much this one, but the problem is that, where do I start from? I need to run all the functions above and be ready to jump right into the main functions, so I started figuring out how to run those functions pretty quickly, so I figured I would online programming homework help MATLAB functions which are the ones that actually start and go to the main functions, but where else could I run MATLAB functions? I found a way to do this, by hand, back and forth until I can’t find somewhere where my “first order functions” might start. The idea is that, all around the time I run Matlab, all I need to do is go back and install the Matlab package and put all the functions I need in it, everything is running. It’s a really tight knit process when you want to learn how to do things well, so I’d recommend that you stick with the previous Matlab code, maybe you can do some more experimenting! At each stage of the start-up, you really need to have any modules it requires and add them allHow can I find reliable MATLAB programming experts? Here’s how I found a MATLAB experts website. (If you’re willing to spend a little time looking up various programming resources, try the online tool ‘Matrix Learning’). Help With My Online Class Thanks! I’ll stop here soon. How this site is actually used cannot be measured because of the way its accessed and the ways it publishes statistics. Some math is used as it’s meant to interpret a diagram. However, we make it good business. How can I use this to make a good distribution system? So, here’s a general question: if I feed the 2DXX function a constant values 0 and 1, what matters? What do I need to know? How do I know if a function value of constant value = 0 is really real or real or is it just a numerical representation of an hypothetical’sim’ integral? How do I compute the same for a value of some other constant value 0? If the function is less than the nominal one and greater than another, how do I know it’s all right with this? If it is multiplexed, what is <=< if it is not multiplexed, which one is greater? There’s no absolutely certain way. Sometimes it helps to know the answer to your problem. (you’re referring to MATLAB, so you need to know one thing. The same function name as both the < and the not< are recognized in C-compatible languages, if you want to know your algorithm, see 'MATLAB features' for an overview of more examples.) How does the name of the function for multiuser type work? 
I'll give the code a quick look here: I am referring to the figure of K, a numeric matrix with 2DZ representation of the 3DXX vector. If it is a difference of real and imaginary parts you can refer to it as K3. Do My Online Classes Why does the expression of K3 mean zero at the first input? To get K3 before the other two are compared, you create a mask of 0 and 1, as well as a value of type <, =, >, =, =, =, =<, =, =, = =, bitwise: This is my intention as a means of validation. However, if I call for it as integer multiplication or division, I always get something interesting. For example, if I try to set K3 = 0.5 before multiplications, the notation looks like K3 = (0-0.5)/0, \mathcmd{Btogc:=} (-0.5 / 0) = That means I get K3 = (0.5-0.50)/-0, \mathcmd{Btogc:=} (+.50 / 0) = A simple example of this would be like this: To understand K3, it makes sense that all 3rd argument should be real and not imaginary. The expression of K3 = (-0. Do My Aleks For Me 5 / 0) = is equivalent to K3 = (-0.5 / 0) = Since we are trying to understand the function properly, we just need to be clear about the interpretation of this expression. So, let’s go beyond the simplified interpretation of this expression. In the right-hand-side of our example we have the function. We have all symbols and they are all numeric-relative, so not exactly a data-value, but there is a symbol and the symbol of real value appears quite effectively as it does for an integer, it’s been a number 1 = -0.5 = 2 in MATLAB’s implementation of programming. (this illustration shows 3DXX
{"url":"https://programmingdoc.com/how-can-i-find-reliable-matlab-programming-experts","timestamp":"2024-11-10T01:53:42Z","content_type":"text/html","content_length":"161710","record_id":"<urn:uuid:1f1ceae4-e46e-474c-8777-8cf5567872a1>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00675.warc.gz"}
5 Best Ways to Check Whether a Triangle is Valid if Sides Are Given in Python

Problem Formulation: We often encounter geometrical problems in programming that require validation of shapes based on their properties. In a triangle, a fundamental rule is that the sum of any two sides must be greater than the third. This article explores five methods to check the validity of a triangle in Python when the lengths of the sides are provided. For instance, given side lengths 3, 4, and 5, the program should indicate that it forms a valid triangle.

Method 1: Using Basic Conditional Statements
This traditional approach employs simple conditional checks to validate the triangle's existence by ensuring the sum of any two sides is greater than the remaining one. Here's an example:

def is_valid_triangle(a, b, c):
    if (a + b > c) and (a + c > b) and (b + c > a):
        return True
    return False

print(is_valid_triangle(3, 4, 5))

Output: True

This function is_valid_triangle() takes three arguments representing the sides of a triangle and returns True if they satisfy the triangle inequality theorem, which states that the sum of the lengths of any two sides of a triangle must be greater than the length of the third side. The output confirms that sides of lengths 3, 4, and 5 can form a valid triangle.

Method 2: Using the Sort and Compare Technique
This method involves sorting the sides in ascending order to make the comparison logic simpler and clearer. The largest side is compared to the sum of the other two to check the validity of a triangle. Here's an example:

def is_valid_triangle(a, b, c):
    sides = sorted([a, b, c])
    return sides[0] + sides[1] > sides[2]

print(is_valid_triangle(3, 4, 5))

Output: True

In this code, the list of sides is first sorted to identify the smallest and largest sides without assuming their order. The function is_valid_triangle() compares the sum of the two smaller sides to the largest side, again returning True for a valid triangle.

Method 3: Using a More Pythonic Approach
We can employ tuple unpacking and the all function to check the validity of the triangle more succinctly. This is more in line with typical Python coding practices and readability. Here's an example:

def is_valid_triangle(*sides):
    a, b, c = sorted(sides)
    return all([a + b > c, a + c > b, b + c > a])

print(is_valid_triangle(3, 4, 5))

Output: True

In this method, is_valid_triangle() uses the asterisk (*) to take any number of arguments and sorts them. The sorted sides are then unpacked into variables a, b, and c. The built-in function all() checks all conditions at once, which improves readability and maintains the function's purpose.

Method 4: Using Object-Oriented Programming (OOP)
The OOP method encapsulates the triangle validation logic within a class, enabling more sophisticated data manipulation and the potential for extending the functionality. Here's an example:

class Triangle:
    def __init__(self, a, b, c):
        self.sides = sorted([a, b, c])

    def is_valid(self):
        a, b, c = self.sides
        return a + b > c

triangle = Triangle(3, 4, 5)
print(triangle.is_valid())

Output: True

The Triangle class is created with an initializer that sorts the sides. The method is_valid() checks the triangle validity within the context of an object. This structure allows for more complex geometric calculations and storing additional attributes of the triangle if needed.

Bonus One-Liner Method 5: Using a Lambda Function
The lambda function provides a concise one-liner approach to check the validity of a triangle, largely used for simplicity in cases where a full function definition isn't necessary. Here's an example:

is_valid_triangle = lambda a, b, c: a + b > c and a + c > b and b + c > a
print(is_valid_triangle(3, 4, 5))

Output: True

This one-liner uses a lambda function that takes three arguments and immediately applies the validation conditions. It's a compact version of Method 1 and is best used in simple scripts or within a larger function for clarity.

• Method 1: Basic Conditional Statements. Strengths: Easy to understand and implement. Weaknesses: Can become verbose with more complex conditions.
• Method 2: Sort and Compare Technique. Strengths: Ensures proper comparison by sorting. Weaknesses: Slightly less efficient due to sorting.
• Method 3: Pythonic Approach. Strengths: Improved readability and conciseness with Python-specific features. Weaknesses: May not be as immediately clear to newcomers to Python.
• Method 4: Object-Oriented Programming. Strengths: Easily extendable and maintainable with encapsulation. Weaknesses: Overhead of class structure for a simple task.
• Bonus Method 5: Lambda Function. Strengths: Extremely concise, good for inline use. Weaknesses: Can reduce readability in complex situations.
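As a quick sanity check not included in the original article, any of the function-based versions (Methods 1, 2, 3, and 5) should also reject a degenerate triple whose longest side equals the sum of the other two:

print(is_valid_triangle(1, 2, 3))  # 1 + 2 is not greater than 3, so this prints False

For Method 4, the equivalent call is Triangle(1, 2, 3).is_valid(), which likewise returns False because all five methods encode the same strict triangle inequality.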
{"url":"https://blog.finxter.com/5-best-ways-to-check-whether-a-triangle-is-valid-if-sides-are-given-in-python/","timestamp":"2024-11-11T05:11:21Z","content_type":"text/html","content_length":"71188","record_id":"<urn:uuid:4a9e1222-924e-427d-a3c8-dec4f0357ee8>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00607.warc.gz"}
Direct & Inverse Proportions (Indirect Proportions) with solutions, examples, videos
(A diagram on the original page summarises the steps for solving ratio and direct proportion word problems; examples and step-by-step solutions follow below.)
Direct Proportions/Variations
Two values x and y are directly proportional to each other when the ratio x : y (that is, x ÷ y) stays the same: x and y will either increase together or decrease together by an amount that would not change the ratio. Knowing that the ratio does not change allows you to form an equation to find the value of an unknown variable.
If two pencils cost $1.50, how many pencils can you buy with $9.00? The number of pencils is directly proportional to the cost.
How To Solve Directly Proportional Questions?
Example 1: F is directly proportional to x. When F is 6, x is 4. Find the value of F when x is 5.
Example 2: A is directly proportional to the square of B. When A is 10, B is 2. Find the value of A when B is 3.
How To Use Direct Proportion?
How To Solve Word Problems Using Proportions?
This video shows how to solve word problems by writing a proportion and solving it.
1. A recipe uses 5 cups of flour for every 2 cups of sugar. If I want to make a recipe using 8 cups of flour, how much sugar do I use?
2. A syrup is made by dissolving 2 cups of sugar in 2/3 cups of boiling water. How many cups of sugar should be used for 2 cups of boiling water?
3. A school buys 8 gallons of juice for 100 kids. How many gallons do they need for 175 kids?
Solving More Word Problems Using Proportions
1. On a map, two cities are 2 5/8 inches apart. If 3/8 inches on the map represents 25 miles, how far apart are the cities (in miles)?
2. Solve for the sides of similar triangles using proportions.
Inverse Proportions/Variations Or Indirect Proportions
Two values x and y are inversely proportional to each other when their product xy is a constant (always remains the same). This means that when x increases y will decrease, and vice versa, by an amount such that xy remains the same. Knowing that the product does not change also allows you to form an equation to find the value of an unknown variable.
It takes 4 men 6 hours to repair a road. How long will it take 8 men to do the job if they work at the same rate? The number of men is inversely proportional to the time taken to do the job. Let t be the time taken for the 8 men to finish the job.
4 × 6 = 8 × t
24 = 8t
t = 3 hours
Usually, you will be able to decide from the question whether the values are directly proportional or inversely proportional.
How To Solve Inverse Proportion Questions?
This video shows how to solve inverse proportion questions. It goes through a couple of examples and ends with some practice questions.
Example 1: A is inversely proportional to B. When A is 10, B is 2. Find the value of A when B is 8.
Example 2: F is inversely proportional to the square of x. When F is 20, x is 3. Find the value of F when x is 5.
How To Use Inverse Proportion To Work Out Problems?
How to use a more advanced form of inverse proportion where the use of square numbers is involved.
More examples to explain direct proportions / variations and inverse proportions / variations.
How to solve Inverse Proportion Math Problems on pressure and volume?
In math, an inverse proportion is when an increase in one quantity results in a decrease in another quantity. This video will show how to solve an inverse proportion math problem.
Example: The pressure in a piston is 2.0 atm at 25°C and the volume is 4.0 L.
If the pressure is increased to 6.0 atm at the same temperature, what will be the volume?
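Both kinds of problem reduce to one line of arithmetic, so they are easy to script. The sketch below is a Python illustration (the helper names are ours, not from the page); it reproduces the pencil, road-repair, and piston answers.

```python
# Direct proportion: y / x is constant, so y2 = y1 * (x2 / x1).
def direct(x1, y1, x2):
    return y1 * x2 / x1

# Inverse proportion: x * y is constant, so y2 = x1 * y1 / x2.
def inverse(x1, y1, x2):
    return x1 * y1 / x2

print(direct(1.50, 2, 9.00))    # pencils for $9.00 when 2 cost $1.50 -> 12.0
print(inverse(4, 6, 8))         # hours for 8 men when 4 men take 6 h  -> 3.0
print(inverse(2.0, 4.0, 6.0))   # volume at 6.0 atm when 2.0 atm gives 4.0 L -> ~1.33 L
```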
{"url":"https://www.onlinemathlearning.com/proportions.html","timestamp":"2024-11-14T17:22:22Z","content_type":"text/html","content_length":"45111","record_id":"<urn:uuid:d7ddea5a-fe12-44f4-bda4-2f49a6ce868f>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00652.warc.gz"}
Searching for the Solskjaer bounce

I’ve spent much of the last week helping students with “Hypothesis Testing” as they prepare for their A level exams in the next few months. Fed up of wading through contrived examples, and upon stumbling across perhaps the best headline I’ve read in some time (“Man United regress to the mean after Solskjaer bounce“), I thought I’d use a bit of A level maths to see if the Solskjaer bounce was real or just another Norse myth.

(I’m about to walk through how this may be presented as an A level question, followed by my worked solution, so my less mathematically minded readers may want to skip the next few lines.)

Since taking over as manager of Manchester United in December 2018, Ole Gunnar Solskjaer (OGS) has transformed the club, returning it to winning ways. In the season to date, in all competitions, Man Utd have won 24 out of 43 games. Since OGS was appointed manager, Man Utd have won 15 of their 21 games. Does the data support the theory that OGS has transformed the club at the 5% significance level?

Let p be the probability that Man Utd win a match. They have won 24 out of 43, so the probability of winning is 24/43 = 0.558.

The null hypothesis is that the probability is 0.558; the alternative hypothesis is that the probability of a win (under OGS) is greater.

Let X be the number of wins in a sample of 21 games (the games that OGS was manager). If H₀ is correct then X ~ B(21, 0.558). We’re modelling the data as a Binomial distribution as there are two outcomes: win, don’t win.

Test statistic: X = 15 (the number of games OGS won)

P(X ≥ 15) = 1 – P(X ≤ 14) = 1 – 0.8905 = 0.1095

We use our calculator in Binomial CD mode to find the cumulative probability of up to, and including, 14, then take that away from 1 to get the probability that Man Utd would win 15 or more of their games under the null hypothesis (i.e. OGS has made no difference), which works out to be 0.1095 (or 10.95%).

0.1095 is not less than 0.05. It is sometimes easier to think in percentages, even if we give answers as decimals: 10.95% is not less than 5% (our significance level). So there is insufficient evidence to reject H₀, our null hypothesis.

(Non-mathematicians, start reading from here.)

At the 5% significance level, there is no evidence to support the theory that OGS has transformed the club. Or, in other words, the Solskjaer bounce is probably just a myth. We have shown that even if there had been no change of manager there was a 10.95% chance Man Utd would have won 15 out of the next 21 games. In fact, even if we widened the significance level to 10%, the data still wouldn’t have supported a Solskjaer bounce. Winning 15 out of the 21 games since taking charge, whilst unlikely, was not so unlikely that it couldn’t have been due to chance rather than the genius that is OGS.

My theory: if we’d looked for the bounce a little earlier, we may have found evidence for it – perhaps the bounce peaked at around 15 games and Man Utd are, indeed, now regressing to the mean. As ever, statistics raise as many questions as they answer, but it is good to be able to apply some A level mathematics to answer a “real” question.
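If you would rather not rely on a calculator’s Binomial CD mode, the same tail probability can be computed in a few lines of Python (this assumes SciPy is available; the snippet is not part of the original post):

```python
from scipy.stats import binom

p_null = 24 / 43   # season-long win rate, ~0.558, used as the null hypothesis
n, wins = 21, 15   # games and wins with OGS in charge

# P(X >= 15) for X ~ B(21, p_null); sf(k) returns P(X > k)
p_value = binom.sf(wins - 1, n, p_null)
print(round(p_value, 4))   # ~0.11, so not significant at the 5% level
```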
{"url":"http://www.amathsteacherwrites.co.uk/searching-for-the-solskjaer-bounce/","timestamp":"2024-11-04T22:08:21Z","content_type":"application/xhtml+xml","content_length":"57945","record_id":"<urn:uuid:9ba5ae6b-4a12-4050-971e-078b8a9edee9>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00626.warc.gz"}
Half-life

Number of half-lives | Fraction remaining | Percentage remaining
0   | 1/1    | 100
1   | 1/2    | 50
2   | 1/4    | 25
3   | 1/8    | 12.5
4   | 1/16   | 6.25
5   | 1/32   | 3.125
6   | 1/64   | 1.563
7   | 1/128  | 0.781
... | ...    | ...
n   | 1/2^n  | 100/(2^n)

Half-life (t½) is the time required for a quantity to fall to half its value as measured at the beginning of the time period. In physics, it is typically used to describe a property of radioactive decay, but may be used to describe any quantity which follows an exponential decay. The original term, dating to Ernest Rutherford's discovery of the principle in 1907, was "half-life period", which was shortened to "half-life" in the early 1950s.

Half-life is used to describe a quantity undergoing exponential decay, and is constant over the lifetime of the decaying quantity. It is a characteristic unit for the exponential decay equation. The term "half-life" may generically be used to refer to any period of time in which a quantity falls by half, even if the decay is not exponential. For a general introduction and description of exponential decay, see exponential decay. For a general introduction and description of non-exponential decay, see rate law. The converse of half-life is doubling time. The table above shows the reduction of a quantity in terms of the number of half-lives elapsed.

Probabilistic nature of half-life

A half-life usually describes the decay of discrete entities, such as radioactive atoms, which have unstable nuclei. In that case, it does not work to use the definition "half-life is the time required for exactly half of the entities to decay". For example, if there is just one radioactive atom with a half-life of one second, there will not be "one-half of an atom" left after one second. There will be either zero atoms left or one atom left, depending on whether or not that atom happened to decay.

Instead, the half-life is defined in terms of probability. It is the time when the expected value of the number of entities that have decayed is equal to half the original number. For example, one can start with a single radioactive atom, wait its half-life, and then check whether or not it has decayed. Perhaps it did, but perhaps it did not. But if this experiment is repeated again and again, it will be seen that - on average - it decays within the half-life 50% of the time.

In some experiments (such as the synthesis of a superheavy element), there is in fact only one radioactive atom produced at a time, with its lifetime individually measured. In this case, statistical analysis is required to infer the half-life. In other cases, a very large number of identical radioactive atoms decay in the measured time range. In this case, the law of large numbers ensures that the number of atoms that actually decay is approximately equal to the number of atoms that are expected to decay. In other words, with a large enough number of decaying atoms, the probabilistic aspects of the process could be neglected. There are various simple exercises that demonstrate probabilistic decay, for example involving flipping coins or running a statistical computer program. For example, the image on the right is a simulation of many identical atoms undergoing radioactive decay.
Note that after one half-life there are not exactly one-half of the atoms remaining, only approximately, because of the random variation in the process. However, with more atoms (right boxes), the overall decay is smoother and less random-looking than with fewer atoms (left boxes), in accordance with the law of large Formulas for half-life in exponential decay An exponential decay process can be described by any of the following three equivalent formulas: $N(t) = N_0 \left(\frac {1}{2}\right)^{t/t_{1/2}}$ $N(t) = N_0 e^{-t/\tau} \,$ $N(t) = N_0 e^{-\lambda t} \,$ □ N[0] is the initial quantity of the substance that will decay (this quantity may be measured in grams, moles, number of atoms, etc.), □ N(t) is the quantity that still remains and has not yet decayed after a time t, □ t[1/2] is the half-life of the decaying quantity, □ τ is a positive number called the mean lifetime of the decaying quantity, □ λ is a positive number called the decay constant of the decaying quantity. The three parameters $t_{1/2}$, $\tau$, and λ are all directly related in the following way: $t_{1/2} = \frac{\ln (2)}{\lambda} = \tau \ln(2)$ where ln(2) is the natural logarithm of 2 (approximately 0.693). Click "show" to see a detailed derivation of the relationship between half-life, decay time, and decay constant. Start with the three equations $N(t) = N_0 \left(\frac {1}{2}\right)^{t/t_{1/2}}$ $N(t) = N_0 e^{-t/\tau}$ $N(t) = N_0 e^{-\lambda t}$ We want to find a relationship between $t_{1/2}$, $\tau$, and λ, such that these three equations describe exactly the same exponential decay process. Comparing the equations, we find the following condition: $\left(\frac {1}{2}\right)^{t/t_{1/2}} = e^{-t/\tau} = e^{-\lambda t}$ Next, we'll take the natural logarithm of each of these quantities. $\ln\left(\left(\frac {1}{2}\right)^{t/t_{1/2}}\right) = \ln(e^{-t/\tau}) = \ln(e^{-\lambda t})$ Using the properties of logarithms, this simplifies to the following: $(t/t_{1/2})\ln \left(\frac {1}{2}\right) = (-t/\tau)\ln(e) = (-\lambda t)\ln(e)$ Since the natural logarithm of e is 1, we get: $(t/t_{1/2})\ln \left(\frac {1}{2}\right) = -t/\tau = -\lambda t$ Canceling the factor of t and plugging in $\ln\left(\frac {1}{2}\right)=-\ln 2$, the eventual result is: $t_{1/2} = \tau \ln 2 = \frac{\ln 2}{\lambda}.$ By plugging in and manipulating these relationships, we get all of the following equivalent descriptions of exponential decay, in terms of the half-life: $N(t) = N_0 \left(\frac {1}{2}\right)^{t/t_{1/2}} = N_0 2^{-t/t_{1/2}} = N_0 e^{-t\ln(2)/t_{1/2}}$ $t_{1/2} = t/\log_2(N_0/N(t)) = t/(\log_2(N_0)-\log_2(N(t))) = (\log_{2^t}(N_0/N(t)))^{-1} = t\ln(2)/\ln(N_0/N(t))$ Regardless of how it's written, we can plug into the formula to get • $N(0)=N_0$ as expected (this is the definition of "initial quantity") • $N(t_{1/2})=\left(\frac {1}{2}\right)N_0$ as expected (this is the definition of half-life) • $\lim_{t\to \infty} N(t) = 0$, i.e. amount approaches zero as t approaches infinity as expected (the longer we wait, the less remains). Decay by two or more processes Some quantities decay by two exponential-decay processes simultaneously. 
In this case, the actual half-life T½ can be related to the half-lives t₁ and t₂ that the quantity would have if each of the decay processes acted in isolation:

$\frac{1}{T_{1/2}} = \frac{1}{t_1} + \frac{1}{t_2}$

For three or more processes, the analogous formula is:

$\frac{1}{T_{1/2}} = \frac{1}{t_1} + \frac{1}{t_2} + \frac{1}{t_3} + \cdots$

For a proof of these formulas, see Decay by two or more processes.

There is a half-life describing any exponential-decay process. For example:

• The current flowing through an RC circuit or RL circuit decays with a half-life of $RC\ln(2)$ or $\ln(2)L/R$, respectively. For this example, the term half time might be used instead of "half life", but they mean the same thing.
• In a first-order chemical reaction, the half-life of the reactant is $\ln(2)/\lambda$, where λ is the reaction rate constant. Equivalently, the half-life of a species is the time it takes for the concentration of the substance to fall to half of its initial value.
• In radioactive decay, the half-life is the length of time after which there is a 50% chance that an atom will have undergone nuclear decay. It varies depending on the atom type and isotope, and is usually determined experimentally. See List of nuclides.

Half-life in non-exponential decay

The decay of many physical quantities is not exponential—for example, the evaporation of water from a puddle, or (often) the chemical reaction of a molecule. In such cases, the half-life is defined the same way as before: as the time elapsed before half of the original quantity has decayed. However, unlike in an exponential decay, the half-life depends on the initial quantity, and the prospective half-life will change over time as the quantity decays.

As an example, the radioactive decay of carbon-14 is exponential with a half-life of 5730 years. A quantity of carbon-14 will decay to half of its original amount (on average) after 5730 years, regardless of how big or small the original quantity was. After another 5730 years, one-quarter of the original will remain. On the other hand, the time it will take a puddle to half-evaporate depends on how deep the puddle is. Perhaps a puddle of a certain size will evaporate down to half its original volume in one day. But on the second day, there is no reason to expect that one-quarter of the puddle will remain; in fact, it will probably be much less than that. This is an example where the half-life reduces as time goes on. (In other non-exponential decays, it can increase instead.)

The decay of a mixture of two or more materials which each decay exponentially, but with different half-lives, is not exponential. Mathematically, the sum of two exponential functions is not a single exponential function. A common example of such a situation is the waste of nuclear power stations, which is a mix of substances with vastly different half-lives. Consider a sample containing a rapidly decaying element A, with a half-life of 1 second, and a slowly decaying element B, with a half-life of one year. After a few seconds, almost all atoms of element A have decayed after repeated halving of the initial total number of atoms; but very few of the atoms of element B will have decayed yet, as only a tiny fraction of a half-life has elapsed. Thus, the mixture taken as a whole does not decay by halves.
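A short numerical sketch (plain Python, written for this summary rather than taken from the article) makes the point concrete: each component decays exponentially, but the 50/50 mixture does not.

```python
def remaining(t, half_life):
    # Fraction of a single exponentially decaying substance left after time t.
    return 0.5 ** (t / half_life)

half_a = 1.0                    # element A: half-life 1 second
half_b = 365 * 24 * 3600.0      # element B: half-life roughly one year, in seconds

for t in (1.0, 10.0, 100.0):
    mixture = 0.5 * remaining(t, half_a) + 0.5 * remaining(t, half_b)
    print(f"t = {t:6.1f} s   fraction remaining = {mixture:.4f}")
# ~0.75 after 1 s, ~0.50 after 10 s, and still ~0.50 after 100 s:
# the apparent half-life keeps stretching, so the mixture is not exponential.
```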
Half-life in biology and pharmacology

A biological half-life or elimination half-life is the time it takes for a substance (drug, radioactive nuclide, or other) to lose one-half of its pharmacologic, physiologic, or radiological activity. In a medical context, the half-life may also describe the time that it takes for the concentration in blood plasma of a substance to reach one-half of its steady-state value (the "plasma half-life").

The relationship between the biological and plasma half-lives of a substance can be complex, due to factors including accumulation in tissues, active metabolites, and receptor interactions. While a radioactive isotope decays almost perfectly according to so-called "first order kinetics", where the rate constant is a fixed number, the elimination of a substance from a living organism usually follows more complex chemical kinetics.

For example, the biological half-life of water in a human being is about 7 to 14 days, though this can be altered by his/her behaviour. The biological half-life of cesium in human beings is between one and four months. This can be shortened by feeding the person Prussian blue, which acts as a solid ion exchanger that absorbs the cesium while releasing potassium ions in their place.
{"url":"https://ftp.worldpossible.org/endless/eos-rachel/RACHEL/RACHEL/modules/wikipedia_for_schools/wp/h/Half-life.htm","timestamp":"2024-11-06T15:18:29Z","content_type":"text/html","content_length":"25260","record_id":"<urn:uuid:3b4c1767-ef5b-45db-874a-a5d0ad79b32c>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00817.warc.gz"}
Filtering Data By Custom Fiscal Years And Quarters Using Calculated Columns In Power BI There can be times when your date tables won’t have the custom fiscal years and quarters that you require as filters for your analysis. In this tutorial, we are going to discuss how you can filter your data by financial or fiscal years and quarters using calculated columns in Power BI. You may watch the full video of this tutorial at the bottom of this blog. I’ve already seen inquiries regarding this topic a couple of times in the Enterprise DNA Forum. Fiscal Year In Power BI A fiscal year, also known as a financial year is a one-year period chosen by a company to report its financial information or finances. These finances can be referred to as the past year’s revenue, costs, and even profit margin. When the period of a year starts on January 1 and ends on December 31, the company uses the calendar year as its fiscal year. Any other start date besides January 1 indicates a fiscal year that is not a calendar year. Filtering your data by fiscal years makes it easier to see how your business has done for the whole year. There are few ways to do the filtering. However, in this tutorial, we’ll focus on the simplest way that you can quickly implement this in your date tables whenever you require it. All we have to do is to use a current date table that we may already have inside our model. If you want to learn how to create a date table then check out the link below. Create A Detailed Date Table Fast Discussing The Main Problem Initially, the MonthName column is arranged based on the calendar year. Now, I’d like to sort this by a custom fiscal year then make July the first month and June the last month of the financial/fiscal year. In this instance, what we primarily need to do is to create a new calculated column that will serve as our month sorting column. Creating Calculated Columns In Power BI A calculated column is an extension of a table using a DAX formula that is evaluated for each row. These calculated columns are computed based on data that has already been loaded into your data When you write a calculated column formula, it is automatically applied to the whole table and individually evaluates each row. In this current issue that we need to address, there’s no need for us to create a new date table because we can simply use the current one. One example of a calculated column that we have created in our current date table is the YearWeekSort column. To create a calculated column, just click the New Column option under the Modeling Tab. Alternatively, you can right-click on the table and select New column. After clicking the New Column option, the new calculated column will appear. Creating The Fiscal Month Number Measure Subsequently, we can create the measure for that newly-added column. Just click the column and the formula bar will appear. This is where we will specify the formula/measure for the calculated column that we have just created. We will refer to this measure as the Fiscal month number. Now, we’ll do a simple IF logic for the Fiscal month number. The primary field that we need to consider for the IF logic is the Dates[MonthOfYear]. Then, we need to evaluate if the value of the MonthOfYear column is greater than six. If the condition is true, we’ll subtract 6 from the value of the MonthOfYear column. If not, we will add 6 to the value of the MonthOfYear column instead. To further analyze the data, think of January as the initial value of MonthOfYear which is numerically equal to 1. 
And 1 is definitely not greater than 6. In that case, we will add 6 to the value of MonthOfYear which will equal 7. And that would make January as the seventh month and July as the first month. After setting the formula, you can go to the Data view and check the highlighted column. As you can see, we now have a month number that we can use to sort the months. You can also see the new column in the Fields list. Sorting The MonthName Column By Fiscal Month Number To verify if our formula is correct, select the MonthName column in our date table. We will then sort this column by Fiscal Month Number. To do this, just select the Sort By Column option from the Modeling Tab then choose Fiscal Month Number. After that, go to the Report View and you will see that our months are now from July to June. This validates that our Fiscal Month Number measure is working accurately. Creating The Fiscal Quarter Number Measure Now that we have learned how to filter data by a fiscal year using calculated columns in Power BI the next thing that we need to learn is how to identify the fiscal year quarters, so we need to implement another sorting formula. Let’s create a new calculated column where we can implement the measure for fiscal or financial quarter. We will refer to this as the Fiscal Quarter Number. The first thing we need to do is to type an opening and closing parenthesis. Inside the parenthesis we need to get the sum of 2 and the value of the Fiscal Month Number. Then divide the result by 3. Now, if you check the date table, you’ll see that it has produced decimal points in the Fiscal Month Number column. Looking further into details, 1 is the initial value of the Fiscal Month Number. If we add 2 to 1, the sum will be 3. Then, the sum will be divided by 3, which will produce 1 as the quotient. As a result, 1 will be the equivalent Fiscal Quarter value of the first fiscal month number, 1.33 for the second, and 1.66 for the third month. To round the value down to the nearest integer, we need to include INT, which represents integer, in our formula. Then, enclose the logic inside a parenthesis. Let’s now check out the result of our new measure. As you can see, the corresponding value of the first to third month in the Fiscal Quarter Number column is 1. Then the fourth to sixth month’s Fiscal Quarter Number value is 2 and so on. This validates the accuracy of our Fiscal Quarter Number measure by setting 3 months for each quarter. Creating The Fiscal Quarter Column Now, let’s add one more column that we will refer to as the Fiscal Quarter. What we’re going to do here is to concatenate the letter “Q” to every value of the Fiscal Quarter Number. As a result, we should have this new column for the Fiscal Quarter. This can also be used as a customized graphic filter, also known as a slicer for our visualizations in the Report View. ***** Related Links***** Sorting Dates By Financial Year In Power BI Calculate Financial Year To Date (FYTD) Sales In Power BI Using DAX How To Create Custom Financial Year Quarters – Power BI If your report is in anything but a calendar year, implementing this type of logic inside your date table is going to be absolutely essential to get the correct numbers and figures showcased in your Making sure that you can dynamically filter by financial years is very important when analyzing any type of financial results within organizations. The key point here is to make sure that the previously discussed logic or formula is integrated into the date table in your data model. 
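For readers who want to prototype the same logic outside Power BI, here is a minimal Python sketch of the three calculated columns described above (the article's formulas are in DAX; the function names here are ours and purely illustrative):

```python
def fiscal_month_number(month_of_year: int) -> int:
    # July (7) becomes fiscal month 1 and June (6) becomes fiscal month 12,
    # mirroring IF(MonthOfYear > 6, MonthOfYear - 6, MonthOfYear + 6).
    return month_of_year - 6 if month_of_year > 6 else month_of_year + 6

def fiscal_quarter(month_of_year: int) -> str:
    fm = fiscal_month_number(month_of_year)
    return f"Q{(fm + 2) // 3}"   # INT((fiscal month + 2) / 3), prefixed with "Q"

for month in range(1, 13):
    print(month, fiscal_month_number(month), fiscal_quarter(month))
# e.g. July (7) -> fiscal month 1, Q1; January (1) -> fiscal month 7, Q3.
```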
By using calculated columns in Power BI, you can integrate your own calculations inside your date table, and the filtering becomes dynamically seamless. Furthermore, enriching your data model with your own calculations will make your reports infinitely more powerful. Good luck with reviewing this technique.
{"url":"https://blog.enterprisedna.co/filter-your-data-by-unique-financial-years-quarters-power-bi-modeling-technique/","timestamp":"2024-11-13T12:13:30Z","content_type":"text/html","content_length":"519208","record_id":"<urn:uuid:db803214-a77e-4045-8ae3-b23e68f0cbc4>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00679.warc.gz"}
Formalizing predicate boundedness Informally, we defined a predicate F to be bounded if the entities for which F is true can be enclosed in a reasonably "small" class. Of course, for this definition to be of any use we have to clearly specify what we mean by "small", or at least use our intuition to propose some axioms for this concept: • If A and B are small, A U B is small. • If A and B are small, A ∩ B is small. • If A is small and B is included in A, B is small. • If A is small then −A, the class of all entities outside A, is not small. In the definition of −A given above we have assumed that we are working inside some universal class comprising any conceivable entity. The axioms for "small" are far from categorical. We give a few models for this notion: • If we decide to work within a mathematical class theory such as NBG, there is an easy definition for "small": A class is small if it is a set (not all classes are sets). • We can take "small" as meaning "finite", provided the universal class is infinite, or as having cardinality less than or equal to some fixed infinite cardinal κ, provided the universal class is larger than κ. • Suppose we have a fixed set of "primitive" predicates F[i] with domains D[i] such that UD[i] is smaller than the universal class. We can define the class of small sets as the least class S comprising all D[i] that is closed under finite union, finite intersection and comprehension: if X belongs to S and Y is a subset of X, Y belongs to S. This definition of "small" can be seen as a formal model for Quine's notion of natural kinds. This is probably as far as we can get with respect to defining what "small" means in the context of predicate boundedness.
{"url":"http://bannalia.blogspot.com/2008/07/formalizing-predicate-boundedness.html","timestamp":"2024-11-01T20:11:17Z","content_type":"application/xhtml+xml","content_length":"67138","record_id":"<urn:uuid:e91ba764-dcbc-4de1-9256-3917dc1d8084>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00126.warc.gz"}
Douglas College Physics 1104 Custom Textbook – Winter and Summer 2020 Chapter 15 Electric Current, Resistance, and Ohm’s Law 15.4 Electric Power and Energy • Calculate the power dissipated by a resistor and power supplied by a power supply. • Calculate the cost of electricity under various circumstances. Power in Electric Circuits Power is associated by many people with electricity. Knowing that power is the rate of energy use or energy conversion, what is the expression for electric power? Power transmission lines might come to mind. We also think of lightbulbs in terms of their power ratings in watts. Let us compare a 25-W bulb with a 60-W bulb. (See Figure 1(a).) Since both operate on the same voltage, the 60-W bulb must draw more current to have a greater power rating. Thus the 60-W bulb’s resistance must be lower than that of a 25-W bulb. If we increase voltage, we also increase power. For example, when a 25-W bulb that is designed to operate on 120 V is connected to 240 V, it briefly glows very brightly and then burns out. Precisely how are voltage, current, and resistance related to electric power? Figure 1. (a) Which of these lightbulbs, the 25-W bulb (upper left) or the 60-W bulb (upper right), has the higher resistance? Which draws more current? Which uses the most energy? Can you tell from the color that the 25-W filament is cooler? Is the brighter bulb a different color and if so why? (credits: Dickbauch, Wikimedia Commons; Greg Westfall, Flickr) (b) This compact fluorescent light (CFL) puts out the same intensity of light as the 60-W bulb, but at 1/4 to 1/10 the input power. (credit: dbgg1979, Flickr) Electric energy depends on both the voltage involved and the charge moved. This is expressed most simply as [latex]\boldsymbol{\textbf{PE} = qV}[/latex], where [latex]\boldsymbol{q}[/latex] is the charge moved and [latex]\boldsymbol{V}[/latex] is the voltage (or more precisely, the potential difference the charge moves through). Power is the rate at which energy is moved, and so electric power [latex]\boldsymbol{P =}[/latex] [latex]\boldsymbol{\frac{PE}{t}}[/latex] [latex]\boldsymbol{=}[/latex] [latex]\boldsymbol{\frac{qV}{t}}[/latex]. Recognizing that current is [latex]\boldsymbol{I = q/t}[/latex] (note that [latex]\boldsymbol{\Delta t=t}[/latex] here), the expression for power becomes [latex]\boldsymbol{P = IV}.[/latex] Electric power ([latex]\boldsymbol{P}[/latex]) is simply the product of current times voltage. Power has familiar units of watts. Since the SI unit for potential energy (PE) is the joule, power has units of joules per second, or watts. Thus, [latex]\boldsymbol{1 \;\textbf{A} \cdot \;\textbf{V} = 1 \;\textbf{W}}[/latex]. For example, cars often have one or more auxiliary power outlets with which you can charge a cell phone or other electronic devices. These outlets may be rated at 20 A, so that the circuit can deliver a maximum power [latex]\boldsymbol{P = IV = (20 \;\textbf{A})(12 \;\textbf {V}) = 240 \;\textbf{W}}[/latex]. In some applications, electric power may be expressed as volt-amperes or even kilovolt-amperes ( [latex]\boldsymbol{1 \;\textbf{kA} \cdot \;\textbf{V} = 1 \;\textbf{kW}}[/latex]). To see the relationship of power to resistance, we combine Ohm’s law with [latex]\boldsymbol{P = IV}[/latex]. Substituting [latex]\boldsymbol{I = V/R}[/latex] gives [latex]\boldsymbol{P = (V/R)V = V^ 2/R}[/latex]. Similarly, substituting [latex]\boldsymbol{V = IR}[/latex] gives [latex]\boldsymbol{P = I(IR) = I^2R}[/latex]. 
Three expressions for electric power are listed together here for [latex]\boldsymbol{P = IV}[/latex] [latex]\boldsymbol{P =}[/latex] [latex]\boldsymbol{\frac{V^2}{R}}[/latex] [latex]\boldsymbol{P = I^2R}.[/latex] Note that the first equation is always valid, whereas the other two can be used only for resistors. In a simple circuit, with one voltage source and a single resistor, the power supplied by the voltage source and that dissipated by the resistor are identical. (In more complicated circuits, [latex]\boldsymbol{P}[/latex] can be the power dissipated by a single device and not the total power in the circuit.) Different insights can be gained from the three different expressions for electric power. For example, [latex]\boldsymbol{P = V^2/R}[/latex] implies that the lower the resistance connected to a given voltage source, the greater the power delivered. Furthermore, since voltage is squared in [latex]\boldsymbol{P = V^2/R}[/latex], the effect of applying a higher voltage is perhaps greater than expected. Thus, when the voltage is doubled to a 25-W bulb, its power nearly quadruples to about 100 W, burning it out. If the bulb’s resistance remained constant, its power would be exactly 100 W, but at the higher temperature its resistance is higher, too. Example 1: Calculating Power Dissipation and Current: Hot and Cold Power (a) Consider the examples given in Chapter 20.2 Ohm’s Law: Resistance and Simple Circuits and Chapter 20.3 Resistance and Resistivity. Then find the power dissipated by the car headlight in these examples, both when it is hot and when it is cold. (b) What current does it draw when cold? Strategy for (a) For the hot headlight, we know voltage and current, so we can use [latex]\boldsymbol{P = IV}[/latex] to find the power. For the cold headlight, we know the voltage and resistance, so we can use [latex]\boldsymbol{P = V^2/R}[/latex] to find the power. Solution for (a) Entering the known values of current and voltage for the hot headlight, we obtain [latex]\boldsymbol{P = IV = (2.50 \;\textbf{A})(12.0 \;\textbf{V}) = 30.0 \;\textbf{W}}.[/latex] The cold resistance was [latex]\boldsymbol{0.350 \;\Omega}[/latex], and so the power it uses when first switched on is [latex]\boldsymbol{P =}[/latex] [latex]\boldsymbol{\frac{V^2}{R}}[/latex] [latex]\boldsymbol{=}[/latex] [latex]\boldsymbol{\frac{(12.0 \;\textbf{V})^2}{0.350 \;\Omega}}[/latex] [latex]\boldsymbol{= 411 \;\textbf{W}} .[/latex] Discussion for (a) The 30 W dissipated by the hot headlight is typical. But the 411 W when cold is surprisingly higher. The initial power quickly decreases as the bulb’s temperature increases and its resistance Strategy and Solution for (b) The current when the bulb is cold can be found several different ways. We rearrange one of the power equations, [latex]\boldsymbol{P = I^2R}[/latex], and enter known values, obtaining [latex]\boldsymbol{I =}[/latex] [latex]\sqrt{\frac{P}{R}}[/latex] [latex]\boldsymbol{=}[/latex] [latex]\sqrt{\frac{411 \;\textbf{W}}{0.350 \;\Omega}}[/latex] [latex]\boldsymbol{= 34.3 \;\textbf{A}}[/ Discussion for (b) The cold current is remarkably higher than the steady-state value of 2.50 A, but the current will quickly decline to that value as the bulb’s temperature increases. Most fuses and circuit breakers (used to limit the current in a circuit) are designed to tolerate very high currents briefly as a device comes on. In some cases, such as with electric motors, the current remains high for several seconds, necessitating special “slow blow” fuses. 
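As a quick check, the headlight numbers from Example 1 can be reproduced with a few lines of Python (the variable names are ours, not the textbook's):

```python
V = 12.0        # volts
I_hot = 2.50    # amps through the hot filament
R_cold = 0.350  # ohms for the cold filament

P_hot = I_hot * V                   # P = I V        -> 30.0 W
P_cold = V ** 2 / R_cold            # P = V^2 / R    -> ~411 W
I_cold = (P_cold / R_cold) ** 0.5   # from P = I^2 R -> ~34.3 A

print(P_hot, round(P_cold), round(I_cold, 1))
```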
The Cost of Electricity The more electric appliances you use and the longer they are left on, the higher your electric bill. This familiar fact is based on the relationship between energy and power. You pay for the energy used. Since [latex]\boldsymbol{P=E/t}[/latex], we see that [latex]\boldsymbol{E = Pt}[/latex] is the energy used by a device using power [latex]\boldsymbol{P}[/latex] for a time interval [latex]\boldsymbol{t}[/latex]. For example, the more lightbulbs burning, the greater [latex]\boldsymbol{P} [/latex] used; the longer they are on, the greater [latex]\boldsymbol{t}[/latex] is. The energy unit on electric bills is the kilowatt-hour ([latex]\boldsymbol{\textbf{kW} \cdot \;\textbf{h}}[/ latex]), consistent with the relationship [latex]\boldsymbol{E = Pt}[/latex]. It is easy to estimate the cost of operating electric appliances if you have some idea of their power consumption rate in watts or kilowatts, the time they are on in hours, and the cost per kilowatt-hour for your electric utility. Kilowatt-hours, like all other specialized energy units such as food calories, can be converted to joules. You can prove to yourself that [latex]\boldsymbol{1 \;\textbf{kW} \cdot \;\textbf{h} = 3.6 \times 10^6 \;\textbf{J}}[/latex]. The electrical energy ([latex]\boldsymbol{E}[/latex]) used can be reduced either by reducing the time of use or by reducing the power consumption of that appliance or fixture. This will not only reduce the cost, but it will also result in a reduced impact on the environment. Improvements to lighting are some of the fastest ways to reduce the electrical energy used in a home or business. About 20% of a home’s use of energy goes to lighting, while the number for commercial establishments is closer to 40%. Fluorescent lights are about four times more efficient than incandescent lights—this is true for both the long tubes and the compact fluorescent lights (CFL). (See Figure 1(b).) Thus, a 60-W incandescent bulb can be replaced by a 15-W CFL, which has the same brightness and color. CFLs have a bent tube inside a globe or a spiral-shaped tube, all connected to a standard screw-in base that fits standard incandescent light sockets. (Original problems with color, flicker, shape, and high initial investment for CFLs have been addressed in recent years.) The heat transfer from these CFLs is less, and they last up to 10 times longer. The significance of an investment in such bulbs is addressed in the next example. New white LED lights (which are clusters of small LED bulbs) are even more efficient (twice that of CFLs) and last 5 times longer than CFLs. However, their cost is still high. Making Connections: Energy, Power, and Time The relationship [latex]\boldsymbol{E = Pt}[/latex] is one that you will find useful in many different contexts. The energy your body uses in exercise is related to the power level and duration of your activity, for example. The amount of heating by a power source is related to the power level and time it is applied. Even the radiation dose of an X-ray image is related to the power and time of Example 2: Calculating the Cost Effectiveness of Compact Fluorescent Lights (CFL) If the cost of electricity in your area is 12 cents per kWh, what is the total cost (capital plus operation) of using a 60-W incandescent bulb for 1000 hours (the lifetime of that bulb) if the bulb cost 25 cents? 
(b) If we replace this bulb with a compact fluorescent light that provides the same light output, but at one-quarter the wattage, and which costs $1.50 but lasts 10 times longer (10,000 hours), what will that total cost be? To find the operating cost, we first find the energy used in kilowatt-hours and then multiply by the cost per kilowatt-hour. Solution for (a) The energy used in kilowatt-hours is found by entering the power and time into the expression for energy: [latex]\boldsymbol{E = Pt = (60 \;\textbf{W})(1000 \;\textbf{h}) = 60,000 \;\textbf{W} \cdot \;\textbf{h}}.[/latex] In kilowatt-hours, this is [latex]\boldsymbol{E= 60.0 \;\textbf{kW} \cdot \;\textbf{h}}.[/latex] Now the electricity cost is [latex]\boldsymbol{\textbf{cost} = (60.0 \;\textbf{kW} \cdot \textbf{h})(\$0.12 / \;\textbf{kW} \cdot \textbf{h}) = \$7.20}.[/latex] The total cost will be $7.20 for 1000 hours (about one-half year at 5 hours per day). Solution for (b) Since the CFL uses only 15 W and not 60 W, the electricity cost will be $7.20/4 = $1.80. The CFL will last 10 times longer than the incandescent, so that the investment cost will be 1/10 of the bulb cost for that time period of use, or 0.1($1.50) = $0.15. Therefore, the total cost will be $1.95 for 1000 hours. Therefore, it is much cheaper to use the CFLs, even though the initial investment is higher. The increased cost of labor that a business must include for replacing the incandescent bulbs more often has not been figured in here. Making Connections: Take-Home Experiment—Electrical Energy Use Inventory 1) Make a list of the power ratings on a range of appliances in your home or room. Explain why something like a toaster has a higher rating than a digital clock. Estimate the energy consumed by these appliances in an average day (by estimating their time of use). Some appliances might only state the operating current. If the household voltage is 120 V, then use [latex]\boldsymbol{P = IV}[/latex]. 2) Check out the total wattage used in the rest rooms of your school’s floor or building. (You might need to assume the long fluorescent lights in use are rated at 32 W.) Suppose that the building was closed all weekend and that these lights were left on from 6 p.m. Friday until 8 a.m. Monday. What would this oversight cost? How about for an entire year of weekends? Section Summary • Electric power [latex]\boldsymbol{P}[/latex] is the rate (in watts) that energy is supplied by a source or dissipated by a device. • Three expressions for electrical power are [latex]\boldsymbol{P = IV},[/latex] [latex]\boldsymbol{P =}[/latex] [latex]\boldsymbol{\frac{V^2}{R}}[/latex], [latex]\boldsymbol{P = I^2 R}.[/latex] • The energy used by a device with a power [latex]\boldsymbol{P}[/latex] over a time [latex]\boldsymbol{t}[/latex] is [latex]\boldsymbol{E = Pt}[/latex]. Conceptual Questions 1: Why do incandescent lightbulbs grow dim late in their lives, particularly just before their filaments break? 2: The power dissipated in a resistor is given by [latex]\boldsymbol{P = V^2/R}[/latex], which means power decreases if resistance increases. Yet this power is also given by [latex]\boldsymbol{P = I^ 2R}[/latex], which means power increases if resistance increases. Explain why there is no contradiction here. Probles & Exercises 1: What is the power of a [latex]\boldsymbol{1.00 \times 10^2 \;\textbf{MV}}[/latex] lightning bolt having a current of [latex]\boldsymbol{2.00 \times 10^4 \;\textbf{A}}[/latex]? 
2: What power is supplied to the starter motor of a large truck that draws 250 A of current from a 24.0-V battery hookup? 3: A charge of 4.00 C of charge passes through a pocket calculator’s solar cells in 4.00 h. What is the power output, given the calculator’s voltage output is 3.00 V? (See Figure 2.) Figure 2. The strip of solar cells just above the keys of this calculator convert light to electricity to supply its energy needs. (credit: Evan-Amos, Wikimedia Commons) 4: How many watts does a flashlight that has [latex]\boldsymbol{6.00 \times 10^2 \;\textbf{C}}[/latex] pass through it in 0.500 h use if its voltage is 3.00 V? 5: Find the power dissipated in each of these extension cords: (a) an extension cord having a [latex]\boldsymbol{0.0600 -\;\Omega}[/latex] resistance and through which 5.00 A is flowing; (b) a cheaper cord utilizing thinner wire and with a resistance of [latex]\boldsymbol{0.300 \;\Omega}[/latex] 6: Verify that the units of a volt-ampere are watts, as implied by the equation [latex]\boldsymbol{P = IV}[/latex]. 7: Show that the units [latex]\boldsymbol{1 \;\textbf{V}^2 / \;\Omega = 1 \;\textbf{W}}[/latex], as implied by the equation [latex]\boldsymbol{P = V^2/R}[/latex]. 8: Show that the units [latex]\boldsymbol{1 \;\textbf{A}^2 \cdot \;\Omega = 1 \;\textbf{W}}[/latex], as implied by the equation [latex]\boldsymbol{P = I^2 R}[/latex]. 9: Verify the energy unit equivalence that [latex]\boldsymbol{1 \;\textbf{kW} \cdot \;\textbf{h} = 3.60 \times 10^6 \;\textbf{J}}[/latex]. 10: Electrons in an X-ray tube are accelerated through [latex]\boldsymbol{1.00 \times 10^2 \;\textbf{kV}}[/latex] and directed toward a target to produce X-rays. Calculate the power of the electron beam in this tube if it has a current of 15.0 mA. 11: An electric water heater consumes 5.00 kW for 2.00 h per day. What is the cost of running it for one year if electricity costs [latex]\boldsymbol{12.0 \;\textbf{cents/kW} \cdot \;\textbf{h}}[/ latex]? See Figure 3 below for an example of this type of heater. Figure 3. On-demand electric hot water heater. Heat is supplied to water only when needed. (credit: aviddavid, Flickr) 12: With a 1200-W toaster, how much electrical energy is needed to make a slice of toast (cooking time = 1 minute)? At [latex]\boldsymbol{9.0 \;\textbf{cents/kW} \cdot \;\textbf{h}}[/latex], how much does this cost? 13: What would be the maximum cost of a CFL such that the total cost (investment plus operating) would be the same for both CFL and incandescent 60-W bulbs? Assume the cost of the incandescent bulb is 25 cents and that electricity costs [latex]\boldsymbol{10 \;\textbf{cents/kWh}}[/latex]. Calculate the cost for 1000 hours, as in the cost effectiveness of CFL example. 14: Some makes of older cars have 6.00-V electrical systems. (a) What is the hot resistance of a 30.0-W headlight in such a car? (b) What current flows through it? 15: Alkaline batteries have the advantage of putting out constant voltage until very nearly the end of their life. How long will an alkaline battery rated at 1.00 Ah and 1.58 V keep a 1.00-W flashlight bulb burning? 16: A cauterizer, used to stop bleeding in surgery, puts out 2.00 mA at 15.0 kV. (a) What is its power output? (b) What is the resistance of the path? 17: The average television is said to be on 6 hours per day. Estimate the yearly cost of electricity to operate 100 million TVs, assuming their power consumption averages 150 W and the cost of electricity averages [latex]\boldsymbol{12.0 \;\textbf{cents/kW} \cdot \;\textbf{h}}[/latex]. 
18: An old lightbulb draws only 50.0 W, rather than its original 60.0 W, due to evaporative thinning of its filament. By what factor is its diameter reduced, assuming uniform thinning along its length? Neglect any effects caused by temperature differences. 19: 00-gauge copper wire has a diameter of 9.266 mm. Calculate the power loss in a kilometer of such wire when it carries [latex]\boldsymbol{1.00 \times 10^2 \;\textbf{A}}[/latex]. 20: Integrated Concepts Cold vaporizers pass a current through water, evaporating it with only a small increase in temperature. One such home device is rated at 3.50 A and utilizes 120 V AC with 95.0% efficiency. (a) What is the vaporization rate in grams per minute? (b) How much water must you put into the vaporizer for 8.00 h of overnight operation? (See Figure 4.) Figure 4. This cold vaporizer passes current directly through water, vaporizing it directly with relatively little temperature increase. 21: Integrated Concepts (a) What energy is dissipated by a lightning bolt having a 20,000-A current, a voltage of [latex]\boldsymbol{1.00 \times 10^2 \;\textbf{MV}}[/latex], and a length of 1.00 ms? (b) What mass of tree sap could be raised from 18.0ºC to its boiling point and then evaporated by this energy, assuming sap has the same thermal characteristics as water? 22: Integrated Concepts What current must be produced by a 12.0-V battery-operated bottle warmer in order to heat 75.0 g of glass, 250 g of baby formula, and [latex]\boldsymbol{3.00 \times 10^2 \;\textbf{g}}[/latex] of aluminum from 20.0ºC to 90.0ºC in 5.00 min? 23: Integrated Concepts How much time is needed for a surgical cauterizer to raise the temperature of 1.00 g of tissue from 37.0ºC to 100ºC and then boil away 0.500 g of water, if it puts out 2.00 mA at 15.0 kV? Ignore heat transfer to the surroundings. 24: Integrated Concepts 24: Hydroelectric generators (see Figure 5) at Hoover Dam produce a maximum current of [latex]\boldsymbol{8.00 \times 10^3 \;\textbf{A}}[/latex] at 250 kV. (a) What is the power output? (b) The water that powers the generators enters and leaves the system at low speed (thus its kinetic energy does not change) but loses 160 m in altitude. How many cubic meters per second are needed, assuming 85.0% Figure 5. Hydroelectric generators at the Hoover dam. (credit: Jon Sullivan) 25: Integrated Concepts (a) Assuming 95.0% efficiency for the conversion of electrical power by the motor, what current must the 12.0-V batteries of a 750-kg electric car be able to supply: (a) To accelerate from rest to 25.0 m/s in 1.00 min? (b) To climb a [latex]\boldsymbol{2.00 \times 10^2 \;\textbf{-m}}[/latex] -high hill in 2.00 min at a constant 25.0-m/s speed while exerting [latex]\boldsymbol{5.00 \times 10^2 \;\textbf{N}}[/latex] of force to overcome air resistance and friction? (c) To travel at a constant 25.0-m/s speed, exerting a [latex]\boldsymbol{5.00 \times 10^2 \;\textbf{N}}[/latex] force to overcome air resistance and friction? See Figure 6. Figure 6. This REVAi, an electric car, gets recharged on a street in London. (credit: Frank Hebbert) 26: Integrated Concepts A light-rail commuter train draws 630 A of 650-V DC electricity when accelerating. (a) What is its power consumption rate in kilowatts? (b) How long does it take to reach 20.0 m/s starting from rest if its loaded mass is [latex]\boldsymbol{5.30 \times 10^4 \;\textbf{kg}}[/latex], assuming 95.0% efficiency and constant power? (c) Find its average acceleration. 
(d) Discuss how the acceleration you found for the light-rail train compares to what might be typical for an automobile. 27: Integrated Concepts (a) An aluminum power transmission line has a resistance of [latex]\boldsymbol{0.0580 \;\Omega / \textbf{km}}[/latex]. What is its mass per kilometer? (b) What is the mass per kilometer of a copper line having the same resistance? A lower resistance would shorten the heating time. Discuss the practical limits to speeding the heating by lowering the resistance. 28: Integrated Concepts (a) An immersion heater utilizing 120 V can raise the temperature of a [latex]\boldsymbol{1.00 \times 10^2 \textbf{-g}}[/latex] aluminum cup containing 350 g of water from 20.0ºC to 95.0ºC in 2.00 min. Find its resistance, assuming it is constant during the process. (b) A lower resistance would shorten the heating time. Discuss the practical limits to speeding the heating by lowering the 29: Integrated Concepts (a) What is the cost of heating a hot tub containing 1500 kg of water from 10.0ºC to 40.0ºC, assuming 75.0% efficiency to account for heat transfer to the surroundings? The cost of electricity is 9.0 cents/kW h. (b) What current was used by the 220-V AC electric heater, if this took 4.00 h? 30: Unreasonable Results (a) What current is needed to transmit [latex]\boldsymbol{1.00 \times 10^2 \;\textbf{MW}}[/latex] of power at 480 V? (b) What power is dissipated by the transmission lines if they have a [latex]\ boldsymbol{1.00 - \Omega}[/latex] resistance? (c) What is unreasonable about this result? (d) Which assumptions are unreasonable, or which premises are inconsistent? 31: Unreasonable Results (a) What current is needed to transmit [latex]\boldsymbol{1.00 \times 10^2 \;\textbf{MW}}[/latex] of power at 10.0 kV? (b) Find the resistance of 1.00 km of wire that would cause a 0.0100% power loss. (c) What is the diameter of a 1.00-km-long copper wire having this resistance? (d) What is unreasonable about these results? (e) Which assumptions are unreasonable, or which premises are 32: Construct Your Own Problem Consider an electric immersion heater used to heat a cup of water to make tea. Construct a problem in which you calculate the needed resistance of the heater so that it increases the temperature of the water and cup in a reasonable amount of time. Also calculate the cost of the electrical energy used in your process. Among the things to be considered are the voltage used, the masses and heat capacities involved, heat losses, and the time over which the heating takes place. Your instructor may wish for you to consider a thermal safety switch (perhaps bimetallic) that will halt the process before damaging temperatures are reached in the immersion unit. 
electric power: the rate at which electrical energy is supplied by a source or dissipated by a device; it is the product of current times voltage

Problems & Exercises

1: 2.00 × 10^12 W
5: (a) 1.50 W (b) 7.50 W
7: [latex]\boldsymbol{\frac{\textbf{V}^2}{\Omega} = \frac{\textbf{V}^2}{\textbf{V/A}} = \textbf{A} \cdot \textbf{V} = (\frac{\textbf{C}}{\textbf{s}})(\frac{\textbf{J}}{\textbf{C}}) = \frac{\textbf{J}}{\textbf{s}} = \textbf{W}}[/latex]
9: [latex]\boldsymbol{1 \;\textbf{kW} \cdot \textbf{h} = (\frac{1 \times 10^3 \;\textbf{J}}{1 \;\textbf{s}})(1 \;\textbf{h})(\frac{3600 \;\textbf{s}}{1 \;\textbf{h}}) = 3.60 \times 10^6 \;\textbf{J}}[/latex]
11: $438/y
13: $6.25
15: 1.58 h
17: $3.94 billion/year
19: 25.5 W
21: (a) 2.00 × 10^9 J (b) 769 kg
23: 45.0 s
25: (a) 343 A (b) 2.17 × 10^3 A (c) 1.10 × 10^3 A
27: (a) 1.23 × 10^3 kg (b) 2.64 × 10^3 kg
29: heat energy needed = 1.88 × 10^8 J; electrical energy required = (1.88 × 10^8 J) / 0.75 = 2.51 × 10^8 J = 69.8 kWh ≈ 628 cents = $6 to 1 sig. fig.
30: (a) 2.08 × 10^5 A (b) 4.33 × 10^4 MW (c) The transmission lines dissipate more power than they are supposed to transmit. (d) A voltage of 480 V is unreasonably low for a transmission voltage. Long-distance transmission lines are kept at much higher voltages (often hundreds of kilovolts) to reduce power losses.
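A couple of the listed answers are easy to sanity-check with one-line calculations; here is a short Python sketch (not part of the original text) for Problems 11 and 17:

```python
# Problem 11: 5.00-kW heater, 2.00 h/day, 365 days, 12 cents/kWh
print(5.00 * 2.00 * 365 * 0.12)        # ~438 -> $438/y

# Problem 17: 1.0e8 TVs, 150 W each, 6 h/day, 365 days, 12 cents/kWh
print(0.150 * 6 * 365 * 0.12 * 1.0e8)  # ~3.94e9 -> $3.94 billion/year
```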
{"url":"https://pressbooks.bccampus.ca/practicalphysicsphys1104/chapter/20-4-electric-power-and-energy/","timestamp":"2024-11-02T18:40:27Z","content_type":"text/html","content_length":"183655","record_id":"<urn:uuid:69bf1e86-c283-4889-87be-ae870b5bbac1>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00618.warc.gz"}
Data Types

The mathematical functions accept arguments of different data types. PI is the only predefined constant within the math engine. All other constants are user-defined. Constants are case sensitive.

A matrix is a two-dimensional array. It is a collection of vectors containing the same number of elements. The dimensions of a matrix are described as m x n, pronounced "m by n". m refers to the number of rows (values read horizontally) and n refers to the number of columns (values read vertically). An example of a matrix is: { {1, 2, 3, 5}, {-1, -1, -2, -3} }, which is a 2 x 4 matrix (written out in rows and columns in Figure 1). In the previous example it could be said the matrix has two row vectors of four elements, or four column vectors of two elements. If the matrix has the same number of rows as columns, the matrix is a square matrix. There are some functions that require a square matrix.

Scalars are numbers that denote a magnitude without direction. Examples of a scalar value are 11, -4.5, and the value of PI.

Strings & String Arrays

Strings contain one or more alphanumeric characters, punctuation marks, and other special symbols. If the elements in a vector are strings, it is referred to as a string array. An example of a string is: "This is a test". An example of a string array is: {"This", "is", "a", "test"}. Addition, +, and equality, ==, are the only mathematical operations that can be performed on a string or string array. Concatenation performs a similar, but not identical, operation.

A vector is a one-dimensional array. Examples of a vector are: {2, 4}, {3, -16, 8.5}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
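These structures have direct analogues in most numerical environments; the NumPy snippet below is an analogy only, not the math engine's own syntax:

```python
import numpy as np

m = np.array([[1, 2, 3, 5],
              [-1, -1, -2, -3]])     # the 2 x 4 matrix from the example
print(m.shape)                       # (2, 4): 2 rows, 4 columns, so not square

v = np.array([3, -16, 8.5])          # a one-dimensional vector
s = 11                               # a scalar: magnitude only, no direction
words = ["This", "is", "a", "test"]  # a string array: a vector whose elements are strings
```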
{"url":"https://2022.help.altair.com/2022.1/hwdesktop/hwd/topics/reference/math/data_types_r.htm","timestamp":"2024-11-11T23:55:19Z","content_type":"application/xhtml+xml","content_length":"55382","record_id":"<urn:uuid:cf67cd91-6ae1-44cd-8ab9-c7ff90c9c472>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00202.warc.gz"}
Mathematics is the science and study of quality, structure, space, and change. Mathematicians seek out patterns, formulate new conjectures, and establish truth by rigorous deduction from appropriately chosen axioms and definitions. Mathematics starts with counting. It is not reasonable, however, to suggest that early counting was mathematics. Only when some record of the counting was kept and, therefore, some representation of numbers occurred can mathematics be said to have started. In Babylonia mathematics developed from 2000 BC. Archimedes was a Greek mathematician who flourished from 287 to 212 B.C. He found mathematical problems very intriguing. So much so that he scribbled math equations and plotted graphs on the ground and even on his stomach. Throughout history, different cultures have discovered the maths needed for tasks like understanding groups and relationships, sharing food, looking at astronomical and seasonal patterns, and more. There are probably forms of mathematics that were understood by people we don't even know existed. The oldest mathematical texts from Mesopotamia and Egypt are from 2000 to 1800 BC. Many early texts mention Pythagorean triples and so, by inference, the Pythagorean theorem seems to be the most ancient and widespread mathematical concept after basic arithmetic and geometry. Mathematics is an area of knowledge that includes the topics of numbers, formulas and related structures, shapes and the spaces in which they are contained, and quantities and their changes. These topics are represented in modern mathematics with the major subdisciplines of number theory, algebra,geometry,and analysis,respectively. There is no general consensus among mathematicians about a common definition for their academic discipline. Most mathematical activity involves the discovery of properties of abstract objects and the use of pure reason to prove them. These objects consist of either abstractions from nature or—in modern mathematics—entities that are stipulated to have certain properties, called axioms. A proof consists of a succession of applications of deductive rules to already established results. These results include previously proved theorems, axioms, and—in case of abstraction from nature—some basic properties that are considered true starting points of the theory under consideration. Mathematics is essential in the natural sciences, engineering, medicine, finance, computer science and the social sciences. Although mathematics is extensively used for modeling phenomena, the fundamental truths of mathematics are independent from any scientific experimentation. Some areas of mathematics, such as statistics and game theory, are developed in close correlation with their applications and are often grouped under applied mathematics. Other areas are developed independently from any application (and are therefore called pure mathematics), but often later find practical applications.The problem of integer factorization, for example, which goes back to Euclid in 300 BC, had no practical application before its use in the RSA cryptosystem, now widely used for the security of computer networks. 
Historically, the concept of a proof and its associated mathematical rigour first appeared in Greek mathematics, most notably in Euclid's Elements. Since its beginning, mathematics was essentially divided into geometry and arithmetic (the manipulation of natural numbers and fractions), until the 16th and 17th centuries, when algebra and infinitesimal calculus were introduced as new areas. Since then, the interaction between mathematical innovations and scientific discoveries has led to a rapid, lockstep increase in the development of both. At the end of the 19th century, the foundational crisis of mathematics led to the systematization of the axiomatic method, which heralded a dramatic increase in the number of mathematical areas and their fields of application. The contemporary Mathematics Subject Classification lists more than 60 first-level areas of mathematics.
Defining "Risk"

Posted on 8th September 2015 by Eero Teppo

Adopting probabilistic thinking and language that is powered by evidence and its validity is a hard transition from everyday life for anyone. There are many possible obstacles. The fundamentals – causality, randomness, probability – are not very intuitive or easy to understand or communicate clearly to anyone. Just tell me "yes", "no" or "I don't know", ok? Numbers are hard to remember, and some patients may be reluctant to think about them, but we want to practice informed and shared decision making as fully and precisely as possible. "It may help." But what does "help" mean, and is it more like a "0.001% maybe" or a "78% maybe"? The common categorizations like "high risk" and "low risk" are easier, but they are relative and imprecise (sometimes practical enough, maybe). Many different actions in life and health care will likely change the risks of many different outcomes at the same time, so the possible net changes become the real deal, and this brings in a lot of complexity. But having a solid understanding of risk is clearly important in any practice that wishes to improve itself by reflecting on reality. Indeed, psychology seems to show that people are far from "rational" decision-makers when using probability and choosing how to proceed under uncertainty. This post starts from the beginning: the definition of "risk". Let's go!

What is "risk"?

After reading the introduction one might think that there's not too much vagueness in the term "risk". However, it seems that we really use "risk" for many quite different concepts, and this is a good way to mess things up a little bit right at the start. Should "risk" have a good definition? I would prefer this one: "the probability of an event in a person during a specified period of time" (1). This is something pretty realistic that you want to know, and to inform someone (and yourself) with, most of the time, right? Let's go through this step by step.

Probability is usually thought of as a relative frequency, in other words, a proportion of certain events in the "pool of opportunities". Relative frequency teaches us the first lesson of risk in health care: if you'd like to know something about the risk of some event, learn how many of these events occurred and out of how many susceptible people.

Probability … in a person

This problem can be dodged by saying "relative frequency in the group of people like you". The definition of this group can then be anything considered currently relevant. These choices are made in research plans and in the interpretation of research. However, you can also use probability directly to express your personal uncertainty about the occurrence of the event, as you probably already do, since it is very intuitive. (You just don't do Bayesian calculations to update your personal uncertainties about everything.) Does that make a difference? Well, in the details at least, a lot. And if anything it raises another point: be sure that the numbers are about "people like you", and see that changing the set of features chosen to define "people like you" changes the risk.

In the epidemiological lingo, risk is mostly used as a synonym for "incidence proportion": a count of events in a specified time period divided by a count of people susceptible to the event at the beginning of the period. So it's a relative frequency of the event in a time period in a particular group of people.
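To make that definition concrete, here is a small Common Lisp sketch (ours, with made-up numbers) of incidence proportion as a relative frequency: events counted over the period, divided by the number of people susceptible at the start of it:

    (defun incidence-proportion (events susceptible)
      "Relative frequency of the event over the follow-up period."
      (/ events susceptible))

    ;; Hypothetical example: 12 events among 400 initially susceptible people over 5 years.
    (incidence-proportion 12 400)           ; => 3/100
    (float (incidence-proportion 12 400))   ; => 0.03, a 5-year risk of 3%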
With our current definition of risk, incidence proportion can be seen as synonymous with "average risk", the average of the 0-and-1 individual risks in this kind of group of people. In other words, incidence proportion is a population measure. (Note that other measures of disease occurrence are sometimes also called a "risk", or their ratios "relative risks", even when they are pretty far from being a probability or proportion at all. They just describe a health-related experience of a population in some other way.) The notion of incidence proportion being an "average risk" gives us another point: it gives us the chance to avoid the slightly careless assumption that everyone in the group of "people like each other" has a similar individual risk in any sense. They probably don't, even approximately. We just use these differently defined group averages to give some hint about individual risk. Or perhaps the individuals are not directly of interest.

In other contexts, the concept of risk can directly include the magnitude of the event, like "the risk is -200€" (we have euros in Finland, unfortunately?). In healthcare, of course, the clinical significance of the event is also crucial, in addition to the probability of it happening. Values or magnitudes of loss given for particular outcomes are just more complex. What is a good life for you now and in the future? An event can generally be any transition from one "state" to another. So whatever events or outcomes researchers have dared to imagine and build, the crucial point is that you should always figure out how significant that event is for you or for any particular person.

During a specified period of time

Risk must always come with a time period. The longer the time period, the more opportunity there is for causes to act and for the event to occur. The period should be meaningful given the situation. Given enough time, at least something must happen after all.

Where's the uncertainty? Visiting an alternative universe

Assume there is absolutely no uncertainty and infinite accuracy in an alternative universe. In this universe we know the exact future progression of a full set of health outcomes or variables of a person in the alternative scenarios where a) we don't intervene and b) we do make a particular intervention. Now let's start driving towards reality and lose some information in the process. Let's focus on only one outcome variable. Let's split its more or less continuous nature in half at some meaningful point, so that an outcome beyond this point is clinically meaningful to this patient, and going beyond this point is called the event. Let's ignore the time past 10 years from now, and the time point at which the event happens, and just say whether it happens during the 10 years or not. Now we know the 10-year (relative frequency or subjective uncertainty) risk of the event in our person, and let's say it's 0 instead of 1, with or without the intervention. But now we also lose our ability to see the alternative futures of our person and, even worse, the ability to see into the future at all. But we know the past perfectly and find a unique and exact match for our person, except for calendar time of course. Then we witness something horrible happen again: we no longer see the alternative pasts of the exact match. We also begin to lose information about both persons. What is an exact match anymore? Real subjective uncertainty is emerging.
Now we know about two infinite groups of people, with or without the intervention, that very, very closely resemble our person (and each other), but we begin to see that a few people in both groups have different outcomes than the rest. What is happening? One could probably say that we still have, for our person, a great measure of the 10-year (relative frequency) risk of the event in a "group of people like our person" with and without treatment, or a great measure of the 10-year (subjective probability) risk of the event in our person. Let's say 1 in a million with the intervention and 2 in a million without it (the truth was 0 either way). You can probably see where this goes once we are finally back in reality: we have less information, everything is finite, and error and biases creep in at every step. Careful scientific and systematic practices need to be invented to get some valid predictions of what our decisions and actions might really be doing to a patient or to ourselves, if anything.

1. Rothman, K.J., Greenland, S., & Lash, T.L. (2008). Modern Epidemiology, 3rd Edition. Philadelphia, PA: Lippincott, Williams & Wilkins.

Comment from Philip Voucher (12th November 2015): There is always risk with everything, and we rely on as much evidence as possible. Unfortunately even a lot of evidence has an amount of risk and guesswork involved, so we weigh risk v. evidence v. trust. We all trust evidence, yet there can always be an element of risk as we learn.
Convert Fermi to Earth's Polar Radius

Fermi to Earth's Polar Radius Conversion Table

Fermi [F, f]    Earth's Polar Radius
0.01 F, f       1.5731242420491E-24 Earth's polar radius
0.1 F, f        1.5731242420491E-23 Earth's polar radius
1 F, f          1.5731242420491E-22 Earth's polar radius
2 F, f          3.1462484840982E-22 Earth's polar radius
3 F, f          4.7193727261473E-22 Earth's polar radius
5 F, f          7.8656212102455E-22 Earth's polar radius
10 F, f         1.5731242420491E-21 Earth's polar radius
20 F, f         3.1462484840982E-21 Earth's polar radius
50 F, f         7.8656212102455E-21 Earth's polar radius
100 F, f        1.5731242420491E-20 Earth's polar radius
1000 F, f       1.5731242420491E-19 Earth's polar radius

How to Convert Fermi to Earth's Polar Radius

1 F, f = 1.5731242420491E-22 Earth's polar radius
1 Earth's polar radius = 6.3567769999999E+21 F, f

Example: convert 15 F, f to Earth's polar radius:
15 F, f = 15 × 1.5731242420491E-22 Earth's polar radius = 2.3596863630736E-21 Earth's polar radius
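If you would rather script the conversion than read it off the table, the small Common Lisp sketch below (ours, not part of the converter page) just multiplies or divides by the tabulated factor:

    ;; 1 fermi = 1.5731242420491E-22 Earth's polar radii (factor taken from the table above).
    (defconstant +polar-radii-per-fermi+ 1.5731242420491d-22)

    (defun fermi->polar-radius (fermi)
      (* fermi +polar-radii-per-fermi+))

    (defun polar-radius->fermi (radii)
      (/ radii +polar-radii-per-fermi+))

    (fermi->polar-radius 15)   ; => ~2.3596863630736e-21, matching the worked example
    (polar-radius->fermi 1)    ; => ~6.356777e21 fermi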
Tutorial: Relative Strength Index (RSI) in Lisp

The Relative Strength Index (RSI) is a technical indicator used in financial analysis to measure the magnitude and velocity of price movements in a security. In Lisp, a programming language known for its ability to handle complex data structures and algorithms, you can write a function that calculates the RSI for a given set of price data.

To calculate the RSI in Lisp, you can follow the formula introduced by the indicator's creator, J. Welles Wilder. It involves calculating the average gain and average loss over a specified period (typically 14 periods), using these values to compute the relative strength (RS = average gain / average loss), and finally converting RS into the RSI with RSI = 100 - 100 / (1 + RS).

By implementing this calculation in Lisp, you can analyze the momentum of a security's price movements and potentially identify overbought or oversold conditions. This can be useful for making informed investment decisions and improving your trading strategies. Overall, understanding how to implement this indicator in Lisp can enhance your technical analysis skills and help you make more informed trading decisions.

How to incorporate RSI into a trading algorithm in Lisp?

To incorporate the Relative Strength Index (RSI) into a trading algorithm in Lisp, you can follow these steps:

1. Define a function to calculate RSI (this version uses simple averages of the gains and losses divided by the period, rather than Wilder's smoothed averages; pass it the most recent N+1 prices):

    (defun calculate-rsi (prices n)
      "Return the RSI of PRICES using simple N-period averages of gains and losses."
      (let* ((changes  (mapcar #'- (cdr prices) prices))          ; price[i+1] - price[i]
             (gains    (remove-if-not #'plusp changes))
             (losses   (mapcar #'abs (remove-if-not #'minusp changes)))
             (avg-gain (/ (reduce #'+ gains) n))
             (avg-loss (/ (reduce #'+ losses) n)))
        (if (zerop avg-loss)
            100
            (- 100 (/ 100 (+ 1 (/ avg-gain avg-loss)))))))

2. Incorporate the RSI calculation into a simple signal function:

    (defun trading-algorithm (prices n threshold)
      "Emit SELL when the RSI exceeds THRESHOLD, otherwise BUY."
      (let ((rsi (calculate-rsi prices n)))
        (if (> rsi threshold)
            "SELL"
            "BUY")))

3. Use the trading algorithm in your trading strategy:

    (defparameter *prices* '(100 110 120 130 140 150 160 170 180 190))
    (defparameter *n* 14)
    (defparameter *threshold* 70)

    (print (trading-algorithm *prices* *n* *threshold*))   ; prices only rise, so RSI = 100 => "SELL"

This example demonstrates how to calculate RSI and use it in a simple trading algorithm in Lisp. You can customize the parameters such as the price series, the RSI period (n), and the RSI threshold to suit your trading strategy. Additionally, you can further enhance the algorithm by incorporating other technical indicators or risk-management rules.

What is the formula for calculating RSI in Lisp?

Here is another simple implementation of the RSI calculation in Lisp, again using plain averages over the look-back period:

    (defun filter (predicate lst)
      (remove-if-not predicate lst))

    (defun calculate-rsi (prices period)
      ;; Pass the most recent PERIOD+1 prices for a PERIOD-length look-back window.
      (let* ((changes  (mapcar #'- (cdr prices) prices))
             (gains    (filter #'plusp changes))
             (losses   (mapcar #'abs (filter #'minusp changes)))
             (avg-gain (/ (reduce #'+ gains) period))
             (avg-loss (/ (reduce #'+ losses) period))
             (rsi (if (zerop avg-loss)
                      100
                      (- 100 (/ 100 (+ 1 (/ avg-gain avg-loss)))))))
        rsi))

This implementation calculates the RSI from a list of prices and a period length. It first calculates the changes between consecutive prices, splits them into gains and losses (taking the absolute value of the losses), averages each over the period, and finally computes the RSI as 100 - 100 / (1 + avg-gain / avg-loss).
How to adjust RSI parameters for different market conditions?

The Relative Strength Index (RSI) is a popular momentum oscillator that measures the speed and change of price movements. It is typically used to identify overbought or oversold conditions in a market. To adjust RSI parameters for different market conditions, consider the following:

1. Period Length: The default period length for RSI is usually 14, which is suitable for most markets. However, you may need to adjust the period length based on the volatility of the market. For more volatile markets, a shorter period length (e.g., 7) may be more appropriate, while for less volatile markets, a longer period length (e.g., 21) may be better.

2. Overbought and Oversold Levels: The default overbought level for RSI is typically set at 70 and the oversold level at 30. In strongly trending markets, you may want to adjust these levels to 80 for overbought and 20 for oversold. In range-bound markets, you may want to use tighter levels such as 60 and 40.

3. Divergence: Look for divergences between RSI and price movements to signal potential changes in market direction. In strongly trending markets, divergences can be more reliable, so you may want to adjust your parameters to focus more on divergences.

4. Volatility: Consider using a volatility-based RSI, such as an Average True Range (ATR) normalized RSI, which adjusts the RSI values based on current market volatility. This can help you avoid false signals in highly volatile markets.

5. Backtesting: Before adjusting RSI parameters, backtest your strategy to see how it performs in different market conditions. This will help you determine the optimal parameters for your trading style and the market you are trading in.

By adjusting RSI parameters to the market conditions, you can improve the accuracy and effectiveness of this technical indicator in your trading strategy; a short sketch of such adjustments follows below.
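As a rough sketch of what such adjustments might look like with the calculate-rsi and trading-algorithm functions defined earlier (the price series and parameter values here are illustrative only, not recommendations):

    (defparameter *choppy-prices*
      '(100 104 101 107 103 110 108 112 109 111 115 113 117 114 118))

    ;; Default 14-period RSI over the most recent 15 prices.
    (calculate-rsi *choppy-prices* 14)

    ;; Shorter 7-period look-back for a more volatile market: pass the last 8 prices.
    (calculate-rsi (last *choppy-prices* 8) 7)

    ;; Same signal function, but with the overbought threshold raised to 80
    ;; for a strongly trending market (20 would be the matching oversold level).
    (trading-algorithm *choppy-prices* 14 80)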
Drum Spooling Capacity Formula | E & K Equipment

Do you need to know how much cable your drum will spool? This handy formula will show you how to calculate that amount. Before you begin you will need four dimensions:

• The drum length (distance between the flanges), designated "B" on the drawing below.
• The flange depth (from the top of the flange to the drum), designated "A" on the drawing below.
• The drum diameter, designated "D" on the drawing below.
• The diameter of the cable.

When you know the diameter of the cable, use the chart below to determine the value of that diameter. That value is designated "K". The spooling capacity of your drum will be determined by the following formula:

SPOOLING CAPACITY = (A + D) x A x B x K

First, add together the flange depth (A) and the drum diameter (D). Multiply that number by the flange depth (A). Multiply that number by the drum length (B). Multiply that number by the value of the cable diameter (K, from the chart below). The product will be the spooling capacity of the drum.

Note: to obtain the drum diameter, measure the distance around the drum (circumference), then divide that number by 3.14.
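For convenience, the same calculation can be scripted; in this Common Lisp sketch (ours), the K value is only a placeholder, and the real factor for your cable diameter must come from the chart:

    (defun drum-spooling-capacity (a b d k)
      "A = flange depth, B = drum length, D = drum diameter, K = cable-diameter factor."
      (* (+ a d) a b k))

    ;; Hypothetical drum: flange depth 4, drum length 20, drum diameter 12 (all in the
    ;; same length unit), with a made-up K of 0.925 standing in for the chart value.
    (drum-spooling-capacity 4 20 12 0.925)   ; => 1184.0

    ;; Drum diameter from a measured circumference, per the note above.
    (defun drum-diameter (circumference)
      (/ circumference 3.14))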