4792000000000 Kilopounds Force to Newtons (kpf to N) | JustinTOOLs.com

Category: force
Conversion: Kilopounds Force to Newtons
The base unit for force is the newton (derived SI unit).
[Kilopounds Force] symbol/abbreviation: (kpf)
[Newtons] symbol/abbreviation: (N)

How to convert Kilopounds Force to Newtons (kpf to N)?
1 kpf = 4448.221616 N.
4792000000000 x 4448.221616 N = 2.1315877983872E+16 N. Always check the results; rounding errors may occur.

In relation to the base unit of [force] => (newtons), 1 Kilopound Force (kpf) is equal to 4448.221616 newtons, while 1 Newton (N) = 1 newton.

4792000000000 Kilopounds Force to common force units:
4792000000000 kpf = 2.1315877983872E+16 newtons (N)
4792000000000 kpf = 4.7919995863228E+15 pounds force (lbf)
4792000000000 kpf = 2.5872796568006E+23 atomic units of force (auf)
4792000000000 kpf = 2.1315877983872E+34 attonewtons (aN)
4792000000000 kpf = 2.1736146374014E+20 centigrams force (cgf)
4792000000000 kpf = 2.1315877983872E+18 centinewtons (cN)
4792000000000 kpf = 2.1315877983872E+15 decanewtons (daN)
4792000000000 kpf = 2.1315877983872E+17 decinewtons (dN)
4792000000000 kpf = 2.1315877983872E+21 dynes (dyn)
4792000000000 kpf = 0.021315877983872 exanewtons (EN)
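The conversion above is a single multiplication by the factor 1 kpf = 4448.221616 N; a minimal sketch (the function name is ours, not from the site):

```python
# Sketch of the kpf -> N conversion described above.
KPF_TO_N = 4448.221616  # 1 kilopound-force in newtons

def kpf_to_newtons(kpf: float) -> float:
    """Convert kilopounds-force to newtons."""
    return kpf * KPF_TO_N

print(kpf_to_newtons(4.792e12))  # ~2.1316e+16 N
```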
{"url":"https://www.justintools.com/unit-conversion/force.php?k1=kilopounds-force&k2=newtons&q=4792000000000","timestamp":"2024-11-13T09:45:55Z","content_type":"text/html","content_length":"67917","record_id":"<urn:uuid:cfc2ee10-6542-45a9-8ab5-f49766188be0>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00456.warc.gz"}
XmdvTool Home Page: Visualizations

XmdvTool supports four methods for displaying multivariate data; each has a flat approach and a hierarchical approach.

Multivariate Data

Multivariate data can be defined as a set of entities E, where the i-th element e[i] consists of a vector with n variables, (x[i1], x[i2], ..., x[in]). Each variable (dimension) may be independent of or interdependent with one or more of the other variables. Variables may be discrete or continuous in nature, or take on symbolic (nominal) values. Variables also have a scale associated with them, where scales are defined according to the existence or lack of an ordering relationship, a distance (interval) metric, and an absolute zero (origin). When visualizing multivariate data, each variable may map to some graphical entity or attribute. In doing so, the type (discrete, continuous, nominal) or scale may be changed to facilitate display. In such situations, care must be taken: a graphical variable with a perceived characteristic (type or scale) that is mapped to a data variable with a different characteristic can lead to misinterpretation of the data.

Hierarchical approach

Conventional multivariate visualization techniques generally do not scale well with the size of the data set. To deal with the clutter and overlap in large data sets, we extended the flat approaches to hierarchical approaches. In the hierarchical approaches, multi-resolution views of the data, obtained via hierarchical clustering, replace the static views of the flat approaches. Variable-width opacity bands are used to represent the information of clusters, and proximity-based coloring highlights the relationships among clusters. Details of the techniques used in the hierarchical approaches can be obtained here.
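The formal definition above — a set of entities E, where entity e[i] is a vector of n variables — is just a rows-by-columns table; a small illustrative sketch (the values are made-up):

```python
# Hypothetical multivariate data set: each row is one entity e[i],
# each of the n columns is one variable (dimension).
import numpy as np

n = 4  # number of variables
E = np.array([
    [5.1, 3.5, 1.4, 0.2],   # e[0]
    [4.9, 3.0, 1.4, 0.2],   # e[1]
    [6.3, 3.3, 6.0, 2.5],   # e[2]
])

print(E.shape)  # (3, 4): three entities, four variables each
```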
{"url":"https://davis.wpi.edu/xmdv/visualizations.shtml","timestamp":"2024-11-14T17:07:32Z","content_type":"text/html","content_length":"21293","record_id":"<urn:uuid:42400abb-fdb4-4f1b-91c0-1fad84a8a71f>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00281.warc.gz"}
Sea surface wave mean period from variance spectral density first frequency moment

Alternate Formats
Other formats for this page: RDF/XML, Turtle, JSON-LD

Alternate Profiles
Other views of this page: different media types (HTML, text, RDF, JSON, etc.) and different information-model views, called profiles, are available for this resource.
{"url":"http://vocab.nerc.ac.uk/standard_name/sea_surface_wave_mean_period_from_variance_spectral_density_first_frequency_moment/","timestamp":"2024-11-09T00:24:38Z","content_type":"text/html","content_length":"13109","record_id":"<urn:uuid:3c6e718a-e256-48f0-9d6e-68ae00d6cbd9>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00759.warc.gz"}
How to Average Only Positive or Negative Numbers of a Range - Free Excel Tutorial

This post will guide you through averaging only the positive or only the negative numbers of a range in Excel 2013/2016/2019/365. Suppose a table contains both positive and negative numbers. If we want to know the average of only the positive numbers in this table, we can create a formula that averages all positive numbers while ignoring all negative ones. In this article, we construct a formula with the AVERAGE and IF functions to get the average of only positive or only negative numbers.

Referring to the table, both positive and negative numbers are listed in range A2:E4. We want to calculate the averages of the positive numbers and the negative numbers separately and save the results in H2 and I2, respectively.

1. Average Only Positive Numbers of a Range

The AVERAGE and IF functions are used frequently in Excel for mathematical and logical expressions. The AVERAGE function returns the average of the numbers in a given range reference. The IF function returns a "true value" or "false value" based on the result of a supplied logical test; it is one of the most popular functions in Excel.

In cell H2, enter the formula:

=AVERAGE(IF(A2:E4>0,A2:E4))

The range reference A2:E4 covers all numbers in the range. The formula first evaluates the IF function to filter the array, keeping only the positive numbers: if the logical expression A2:E4>0 is true for a number (the number is greater than 0), that number is kept in the array. After each number in A2:E4 has been compared with 0, the AVERAGE function calculates the average of the numbers that remain. After entering the formula, press Ctrl + Shift + Enter to load the result, because this is an array formula. In Excel 365 you can simply press Enter as usual.

2. Average Only Negative Numbers of a Range

In I2, enter the formula:

=AVERAGE(IF(A2:E4<0,A2:E4))

Press Ctrl + Shift + Enter to load the result.

3. Video: Average Only Positive or Negative Numbers of a Range in Excel

This video will demonstrate how to easily calculate the average of only positive or negative numbers within a range in Excel.

4. Related Functions

• Excel IF function
The Excel IF function performs a logical test and returns one value if the condition is TRUE and another value if the condition is FALSE. IF is a built-in function in Microsoft Excel, categorized as a logical function. The syntax of the IF function is:
=IF(condition, [true_value], [false_value])

• Excel AVERAGE function
The Excel AVERAGE function returns the average of the numbers that you provide. The syntax of the AVERAGE function is:
=AVERAGE(number1, [number2], …)
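Outside Excel, the same filter-then-average logic can be sketched in a few lines of Python (not part of the original tutorial; the sample values are made-up):

```python
# Average only the positive (or only the negative) numbers of a range.
values = [3, -1, 4, -5, 9, -2, 6]

positives = [v for v in values if v > 0]
negatives = [v for v in values if v < 0]

avg_pos = sum(positives) / len(positives)   # (3+4+9+6)/4 = 5.5
avg_neg = sum(negatives) / len(negatives)   # (-1-5-2)/3

print(avg_pos, avg_neg)
```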
{"url":"https://www.excelhow.net/how-to-average-only-positive-or-negative-numbers-of-a-range.html","timestamp":"2024-11-12T22:44:01Z","content_type":"text/html","content_length":"87232","record_id":"<urn:uuid:cfd935bd-6396-4ca8-81f5-816e42b81e81>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00389.warc.gz"}
comparing three numbers

Comparing Three Digit Numbers Worksheet - Have Fun Teaching
Comparing Three-Digit Numbers Activity (teacher made)
Comparing Three-Digit Numbers Worksheet / Worksheet - Twinkl
Free Comparing Numbers Worksheets - 3 Digit Numbers - Free ...
Comparing 3-digit numbers worksheet | Live Worksheets
Comparing 3-digit numbers represented by Base 10 and place value ...
Comparing Three-Digit Numbers – Worksheet | Teach Starter
Compare 3-digit Numbers Worksheets (Second Grade, printable)
Lucky to Learn Math - Compare and Order 3-Digit Numbers - Lesson ...
Compare Three integers in C
Numbers and Operations in Base 10: Place Value - Comparing 3 Digit ...
Lesson: Comparing Three-Digit Numbers | Nagwa
2nd Grade Math 2.12, Compare 3-Digit Numbers, Greater, Less, Equal
Comparing Whole Numbers Worksheets for 1st to 5th Grade - PDF
Comparing Numbers (3-Digit): Worksheets
Comparing 3 Digit Numbers Spring Theme Google Slides – Savvy Apple
Comparing in three digit numbers worksheet | Live Worksheets
LEARN PROGRAMMING: Flowcharts for comparing three numbers
Comparing Numbers - Roll & Compare 3 Digit Numbers Activity for ...
Comparing Three Digit Numbers Check-in | Worksheet | Education.com
Free - Year 3/4 Comparing 3 and 4- digit numbers | Teaching Resources
Comparing Numbers 3-digits Worksheet
Comparing 3 Digit Numbers Math Worksheets & Place Value Activities ...
Comparing Numbers Worksheets | K5 Learning
Comparing 3-digit numbers | 2nd grade Math Worksheet | GreatSchools
How to Compare 3-Digit Numbers - Elementary Nest
Compare two 3-digit numbers up to 1000 | MathBRIX
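Several of the links above are programming exercises ("Compare Three integers in C", "Flowcharts for comparing three numbers"); a minimal Python sketch of that classic exercise:

```python
# Classic "compare three numbers" exercise: report the largest
# of three integers using pairwise comparisons.
def largest_of_three(a: int, b: int, c: int) -> int:
    largest = a
    if b > largest:
        largest = b
    if c > largest:
        largest = c
    return largest

print(largest_of_three(312, 298, 305))  # 312
```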
{"url":"https://worksheets.clipart-library.com/comparing-three-numbers.html","timestamp":"2024-11-03T09:04:40Z","content_type":"text/html","content_length":"22308","record_id":"<urn:uuid:039628ad-2256-4668-b518-eb91be58b13a>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00809.warc.gz"}
Into Neural Networks! Part 1 - How To Train Your Robot

Keras is a library that makes machine learning easy to run and train without knowing too much of the math behind it. It has many tutorials, including an excellent howto by Egghead.io, Pyimagesearch, and of course the official documentation and books…

…but what if you want to look into the details of how it works? Neural networks are a series of functions that are adjusted over time, and we can "see" what happens in a simple example.

Making a very basic network

Let's look at a small neural net, training it on the "OR" function. Just like the example code in the PyImageSearch dog-vs-cat example, we want output in one of two values – True and False, not Dog/Cat though. Also, there are only two inputs, and only one layer is necessary in this simple example:

from sklearn.preprocessing import LabelEncoder
from keras.models import Sequential
from keras.layers import Activation
from keras.optimizers import SGD
from keras.layers import Dense
from keras.utils import np_utils
import numpy as np

# truth table for OR
data = [[0,0],[0,1],[1,0],[1,1]]
labels = [0,1,1,1]

# encode the labels, converting them from strings to integers
le = LabelEncoder()
labels = le.fit_transform(labels)

data = np.array(data)
labels = np_utils.to_categorical(labels, 2)

# there are only a few values, too few for a real training/testing
# split, so in this case test must be train
print("[INFO] constructing training/testing split...")
trainData = testData = data
trainLabels = testLabels = labels

# define the architecture of the network
model = Sequential()
model.add(Dense(2, input_dim=2))
model.add(Activation("softmax"))

# train the model using SGD
print("[INFO] compiling model...")
sgd = SGD(lr=0.2)
model.compile(loss="binary_crossentropy", optimizer=sgd,
    metrics=["accuracy"])
#print(model.fit.__doc__) #print documentation
model.fit(trainData, trainLabels, epochs=50, batch_size=128, verbose=1)

print(model.predict(np.array([[0,1]])))  # should be [0,1]
print(model.predict(np.array([[1,0]])))  # should be [0,1]
print(model.predict(np.array([[0,0]])))  # should be [1,0] (false)
print(model.predict(np.array([[1,1]])))  # should be [0,1] (true)

# show the accuracy on the testing set
print("[INFO] evaluating on testing set...")
(loss, accuracy) = model.evaluate(testData, testLabels,
    batch_size=128, verbose=1)
print("[INFO] loss={:.4f}, accuracy: {:.4f}%".format(loss, accuracy * 100))

print("[INFO] dumping architecture and weights to file...")
model.save("or_model.hdf5")

Note that the result is saved to a file that you could load later… and you can dig into how it works by reading that HDF file! (If you didn't get 100% accuracy when running it, try again: the weights start out randomized, and with this small amount of data the network may start from values that won't converge. It should reach 100% within a couple of tries.)

Note that Ubuntu probably told you to install "HDFView" to open that file… install it and you will see a collection of numbers in a matrix. This matrix is the single layer that was added with model.add(Dense(2, input_dim=2)).

Note that you can do the equivalent in a spreadsheet: the lower yellow cells correspond to the values you see in the stored model in the HDF viewer. Note the dot product, which takes the sum of the x-th row multiplied by the y-th column, and the bias, which is added before the exponential. The softmax function, e^x / sum(e^x values) in columns G and H, normalizes the values so they are always positive and sum to 1.
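The spreadsheet calculation just described — dot product, plus bias, then softmax — can also be sketched in NumPy. The weight matrix and bias below are made-up placeholder values, not numbers read from a trained model; they just happen to implement OR:

```python
import numpy as np

# placeholder 2x2 weights and bias standing in for the values
# stored in the model's HDF file (illustrative, not trained)
W = np.array([[-2.0, 2.0],
              [-2.0, 2.0]])
b = np.array([1.0, -1.0])

def softmax(z):
    e = np.exp(z)
    return e / e.sum()

x = np.array([1.0, 1.0])       # input: True OR True
out = softmax(x.dot(W) + b)    # dot product + bias, then softmax
print(out)                     # two positive values summing to 1; second (True) wins
```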
Although there is little data we can give this network to train on, it still says "with 83% certainty" that True and True is True, and chooses the correct false/true value for all 4 possibilities.

And… there's more! You can make an AND function in the same way – note the few changes in the labels: only True and True ([1,1]) is labeled a True value:

from sklearn.preprocessing import LabelEncoder
from keras.models import Sequential
from keras.layers import Activation
from keras.optimizers import SGD
from keras.layers import Dense
from keras.utils import np_utils
import numpy as np

# truth table for AND
data = [[0,0],[0,1],[1,0],[1,1]]
labels = [0,0,0,1]

# encode the labels, converting them from strings to integers
le = LabelEncoder()
labels = le.fit_transform(labels)

data = np.array(data)
labels = np_utils.to_categorical(labels, 2)

# again, too few values for a real training/testing split,
# so test must be train
trainData = testData = data
trainLabels = testLabels = labels

# define the architecture of the network
model = Sequential()
model.add(Dense(2, input_dim=2))
model.add(Activation("softmax"))

# train the model using SGD
print("[INFO] compiling model...")
sgd = SGD(lr=0.2)
model.compile(loss="binary_crossentropy", optimizer=sgd,
    metrics=["accuracy"])
model.fit(trainData, trainLabels, epochs=50, batch_size=128, verbose=1)

print(model.predict(np.array([[0,0]])))  # should be [1,0] (false)
print(model.predict(np.array([[0,1]])))  # should be [1,0] (false)
print(model.predict(np.array([[1,0]])))  # should be [1,0] (false)
print(model.predict(np.array([[1,1]])))  # should be [0,1] (true)

# show the accuracy on the testing set
print("[INFO] evaluating on testing set...")
(loss, accuracy) = model.evaluate(testData, testLabels,
    batch_size=128, verbose=1)
print("[INFO] loss={:.4f}, accuracy: {:.4f}%".format(loss, accuracy * 100))

print("[INFO] dumping architecture and weights to file...")
model.save("and_model.hdf5")

So that's how the neural network calculates – a larger one is going to make a much larger matrix, and a multi-layer network is going to have more matrices, but it's still using the same type of calculation you saw in the spreadsheet!
{"url":"https://howtotrainyourrobot.com/into-neural-networks-part-1/","timestamp":"2024-11-08T21:50:06Z","content_type":"text/html","content_length":"94771","record_id":"<urn:uuid:8d55e002-dc8c-4be1-b817-404760b0c66e>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00231.warc.gz"}
Lesson 6 Modeling with Inequalities Let's look at solutions to inequalities. 6.1: Possible Values The stage manager of the school musical is trying to figure out how many sandwiches he can order with the $83 he collected from the cast and crew. Sandwiches cost $5.99 each, so he lets \(x\) represent the number of sandwiches he will order and writes \(5.99x \leq 83\). He solves this to 2 decimal places, getting \(x \leq 13.86\). Which of these are valid statements about this situation? (Select all that apply.) 1. He can call the sandwich shop and order exactly 13.86 sandwiches. 2. He can round up and order 14 sandwiches. 3. He can order 12 sandwiches. 4. He can order 9.5 sandwiches. 5. He can order 2 sandwiches. 6. He can order -4 sandwiches. 6.2: Elevator A mover is loading an elevator with many identical 48-pound boxes. The mover weighs 185 pounds. The elevator can carry at most 2000 pounds. 1. Write an inequality that says that the mover will not overload the elevator on a particular ride. Check your inequality with your partner. 2. Solve your inequality and explain what the solution means. 3. Graph the solution to your inequality on a number line. 4. If the mover asked, “How many boxes can I load on this elevator at a time?” what would you tell them? 6.3: Info Gap: Giving Advice Your teacher will give you either a problem card or a data card. Do not show or read your card to your partner. If your teacher gives you the problem card: 1. Silently read your card and think about what information you need to be able to answer the question. 2. Ask your partner for the specific information that you need. 3. Explain how you are using the information to solve the problem. Continue to ask questions until you have enough information to solve the problem. 4. Share the problem card and solve the problem independently. 5. Read the data card and discuss your reasoning. If your teacher gives you the data card: 1. Silently read your card. 2. 
Ask your partner "What specific information do you need?" and wait for them to ask for information. If your partner asks for information that is not on the card, do not do the calculations for them. Tell them you don't have that information.
3. Before sharing the information, ask "Why do you need that information?" Listen to your partner's reasoning and ask clarifying questions.
4. Read the problem card and solve the problem independently.
5. Share the data card and discuss your reasoning.

Pause here so your teacher can review your work. Ask your teacher for a new set of cards and repeat the activity, trading roles with your partner.

In a day care group, nine babies are five months old and 12 babies are seven months old. How many full months from now will the average age of the 21 babies first surpass 20 months old?

We can represent and solve many real-world problems with inequalities. Whenever we write an inequality, it is important to decide what quantity we are representing with a variable. After we make that decision, we can connect the quantities in the situation to write an expression, and finally, the whole inequality. As we are solving the inequality or equation to answer a question, it is important to keep the meaning of each quantity in mind. This helps us to decide if the final answer makes sense in the context of the situation.

For example: Han has 50 centimeters of wire and wants to make a square picture frame with a loop to hang it that uses 3 centimeters for the loop. This situation can be represented by \(3+4s=50\), where \(s\) is the length of each side (if we want to use all the wire). We can also use \(3+4s\leq50\) if we want to allow for solutions that don't use all the wire. In this case, any positive number that is less than or equal to 11.75 cm is a solution to the inequality. Each solution represents a possible side length for the picture frame, since Han can bend the wire at any point.
In other situations, the variable may represent a quantity that increases by whole numbers, such as numbers of magazines, loads of laundry, or students. In those cases, only whole-number solutions make sense.

• solution to an inequality

A solution to an inequality is a number that can be used in place of the variable to make the inequality true. For example, 5 is a solution to the inequality \(c<10\), because it is true that \(5<10\). Some other solutions to this inequality are 9.9, 0, and -4.
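The elevator activity above can be checked numerically: the inequality is 48b + 185 ≤ 2000, where b is the number of boxes, and only whole-number solutions make sense.

```python
# Elevator problem: 48b + 185 <= 2000, where b is the number of boxes.
BOX = 48      # pounds per box
MOVER = 185   # pounds
LIMIT = 2000  # elevator capacity in pounds

# solving for b: b <= (2000 - 185) / 48
max_exact = (LIMIT - MOVER) / BOX
print(max_exact)            # 37.8125

# only a whole number of boxes makes sense, so round down
max_boxes = int(max_exact)
assert BOX * max_boxes + MOVER <= LIMIT          # 37 boxes fit
assert BOX * (max_boxes + 1) + MOVER > LIMIT     # 38 boxes would not
print(max_boxes)            # 37
```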
{"url":"https://im.kendallhunt.com/MS_ACC/students/2/4/6/index.html","timestamp":"2024-11-13T07:45:37Z","content_type":"text/html","content_length":"69090","record_id":"<urn:uuid:d9b3947b-3eaf-4971-81ba-803e0269ace9>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00410.warc.gz"}
CCO '96 P5 - All Roads Lead Where?

Canadian Computing Competition: 1996 Stage 2, Day 2, Problem 2

There is an ancient saying that "All Roads Lead to Rome". If this were true, then there is a simple algorithm for finding a path between any two cities. To go from city A to city B, a traveller could take a road from A to Rome, then from Rome to B. Of course, a shorter route may exist.

The network of roads in the Roman Empire had a simple structure: beginning at Rome, a number of roads extended to the nearby cities. From these cities, more roads extended to the next further cities, and so on. Thus, the cities could be thought of as existing in levels around Rome, with cities in the k-th level only connected to cities in the (k−1)-st and (k+1)-st levels (Rome was considered to be at level 0). No loops existed in the road network. Any city in level k was connected to a single city in level k−1, but could be connected to zero or more cities in level k+1. Thus, to get to Rome from a given city in level k, a traveller could simply walk along the single road leading to the connected level-(k−1) city, and repeat this process, with each step getting closer to Rome.

Given a network of roads and cities, your task is to find the shortest route between any two given cities, where distance is measured in the number of intervening cities.

Input Specification

The first line of input contains two numbers in decimal notation separated by a single space. The first number is the number of roads in the road network to be considered. The second number represents the number of queries to follow later in the file. The next lines in the input, one per road, each contain the names of a pair of cities separated by a single space. A city name consists of at most ten letters, the first of which is in uppercase. No two cities begin with the same letter. The name Rome always appears at least once in this section of input; this city is considered to be at level 0, the lowest-numbered level.
The pairs of names indicate that a road connects the two named cities. The first city named on a line exists in a lower level than the second named city. The road structure obeys the rules described above. No two lines of input in this section are repeated.

The next lines in the input, one per query, each contain the names of a pair of cities separated by a single space. City names are as described above. These pairs of cities are the query pairs. Your task for each query pair is to find the shortest route from the first named city to the second. Each of the cities in a query pair is guaranteed to have appeared somewhere in the previous input section describing the road structure.

Output Specification

For each of the query pairs, output a sequence of uppercase letters indicating the shortest route between the two query pair cities. The sequence must be output as consecutive letters, without intervening whitespace, on a single line. The first output line corresponds to the first query pair, the second output line corresponds to the second query pair, and so on. The letters in each sequence indicate the first letter of the cities on the desired route between the query pair cities, including the query pair cities themselves. A city will never be paired with itself in a query.

Sample Input

7 3
Rome Turin
Turin Venice
Turin Genoa
Rome Pisa
Pisa Florence
Venice Athens
Turin Milan
Turin Pisa
Milan Florence
Athens Genoa

Sample Output

TRP
MTRPF
AVTG

the range is pretty much implied: since there are 26 letters and no two cities can begin with the same letter, there can only be 26 cities
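Because the road network is a tree rooted at Rome, each city except Rome has a single lower-level neighbour, and the shortest route is found by walking both endpoints up toward Rome until the paths meet. A sketch of that idea (function and variable names are ours, not part of the problem statement), using the road list from the sample:

```python
# Shortest route in the Roman road tree.
# parent[c] is the unique lower-level city adjacent to c.
def route(parent, a, b):
    # path from a up to Rome
    up = [a]
    while up[-1] in parent:
        up.append(parent[up[-1]])
    seen = {city: i for i, city in enumerate(up)}
    # walk b upward until we hit a's path, recording the way
    down = []
    c = b
    while c not in seen:
        down.append(c)
        c = parent[c]
    # a -> meeting city, then back down to b; output first letters
    path = up[:seen[c] + 1] + down[::-1]
    return "".join(city[0] for city in path)

parent = {"Turin": "Rome", "Venice": "Turin", "Genoa": "Turin",
          "Pisa": "Rome", "Florence": "Pisa", "Athens": "Venice",
          "Milan": "Turin"}
print(route(parent, "Turin", "Pisa"))      # TRP
print(route(parent, "Milan", "Florence"))  # MTRPF
print(route(parent, "Athens", "Genoa"))    # AVTG
```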
{"url":"https://dmoj.ca/problem/cco96p5","timestamp":"2024-11-12T12:03:27Z","content_type":"text/html","content_length":"30641","record_id":"<urn:uuid:178a89ec-6eb3-494d-a8e9-da38ce7eaffa>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00434.warc.gz"}
Fraction calculator

This calculator divides an integer (or whole number) by a fraction. To divide an integer by a fraction, multiply the integer by the denominator and place the result over the numerator. Then simplify the result to lowest terms or a mixed number.

The result:
6 / 1/4 = 24/1 = 24
The spelled result in words is twenty-four.

How do we solve fractions step by step?

Rules for expressions with fractions:
- Use a forward slash to divide the numerator by the denominator, i.e., for five-hundredths, enter 5/100. If you use mixed numbers, leave a space between the whole and fraction parts.
- Mixed numerals (mixed numbers or fractions): keep one space between the integer and the fraction, and use a forward slash for the fraction part, i.e., 1 2/3. An example of a negative mixed fraction: -5 1/2.
- Because the slash is both the sign for the fraction line and for division, use a colon (:) as the operator for dividing fractions, i.e., 1/2 : 1/3.
- Decimals (decimal numbers): enter with a decimal point; they are automatically converted to fractions.

The calculator follows the well-known rules for the order of operations. The most common mnemonics for remembering this order are:
- PEMDAS – Parentheses, Exponents, Multiplication, Division, Addition, Subtraction.
- BEDMAS – Brackets, Exponents, Division, Multiplication, Addition, Subtraction.
- BODMAS – Brackets, Of or Order, Division, Multiplication, Addition, Subtraction.
- GEMDAS – Grouping Symbols (brackets (){}), Exponents, Multiplication, Division, Addition, Subtraction.
- MDAS – Multiplication and Division have the same precedence over Addition and Subtraction. The MDAS rule is the order-of-operations part of the PEMDAS rule.

Be careful: always do multiplication and division before addition and subtraction. Operators with the same priority (+ and -, or * and /) are evaluated from left to right.

Last Modified: October 9, 2024
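The worked result above can be checked with Python's exact-arithmetic fractions module, which applies the same rule (6 / (1/4) = 6 × 4 / 1 = 24) and simplifies to lowest terms automatically:

```python
from fractions import Fraction

# the calculator's example: 6 / (1/4)
result = 6 / Fraction(1, 4)
print(result)                # 24

# a case that needs simplifying to lowest terms: 6 / (4/10) = 60/4 = 15
print(6 / Fraction(4, 10))   # 15
```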
{"url":"https://www.hackmath.net/en/calculator/fraction?input=6+%2F+1%2F4","timestamp":"2024-11-12T05:36:34Z","content_type":"text/html","content_length":"29528","record_id":"<urn:uuid:3668d473-5517-4c5b-871b-a91ed8add070>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00568.warc.gz"}
1 Introduction

The Seine estuary, extending over about 160 km from the river mouth to the dam of Poses (Fig. 1), is a macrotidal and hyposynchronous estuary. The tidal amplitude reaches seven metres during spring tides at Le Havre. The mean river discharge is 480 m^3/s and varies from 100 m^3/s to 2000 m^3/s. In this river, suspended particulate matter (SPM) varies from a few tens of milligrammes per litre at the free surface to 10 g/l near the sediment bed, in the zone of accumulation of suspended materials called the turbidity maximum (TM). This zone moves downstream and upstream following the ebb and flood tides. The extension and movement of the TM depend mainly on the tide and the river flow. Notably, Brenon et al. [5] showed with numerical simulations that the tidal wave asymmetry, caused by hydrodynamical and morphological effects, is responsible for the formation of the TM; density stratification affects only the form, and slightly the location, of the TM.

Fig. 1

The hydrosedimentary models previously applied to the Seine river are based on the passive-scalar hypothesis. In this approach, the settling velocity is imposed by an empirical relation [21]. The main difficulty resides in the prescription of sediment exchange fluxes between the sediment bed and the water column [12,17], which requires a large amount of in-situ data for the calibration step [19]. Since the early 1990s, an alternative approach for sediment transport, namely two-phase flow modelling, has been developed [4,13,20]. It differs from the classical one by solving mass and momentum equations for each phase: the fluid phase for the water and the solid phase for the sediment. The solid phase can be considered a continuum since the spatial scale of averaging is large compared to the particle diameter. This approach gives a theoretical framework for sediment transport modelling that includes fluid–particle and particle–particle interactions.
Given pertinent closures, these models should be able to represent the whole set of sediment transport processes, such as suspended transport, sedimentation and consolidation. The major interest with respect to the classical approach consists in the continuous treatment of the whole domain: the sediment fluxes are integrated in the model equations. In this article, we present intermediate results of a numerical simulation of the TM in the Seine estuary, using a width-integrated 2D vertical two-phase flow model. Such an application is original and represents a step in the development of a two-phase flow model for sediment transport in estuaries. We point out that the present application is process oriented, the main objective being to demonstrate the capability of the two-phase approach to deal with the TM in an estuary. As two-phase modelling for sediment transport is at an early stage of development, some processes specific to cohesive sediments are not accounted for in the present model, so the comparison with in-situ measurements can only be very qualitative for the moment.

2 Description of the model

In the present model, two phases are considered, a fluid phase and a solid phase for the suspended particulate matter (SPM), using an Eulerian approach for both phases. Each phase is treated as a continuum, and the governing equations consist of two equations per phase for mass and momentum conservation [6]. Such a model was developed by Barbry et al. [4] for sediment transport. As the geometry of the Seine estuary presents some convergence upstream, it is necessary to take into account the variation of the estuarine width. Based on the work of Barbry et al. [4], a width-integrated two-phase flow model based on Eqs. (1)–(3) is proposed here:

$\frac{\partial(\alpha_k\rho_k B)}{\partial t}+\vec{\nabla}\cdot(\alpha_k\rho_k\vec{u}_k B)=0$ (1)

$\frac{\partial(\alpha_k\rho_k\vec{u}_k)}{\partial t}+\vec{\nabla}\cdot(\alpha_k\rho_k\vec{u}_k\vec{u}_k)=\alpha_k\rho_k\vec{g}+\vec{M}_k+\vec{\nabla}\cdot\left[\alpha_k\left(-p_k\bar{\bar{I}}+\bar{\bar{\tau}}_k+\bar{\bar{T}}_k^{Re}\right)\right]$ (2)

Here, $\alpha_k$, $\vec{u}_k$ and $\rho_k$ represent the volume fraction, velocity vector and density of phase k.
$\vec{g}$ is the gravity acceleration and B the width of the estuary. $\bar{\bar{\tau}}_k$ and $\bar{\bar{T}}_k^{Re}$ represent the viscous stress tensor and the Reynolds stress tensor, respectively, and $p_k$ is the pressure of phase k. The interfacial momentum transfer term $\vec{M}_k$ arises from stresses acting on the interface. It is defined, following Drew and Lahey [6], by Eq. (4):

$\vec{M}_k=p_{ki}\vec{\nabla}\alpha_k-\bar{\bar{\tau}}_{ki}\cdot\vec{\nabla}\alpha_k+\vec{M}'_k$ (4)

$\vec{M}'_s=-\vec{M}'_f=\frac{\alpha_s\rho_s}{\tau_{fs}}\vec{u}_r \quad\text{with}\quad \vec{u}_r=\vec{u}_s-\vec{u}_f$ (5)

$\tau_{fs}=\frac{4d\rho_s}{3\rho_f C_D\|\vec{u}_r\|}\,\alpha_f^{-2.65}$ (6)

The first two terms represent the interfacially averaged pressure $p_{ki}$ and shear stress $\bar{\bar{\tau}}_{ki}$ of phase k. The last term $\vec{M}'_k$ represents forces associated with drag, virtual mass, lift and unsteady effects. The particulate Reynolds number is defined by $Re_p=\alpha_f d\|\vec{u}_r\|/\nu_f$, where d represents the particle diameter, $\nu_f$ the kinematic viscosity of the fluid, and $\vec{u}_r$ the relative velocity between the solid and fluid phases (5). In the case of small sediment particles falling in water, the particulate Reynolds number is of the order of unity [11]; thus the drag force is dominant and only this force is considered here. $\tau_{fs}$ is the particle relaxation time defined by Eq. (6) [7], where $C_D$ is the averaged drag coefficient for a single particle in a suspension, given by [18]: $C_D=\frac{24}{Re_p}\left(1+0.15\,Re_p^{0.687}\right)$.

The constitutive law is modelled, Eq. (7), following Lundgren [16], by introducing an amplification factor for viscous strain, namely β, Eq. (8) [9], appearing in the effective viscosity expressions: $\mu_{ff}=\alpha_f\mu_f$; $\mu_{fs}=\alpha_s\mu_f$; $\mu_{ss}=\alpha_s^2\beta\mu_f$; $\mu_{sf}=\alpha_s\alpha_f\beta\mu_f$. The parameter β takes into account the non-Newtonian character of the flow when $\alpha_s$ reaches high values.

$\vec{\nabla}\cdot(\alpha_k\bar{\bar{\tau}}_k)=\frac{1}{B}\left[\vec{\nabla}\cdot\left(\mu_{kf}\bar{\bar{D}}_f^b\right)+\vec{\nabla}\cdot\left(\mu_{ks}\bar{\bar{D}}_s^b\right)\right] \quad\text{with}\quad \bar{\bar{D}}_k^b=\frac{1}{2}\left[\vec{\nabla}(\vec{u}_k B)+\left(\vec{\nabla}(\vec{u}_k B)\right)^T\right]$ (7)

$\beta=\frac{5}{2}+\frac{9}{4}\,\alpha_s\,\frac{1}{1+\xi/2}\left[\frac{1}{\xi}-\frac{1}{1+\xi}-\frac{1}{(1+\xi)^2}\right]$ (8)

From geometrical arguments, Drew and Lahey [6] proposed a formula for ξ in terms of h, the interparticle spacing, and $\alpha_s^{max}$, the maximum volume fraction of particles.
For rigid spheres, this value corresponds to the maximum packing and is equal to 0.6. Kinematic and dynamic conditions are imposed at the free surface, whereas a no-slip condition for the fluid phase velocity and a slip condition for the solid phase velocity are imposed at the bottom. The bottom shear stress is estimated by a Strickler law. Details concerning the values of the Strickler coefficient are given in Section 3. A zero-equation model is used to simulate the turbulence of the fluid phase, in which the mixing length is modelled by the formulation of Escudier [8]. The fluid turbulent viscosity is added to the fluid molecular viscosity in the constitutive law (7). The numerical solution is based on a fractional step algorithm coupled with a finite difference formulation. A σ-coordinate system is implemented in order to fit the computational mesh to the free surface at each time step (see [10] for a detailed description).

3 Physical and computational settings

The computational domain extends from the extremity of the semi-submersible dykes to the dam of Poses. The bathymetry of the estuary comes from the SHOM^1 (bathymetry of 1989). A 320×31 grid is used, with horizontal refinement near the river mouth (250–1250 m) and vertical refinement near the bottom (Fig. 2). The tidal elevation is imposed at the sea boundary from the SHOM prediction, and the velocity is given by a simplified momentum equation for each phase. A radiation condition is set for the free surface elevation at the inland boundary and the velocity is imposed with reference to the river discharge. No solid discharge is set at the inland boundary (α_s = 0). The initial condition for the sediment concentration is a one-metre-thick layer of sediments with a mean concentration of 25 g/l between 20 and 60 km from the river mouth. This corresponds to 650,000 tons of mobilisable sediments.
The initial distribution of sediments is quite arbitrary, but it has been chosen close to the physical location of the TM zone in order to shorten the initialisation run.

Fig. 2

The particle diameter is set to 16 μm with a density of 1700 kg/m³. This choice is justified by observations from Lesourd et al. [15], who characterised the sediment particles in the Seine estuary in the framework of the Seine Aval Project. They concluded that there are mainly two representative populations of sediment: fine sediment particles with a mean diameter of 4 μm and a true density of ρ_s = 1400 kg/m³, and macroflocs with a mean diameter of 20 μm and a true density of ρ_s = 1100 kg/m³. The particle density used for the numerical simulations is greater than the observed ones; however, this choice of particle characteristics leads to a settling velocity of the order of the one observed in the estuary (W_stokes ≈ 10^−5 m/s). The simulations have been performed over a semi-lunar cycle with a river discharge of 300 m³/s. This choice limits the exchange of sediments between the estuary and the open sea. If such exchanges arise, they are represented by a virtual reservoir. Initially, this reservoir is empty. It is filled by the sediment fluxes leaving the estuary during ebb tide. For inflow, the solid fraction at the sea boundary is calculated as the ratio of the sediment mass in the reservoir to the total water volume flowing out during the previous ebb tide. The Strickler coefficients decrease from the river mouth (60 m^1/3/s) to the dam of Poses (20 m^1/3/s). These values were obtained by a calibration step in order to fit the numerical results to the SHOM measurements and, in particular, to reproduce the tidal wave asymmetry. The comparison with the Schéma d'aptitude et d'utilisation de la mer (SAUM) [3] observations is made in the second part of the semi-lunar cycle.
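As a quick order-of-magnitude check on the closures above, the sketch below evaluates the drag coefficient C_D, the particle relaxation time τ_fs of Eq. (6), and a Stokes settling velocity for the particle characteristics used in the simulations. The fluid properties (ρ_f = 1000 kg/m³, ν_f = 10^−6 m²/s) and the use of the settling velocity as the relative velocity |u_r| are assumptions made for illustration, not values taken from the article.

```python
# Order-of-magnitude check of the drag closure (Eqs. (5)-(6)) for the
# particle characteristics used in the simulations (d = 16 um, rho_s = 1700 kg/m3).
# Assumed fluid properties: rho_f = 1000 kg/m3, nu_f = 1e-6 m2/s.
g = 9.81          # gravity (m/s^2)
d = 16e-6         # particle diameter (m)
rho_s = 1700.0    # particle density (kg/m^3)
rho_f = 1000.0    # fluid density (kg/m^3), assumed
nu_f = 1e-6       # fluid kinematic viscosity (m^2/s), assumed
alpha_f = 1.0     # fluid volume fraction, dilute limit

# Stokes settling velocity, used here as an estimate of |u_r|
w = (rho_s - rho_f) * g * d**2 / (18.0 * rho_f * nu_f)

# Particulate Reynolds number and drag coefficient [18]
re_p = alpha_f * d * w / nu_f
c_d = 24.0 / re_p * (1.0 + 0.15 * re_p**0.687)

# Particle relaxation time, Eq. (6)
tau_fs = 4.0 * d * rho_s / (3.0 * rho_f * c_d * w) * alpha_f**-2.65

print(f"w = {w:.2e} m/s, Re_p = {re_p:.2e}, tau_fs = {tau_fs:.2e} s")
```

With these assumed values, Re_p comes out well below unity, consistent with the article's statement that the drag force dominates for small sediment particles falling in water.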
The initial condition for sediment is lost after seven days of simulation. We point out that only the diameter and the density of the sediment particles are imposed; no critical shear stresses or erosion flux module are imposed in the following simulation.

4 Results and discussion

Water levels and mean current velocities are compared with SHOM measurements for spring and neap tide at different stations along the Seine estuary (Fig. 3). The surface water level is nearly sinusoidal at Honfleur (8 km from the river mouth) and becomes strongly asymmetric at Duclair (87 km from the river mouth). The flood lasts four hours, whereas the ebb lasts eight hours. As a consequence, flood-tide current velocities are stronger than ebb-tide ones. These strong currents influence the suspended sediment transport by increasing bottom erosion at flood tide, leading to an upstream movement of sediment up to the point where the river flow becomes dominant for the transport. This phenomenon is called "tidal pumping" [1]. In the Seine estuary, this process is preponderant for the formation and the displacement of the TM [5].

Fig. 3

Afterwards, the model is run with tidal and river flow conditions close to those of the SAUM [3]. Fig. 4 shows the numerical results (left side) and observations of the SAUM (right side) in terms of sediment concentration. It has been observed that during neap tide and at low river discharge, the quantity of SPM is low and the concentration is about 0.2 g/l [2]. Therefore, the TM is not clearly formed. The numerical model also predicts a low quantity of SPM (less than 0.1 g/l), with a maximum near Honfleur. The TM is not developed.

Fig. 4

Fig. 5 shows numerical results and observations during spring tide. The observations show a well-developed TM over the water depth at low water levels (LW) and strong sedimentation at high water levels (HW).
The concentration is greater than 1 g/l and the TM moves horizontally over a distance of about 15 km between Honfleur and Tancarville during a tidal cycle. The numerical model predicts a clearly formed TM with concentrations of about 1 g/l near the bottom. Its core is located near Honfleur, less than 10 km from the river mouth, at LW, and at 20 km from the river mouth at HW. The numerical TM thus has a horizontal displacement of about ten kilometres during a tidal cycle. This is in quite good agreement with the in-situ observations. The vertical extent of the TM, especially at LW, is not well represented in the numerical results: contrary to the observations, there is no SPM in the upper part of the water column.

Fig. 5

The results presented above show that the two-phase flow model gives a coherent description of suspended sediment transport in an estuary. As stated in the introduction, the simulation of the near-bed region is one of the main interests of the two-phase flow model compared to the classical approach. Fig. 6 shows the sediment concentration in the near-bed region during a spring tide. A relatively concentrated layer, with concentrations of about ten grams per litre at the bottom, and a dilute suspension above this layer are observed. This concentrated layer exchanges sediment with the dilute suspension during the tidal cycle: it plays the role of a sediment reservoir for the SPM. This illustrates the fact that deposition and resuspension processes are potentially represented by the two-phase flow model. Moreover, this concentrated layer moves horizontally under the influence of tidal currents and river flow, as shown in Fig. 6. This concentrated layer can be related to the existence of the fluid–mud layer associated with the TM. In the two-phase flow model, the fluid–mud layer is represented as a non-Newtonian fluid. This characteristic is taken into account by the introduction of the parameter β (8) in the constitutive equation (7).
The fluid–mud layer is thus integrated in the same domain as the suspension and the sediment bottom, and it is represented by the same type of equations as the fluid. However, these results are purely qualitative for the moment and need to be studied further. In particular, a quantitative comparison with experimental measurements is necessary to validate the simulated erosion/deposition fluxes. Another issue raised by these results is the order of magnitude of the concentration in the "concentrated" fluid–particle layer. The concentration simulated here is of the order of one to ten grams per litre, whereas the measured values for the fluid mud in the Seine estuary are of the order of a hundred grams per litre. In the current model, the dissipation in the sediment bed and in the layer just above it is only partially taken into account. Some other processes have not been taken into account in the present model, such as the fluid–particle turbulent interactions, flocculation, the exchanges of sediments with the intertidal mudflats of the Seine estuary [14] or the wave action at the inlet, amongst others. All these phenomena could also significantly affect the SPM dynamics in the estuary.

Fig. 6

5 Conclusion

A two-phase flow model was adapted for the simulation of the TM in the Seine estuary. The numerical results are in rather good agreement with observations, and the TM's motion is qualitatively reproduced for different tidal conditions. Moreover, a concentrated sediment layer is observed in the computational results, which can be identified with a fluid–mud layer. These results illustrate one of the major interests of a two-phase flow model for sediment transport in estuaries: the modelling of the whole water–sediment column, from the sediment bottom to the suspension. The introduction of some other effects, such as flocculation or turbulence interactions, could improve the modelling.
Current developments concern the turbulence modelling and, more especially, the fluid–particle turbulent interactions. Concerning long-term simulations at the scale of an estuary, one must keep in mind that such a two-phase flow model is computationally expensive and cannot be used for such applications for the moment. This work has been financially supported by the European Commission (FLOCODS Project, FP5-Contract no ICA4-CT2001-10035). The computations have been carried out at the Centre de ressources informatiques de Haute-Normandie, Saint-Étienne du Rouvray (CRIHAN).

^1 S.H.O.M.: Service hydrographique et océanographique de la marine.
A polynomial algorithm for checking the fulfillment of the condition of the morphic image of the extended maximal prefix code

The maximal prefix code is defined in the usual way, as presented in standard courses. An extended maximal prefix code is a finite language containing some maximal prefix code as a subset (proper or improper). The (homo)morphism is also defined in the usual way, and the inverse morphism is defined based on it. In our previous publications, infinite iterations of finite languages as ω-languages have been considered. A hypothesis was formulated that the infinite iterations of two given finite languages coincide if and only if both of these languages can be obtained using the following algorithm. First, some new alphabet is selected; secondly, two extended maximal prefix codes over this alphabet are considered; thirdly, the same morphism, translating words over the new alphabet into words over the original alphabet, is applied to both extended maximal prefix codes. The resulting morphic images of the two extended maximal prefix codes should coincide with the two given finite languages. (More precisely, some equivalent versions of this hypothesis have been formulated, as well as another hypothesis, which is weaker, but which becomes relevant if the first hypothesis does not hold.) This paper solves the problem of checking whether one language can be obtained by applying such an algorithm to some other language. More precisely, a non-deterministic algorithm for this problem is trivial and was given in one of our previous publications; here we also give a deterministic polynomial algorithm for checking the fulfillment of this condition. Thus, the problem considered in this paper can be considered a step towards verifying the formulated hypotheses.

Melnikov B., Melnikova A.
Infinite trees in the algorithm for checking the equivalence condition of iterations of finite languages. Part I // International Journal of Open Information Technologies. – 2021. – Vol. 9, No. 4. – P. 1–11 (in Russian).

Melnikov B., Melnikova A. Infinite trees in the algorithm for checking the equivalence condition of iterations of finite languages. Part II // International Journal of Open Information Technologies. – 2021. – Vol. 9, No. 5. – P. 1–11 (in Russian).

Melnikov B. Variants of finite automata corresponding to infinite iterative morphism trees. Part I // International Journal of Open Information Technologies. – 2021. – Vol. 9, No. 7. – P. 5–13 (in Russian).

Melnikov B. Variants of finite automata corresponding to infinite iterative morphism trees. Part II // International Journal of Open Information Technologies. – 2021. – Vol. 9, No. 10. – P. 1–8 (in Russian).

Melnikov B. The equality condition for infinite catenations of two sets of finite words // International Journal of Foundation of Computer Science. – 1993. – Vol. 4, No. 3. – P. 267–274.

Melnikov B. Semi-lattices of the subsets of potential roots in the problems of the formal languages theory. Part I. Extracting the root from the language // International Journal of Open Information Technologies. – 2022. – Vol. 10, No. 4. – P. 1–9 (in Russian).

Melnikov B. Semi-lattices of the subsets of potential roots in the problems of the formal languages theory. Part II. Constructing an inverse morphism // International Journal of Open Information Technologies. – 2022. – Vol. 10, No. 5. – P. 1–8 (in Russian).

Melnikov B. Semi-lattices of the subsets of potential roots in the problems of the formal languages theory. Part III. The condition for the existence of a lattice // International Journal of Open Information Technologies. – 2022. – Vol. 10, No. 7. – P. 1–9 (in Russian).

Melnikov B. Subclasses of the class of context-free languages (monograph). – Moscow, Moscow University Press. – 1995. – ISBN 5-211-03448-1 (in Russian).

Abramyan M., Melnikov B.
Algorithms of transformation of finite automata, corresponding to infinite iterative trees // Modern Information Technologies and IT Education. – 2021. – Vol. 17, No. 1. – P. 13–23 (in Russian).

Melnikov B., Melnikova A. A polynomial algorithm for constructing a finite automaton for checking the equality of infinite iterations of two finite languages // International Journal of Open Information Technologies. – 2021. – Vol. 9, No. 11. – P. 1–10 (in Russian).

Lallement G. Semigroups and Combinatorial Applications. – NJ, Wiley & Sons, Inc. – 1979. – 376 p.

Graham R., Knuth D., Patashnik O. Concrete Mathematics. A Foundation for Computer Science. – USA, Addison-Wesley Professional. – 1994. – xiv+657 p.

ISSN: 2307-8162
minmaxBias: Generic Function for the Computation of Bias-Optimally Robust... in ROptEst: Optimally Robust Estimation

Generic function for the computation of bias-optimally robust ICs in case of infinitesimal robust models. This function is rarely called directly.

Usage:

minmaxBias(L2deriv, neighbor, biastype, ...)

## S4 method for signature 'UnivariateDistribution,ContNeighborhood,BiasType'
minmaxBias(L2deriv, neighbor, biastype, symm, trafo,
           maxiter, tol, warn, Finfo, verbose = NULL)

## S4 method for signature 'UnivariateDistribution,ContNeighborhood,asymmetricBias'
minmaxBias(L2deriv, neighbor, biastype, symm, trafo,
           maxiter, tol, warn, Finfo, verbose = NULL)

## S4 method for signature 'UnivariateDistribution,ContNeighborhood,onesidedBias'
minmaxBias(L2deriv, neighbor, biastype, symm, trafo,
           maxiter, tol, warn, Finfo, verbose = NULL)

## S4 method for signature 'UnivariateDistribution,TotalVarNeighborhood,BiasType'
minmaxBias(L2deriv, neighbor, biastype, symm, trafo,
           maxiter, tol, warn, Finfo, verbose = NULL)

## S4 method for signature 'RealRandVariable,ContNeighborhood,BiasType'
minmaxBias(L2deriv, neighbor, biastype, normtype, Distr,
           z.start, A.start, z.comp, A.comp,
           Finfo, trafo, maxiter, tol, verbose = NULL, ...)

## S4 method for signature 'RealRandVariable,TotalVarNeighborhood,BiasType'
minmaxBias(L2deriv, neighbor, biastype, normtype, Distr,
           z.start, A.start, z.comp, A.comp,
           Finfo, trafo, maxiter, tol, verbose = NULL, ...)

Arguments:

L2deriv: L2-derivative of some L2-differentiable family of probability measures.
neighbor: object of class "Neighborhood".
biastype: object of class "BiasType".
normtype: object of class "NormType".
...: additional arguments to be passed to E.
Distr: object of class "Distribution".
symm: logical: indicating symmetry of L2deriv.
z.start: initial value for the centering constant.
A.start: initial value for the standardizing matrix.
z.comp: logical indicator for which indices need to be computed and which are 0 due to symmetry.
A.comp: matrix of logical indicators for which indices need to be computed and which are 0 due to symmetry.
trafo: matrix: transformation of the parameter.
maxiter: the maximum number of iterations.
tol: the desired accuracy (convergence tolerance).
warn: logical: print warnings.
Finfo: Fisher information matrix.
verbose: logical: if TRUE, some messages are printed.

The methods compute the bias-optimal influence curve:

- for symmetric bias, for L2-differentiable parametric families with an unknown one-dimensional parameter;
- for asymmetric bias, for L2-differentiable parametric families with an unknown one-dimensional parameter;
- for symmetric bias, for L2-differentiable parametric families with an unknown k-dimensional parameter (k > 1) where the underlying distribution is univariate;
- for symmetric bias, for L2-differentiable parametric families in a setting where we are interested in a p = 1 dimensional aspect of an unknown k-dimensional parameter (k > 1) where the underlying distribution is univariate.

References:

Rieder, H. (1980) Estimates derived from robust tests. Ann. Stats. 8: 106–115.

Ruckdeschel, P. (2005) Optimally One-Sided Bounded Influence Curves. Mathematical Methods in Statistics 14(1), 105–131.

Kohl, M. (2005) Numerical Contributions to the Asymptotic Theory of Robustness. Bayreuth: Dissertation.
GMAT Profit, Loss & Discount Formulas | Profit, Loss & Discount Cheat Sheet [PDF]

Aug. 28, 2024

Profit, Loss & Discount Formulas for GMAT PDF

Profit, Loss, and Discount is an important topic for the GMAT, with questions asked under the Word Problems category. The number of concepts in this area is modest, and the formulas can be used to answer the majority of the problems. This document provides a variety of profit, loss, and discount formulas, tips, and shortcuts. Using these formulas can simplify your work and save a lot of time. A PDF of the Profit, Loss & Discount formulas is available for download below.

Download Profit & Loss Formula For GMAT PDF

Profit: A profit is made when a product is sold for more than its cost price.

Loss: A loss occurs when a product is sold for less than its cost price.

Cost Price (CP): The amount paid for a product or commodity at the time of purchase, commonly abbreviated CP. The cost price is further broken down into two categories:
• Fixed Cost: The fixed cost remains constant regardless of conditions.
• Variable Cost: It may differ depending on the number of units.

Selling Price (SP): The price at which an item is sold, commonly abbreviated SP; also referred to as the sale price.

Marked Price (MP): The price labelled on an item by shopkeepers, on which a discount is offered to customers.
This document (Profit, Loss & Discount PDF) covers:

• Definitions of cost price, selling price and marked price
• Formulas for profit percentage, loss percentage and percentage discount
• Two items of the same C.P., one sold at a profit of x% and the other at an equal loss of x%
• Two items of the same S.P., one sold at a profit of x% and the other at an equal loss of x%
• Gain when a trader uses a false weight/balance/scale
• Profit earned on "Buy x, get y free" offers
• C.P. of x articles equal to the S.P. of y articles
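As an illustration of the formulas listed above, here is a short Python sketch (not part of the original cheat sheet) computing the profit percentage, the discount percentage, and the classic "two items sold at the same S.P., one at x% profit and one at x% loss" result, which always works out to a net loss of x²/100 per cent.

```python
# Basic profit/loss/discount formulas written out as small helpers.
def profit_percent(cp, sp):
    """Profit % = (SP - CP) / CP * 100 (a negative value means a loss)."""
    return (sp - cp) / cp * 100

def discount_percent(mp, sp):
    """Discount % = (MP - SP) / MP * 100."""
    return (mp - sp) / mp * 100

# Two items sold at the same selling price, one at x% profit, one at x% loss:
# the overall result is always a loss of (x^2 / 100) %.
def same_sp_profit_loss(sp, x):
    cp1 = sp / (1 + x / 100)   # cost price of the item sold at x% profit
    cp2 = sp / (1 - x / 100)   # cost price of the item sold at x% loss
    total_cp, total_sp = cp1 + cp2, 2 * sp
    return (total_sp - total_cp) / total_cp * 100  # overall % (negative = loss)

print(profit_percent(80, 100))       # 25.0 (% profit)
print(discount_percent(200, 150))    # 25.0 (% discount)
print(same_sp_profit_loss(100, 20))  # ~ -4.0, i.e. a 20^2/100 = 4% loss
```

The last case is a common GMAT shortcut: whenever the selling prices are equal and the profit and loss percentages are equal, the combined transaction is a loss, never break-even.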
Three Minute Tuesdays Archives - R.I.L.L.I.A.N.

Understanding the mode is important. The mode, or the most frequently occurring value in a data set, is a commonly used descriptive statistic that is especially useful when working with nominal or categorical data. The video below provides a brief overview of what the mode is, answers frequently asked questions (FAQs) about the mode, and gives real-world examples of using the mode. The mode is different from the mean (average) and the median: the mode is the most frequently occurring value, whereas the mean is the sum divided by the number of observations in the data set, and the median is the middle number when the data are ordered from smallest to largest. All of the data in the data set has to be in, or converted to, the same unit of measurement in order to find the mode. There can be more than one mode in a data set: some data sets have multiple values that occur the most. Watch the above video to see how the mode is used to find the state(s) with the most customer orders in the e-commerce store example, and how it is used in the plant nursery/garden store example to identify the type of flower or plant that survey respondents like to give as gifts. In conclusion, the mode is useful in a variety of data sets and scenarios across many industries. Refreshing your understanding of this basic descriptive statistic will help you use it in ways that are useful to you and your organization. Do you have any examples of how the mode is useful in analyzing data for your work or in your life? Share your example in the comments below!

Why It's Useful to Show the Incidence Rate per 10K or per 100K Population Instead of Just the Number of Cases

Incidence rates, or the rates of new cases of a disease, are often shown as the rate per 100,000 or per 10,000 population, and provide valuable insight beyond the raw number of new cases of the disease.
When comparing the incidence of a disease across localities with populations of different sizes, looking at the incidence rate per 10,000 or per 100,000 population gives a more complete picture than looking at the number of new cases alone. Examples are given in the quick, 3-minute video below to further illustrate this point. In conclusion, showing the incidence rate per 10,000 or per 100,000 population provides valuable insight in addition to the number of new cases of the disease. This is especially true when comparing the incidence rate of a disease across localities with populations of different sizes. In addition, looking at the incidence rate per population is essential when comparing that rate to a state or national average for the disease, or to a benchmark or goal expressed as an incidence rate per population.

How to Add Filters in Tableau

Adding filters in Tableau helps make your Tableau dashboards more useful. Filters allow you to display subsections of your data set in your dashboard. One way to add a filter is to use one worksheet in the dashboard as a filter for the other worksheets: select the filter icon in the menu that appears to the right of the worksheet when that worksheet is selected. Another way is to go to the drop-down menu to the right of the worksheet, hover over "Filters", and then select, from the drop-down menu that appears, which of the variables in the worksheet you want to filter by. The video below shows how to do both of these methods for adding filters in less than 3 minutes. As shown in the above video, adding filters to your Tableau dashboards makes it easier to display a selected subsection of your data set.

Data sources for the data shown in this video:

• U.S. Census Bureau. (2013–17 and 2008–12).
American Community Survey 5-Year Estimates. Retrieved February 5, 2020 from https://factfinder.census.gov/faces/nav/jsf/pages/index.xhtml

• County Health Rankings and Roadmaps. (2019). 2019 County Health Rankings Virginia. Health Factors: Air Pollution Particulate Matter. Retrieved February 5, 2020 from https://
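The two statistics discussed in these posts — the mode (including multiple modes) and the incidence rate per 100,000 population — can each be computed in a few lines of Python. The state abbreviations and case/population numbers below are made-up illustration data, not figures from the videos.

```python
from statistics import multimode

# Mode of a categorical variable: state(s) with the most customer orders.
# A data set can have more than one mode.
orders_by_state = ["VA", "CA", "VA", "TX", "CA"]
print(multimode(orders_by_state))  # ['VA', 'CA'] — two states tie for most orders

# Incidence rate per 100,000 population: new cases / population * 100,000.
def rate_per_100k(new_cases, population):
    return new_cases / population * 100_000

# Two localities with very different populations: the smaller one has fewer
# cases but a higher rate, which the raw counts alone would hide.
print(rate_per_100k(50, 250_000))     # 20.0 new cases per 100k
print(rate_per_100k(120, 1_000_000))  # 12.0 new cases per 100k
```

`statistics.multimode` (Python 3.8+) returns all most-frequent values, in the order first encountered, which matches the point made above that a data set may have multiple modes.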
KNeighborsClassifier

class sklearn.neighbors.KNeighborsClassifier(n_neighbors=5, *, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=None) [source]

Classifier implementing the k-nearest neighbors vote.

Read more in the User Guide.

Parameters:

n_neighbors : int, default=5
Number of neighbors to use by default for kneighbors queries.

weights : {'uniform', 'distance'}, callable or None, default='uniform'
Weight function used in prediction. Possible values:
• 'uniform' : uniform weights. All points in each neighborhood are weighted equally.
• 'distance' : weight points by the inverse of their distance. In this case, closer neighbors of a query point will have a greater influence than neighbors which are further away.
• [callable] : a user-defined function which accepts an array of distances, and returns an array of the same shape containing the weights.
Refer to the example entitled Nearest Neighbors Classification showing the impact of the weights parameter on the decision boundary.

algorithm : {'auto', 'ball_tree', 'kd_tree', 'brute'}, default='auto'
Algorithm used to compute the nearest neighbors:
• 'ball_tree' will use BallTree
• 'kd_tree' will use KDTree
• 'brute' will use a brute-force search.
• 'auto' will attempt to decide the most appropriate algorithm based on the values passed to fit method.
Note: fitting on sparse input will override the setting of this parameter, using brute force.

leaf_size : int, default=30
Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.

p : float, default=2
Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. This parameter is expected to be positive.
metric : str or callable, default='minkowski'
Metric to use for distance computation. Default is "minkowski", which results in the standard Euclidean distance when p = 2. See the documentation of scipy.spatial.distance and the metrics listed in distance_metrics for valid metric values.
If metric is "precomputed", X is assumed to be a distance matrix and must be square during fit. X may be a sparse graph, in which case only "nonzero" elements may be considered neighbors.
If metric is a callable function, it takes two arrays representing 1D vectors as inputs and must return one value indicating the distance between those vectors. This works for Scipy's metrics, but is less efficient than passing the metric name as a string.

metric_params : dict, default=None
Additional keyword arguments for the metric function.

n_jobs : int, default=None
The number of parallel jobs to run for neighbors search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Doesn't affect fit method.

Attributes:

classes_ : array of shape (n_classes,)
Class labels known to the classifier.

effective_metric_ : str or callable
The distance metric used. It will be same as the metric parameter or a synonym of it, e.g. 'euclidean' if the metric parameter is set to 'minkowski' and the p parameter is set to 2.

effective_metric_params_ : dict
Additional keyword arguments for the metric function. For most metrics will be same with metric_params parameter, but may also contain the p parameter value if the effective_metric_ attribute is set to 'minkowski'.

n_features_in_ : int
Number of features seen during fit.
Added in version 0.24.

feature_names_in_ : ndarray of shape (n_features_in_,)
Names of features seen during fit. Defined only when X has feature names that are all strings.
Added in version 1.0.

n_samples_fit_ : int
Number of samples in the fitted data.

outputs_2d_ : bool
False when y's shape is (n_samples,) or (n_samples, 1) during fit, otherwise True.

See also:

RadiusNeighborsClassifier : Classifier based on neighbors within a fixed radius.
KNeighborsRegressor : Regression based on k-nearest neighbors.
RadiusNeighborsRegressor : Regression based on neighbors within a fixed radius.
NearestNeighbors : Unsupervised learner for implementing neighbor searches.

Notes

See Nearest Neighbors in the online documentation for a discussion of the choice of algorithm and leaf_size.

Regarding the Nearest Neighbors algorithms, if it is found that two neighbors, neighbor k+1 and k, have identical distances but different labels, the results will depend on the ordering of the training data.

Examples

>>> X = [[0], [1], [2], [3]]
>>> y = [0, 0, 1, 1]
>>> from sklearn.neighbors import KNeighborsClassifier
>>> neigh = KNeighborsClassifier(n_neighbors=3)
>>> neigh.fit(X, y)
KNeighborsClassifier(...)
>>> print(neigh.predict([[1.1]]))
[0]
>>> print(neigh.predict_proba([[0.9]]))
[[0.666... 0.333...]]

fit(X, y) [source]

Fit the k-nearest neighbors classifier from the training dataset.

Parameters:

X : {array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples) if metric='precomputed'
Training data.

y : {array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
Target values.

Returns:

self : KNeighborsClassifier
The fitted k-nearest neighbors classifier.

get_metadata_routing() [source]

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:

routing : MetadataRequest
A MetadataRequest encapsulating routing information.

get_params(deep=True) [source]

Get parameters for this estimator.

Parameters:

deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : dict
Parameter names mapped to their values.

kneighbors(X=None, n_neighbors=None, return_distance=True) [source]

Find the K-neighbors of a point.

Returns indices of and distances to the neighbors of each point.

Parameters:

X : {array-like, sparse matrix}, shape (n_queries, n_features), or (n_queries, n_indexed) if metric == 'precomputed', default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.

n_neighbors : int, default=None
Number of neighbors required for each sample. The default is the value passed to the constructor.
return_distance : bool, default=True
    Whether or not to return the distances.

Returns

neigh_dist : ndarray of shape (n_queries, n_neighbors)
    Array representing the lengths to points, only present if
    return_distance=True.

neigh_ind : ndarray of shape (n_queries, n_neighbors)
    Indices of the nearest points in the population matrix.

Examples

In the following example, we construct a NearestNeighbors class from an
array representing our data set and ask who's the closest point to
[1, 1, 1]:

>>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=1)
>>> neigh.fit(samples)
NearestNeighbors(n_neighbors=1)
>>> print(neigh.kneighbors([[1., 1., 1.]]))
(array([[0.5]]), array([[2]]))

As you can see, it returns [[0.5]] and [[2]], which means that the element
is at distance 0.5 and is the third element of samples (indexes start at 0).
You can also query for multiple points:

>>> X = [[0., 1., 0.], [1., 0., 1.]]
>>> neigh.kneighbors(X, return_distance=False)
array([[1],
       [2]]...)

kneighbors_graph(X=None, n_neighbors=None, mode='connectivity')[source]#

Compute the (weighted) graph of k-Neighbors for points in X.

Parameters

X : {array-like, sparse matrix} of shape (n_queries, n_features), or (n_queries, n_indexed) if metric == 'precomputed', default=None
    The query point or points. If not provided, neighbors of each indexed
    point are returned. In this case, the query point is not considered
    its own neighbor. For metric='precomputed' the shape should be
    (n_queries, n_indexed). Otherwise the shape should be
    (n_queries, n_features).

n_neighbors : int, default=None
    Number of neighbors for each sample. The default is the value passed
    to the constructor.

mode : {'connectivity', 'distance'}, default='connectivity'
    Type of returned matrix: 'connectivity' will return the connectivity
    matrix with ones and zeros, in 'distance' the edges are distances
    between points, type of distance depends on the selected metric
    parameter in NearestNeighbors class.
Returns

A : sparse-matrix of shape (n_queries, n_samples_fit)
    n_samples_fit is the number of samples in the fitted data. A[i, j]
    gives the weight of the edge connecting i to j. The matrix is of CSR
    format.

See also

NearestNeighbors.radius_neighbors_graph
    Compute the (weighted) graph of Neighbors for points in X.

Examples

>>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=2)
>>> neigh.fit(X)
NearestNeighbors(n_neighbors=2)
>>> A = neigh.kneighbors_graph(X)
>>> A.toarray()
array([[1., 0., 1.],
       [0., 1., 1.],
       [1., 0., 1.]])

predict(X)[source]#

Predict the class labels for the provided data.

Parameters

X : {array-like, sparse matrix} of shape (n_queries, n_features), or (n_queries, n_indexed) if metric == 'precomputed'
    Test samples.

Returns

y : ndarray of shape (n_queries,) or (n_queries, n_outputs)
    Class labels for each data sample.

predict_proba(X)[source]#

Return probability estimates for the test data X.

Parameters

X : {array-like, sparse matrix} of shape (n_queries, n_features), or (n_queries, n_indexed) if metric == 'precomputed'
    Test samples.

Returns

p : ndarray of shape (n_queries, n_classes), or a list of n_outputs of such arrays if n_outputs > 1
    The class probabilities of the input samples. Classes are ordered by
    lexicographic order.

score(X, y, sample_weight=None)[source]#

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy which is a
harsh metric since you require for each sample that each label set be
correctly predicted.

Parameters

X : array-like of shape (n_samples, n_features)
    Test samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs)
    True labels for X.

sample_weight : array-like of shape (n_samples,), default=None
    Sample weights.

Returns

score : float
    Mean accuracy of self.predict(X) w.r.t. y.

set_params(**params)[source]#

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such
as Pipeline). The latter have parameters of the form
<component>__<parameter> so that it's possible to update each component of
a nested object.

Parameters

**params : dict
    Estimator parameters.

Returns

self : estimator instance
    Estimator instance.
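Tying several of the pieces documented above together, here is a small self-contained sketch (toy data; the helper name `l1_distance` is ours, not part of scikit-learn) that fits a classifier with a callable metric and then exercises predict, predict_proba, and score:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def l1_distance(u, v):
    # Callable metric: takes two 1D arrays, returns one distance value
    return np.abs(u - v).sum()

X = [[0], [1], [2], [3]]
y = [0, 0, 1, 1]
clf = KNeighborsClassifier(n_neighbors=3, metric=l1_distance).fit(X, y)

pred = clf.predict([[1.1]])              # majority vote of the 3 nearest labels
proba = clf.predict_proba([[0.9]])       # vote fractions, classes in sorted order
acc = clf.score([[0.5], [2.5]], [0, 1])  # mean accuracy of predict() on the pairs
print(pred, proba, acc)
```

For [[0.9]] the three nearest training points carry labels 0, 0, 1, so the vote fractions are 2/3 and 1/3; passing the metric as a callable gives the same result as the built-in 'manhattan' string here, only slower.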
set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') → KNeighborsClassifier[source]#

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True
(see sklearn.set_config). Please see User Guide on how the routing
mechanism works.

The options for each parameter are:

- True: metadata is requested, and passed to score if provided. The
  request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass
  it to score.
- None: metadata is not requested, and the meta-estimator will raise an
  error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given
  alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the
existing request. This allows you to change the request for some
parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator
of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no
effect.

Parameters

sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
    Metadata routing for sample_weight parameter in score.

Returns

self : KNeighborsClassifier
    The updated object.
Gallery examples#

Release Highlights for scikit-learn 0.24
Classifier comparison
Plot the decision boundaries of a VotingClassifier
Caching nearest neighbors
Comparing Nearest Neighbors with and without Neighborhood Components Analysis
Dimensionality Reduction with Neighborhood Components Analysis
Nearest Neighbors Classification
Importance of Feature Scaling
Digits Classification Exercise
Classification of text documents using sparse features
FEM analysis of a highly birefringent modified slotted core circular PCF for endlessly single mode operation across E to L telecom bands

J. Eur. Opt. Society-Rapid Publ., Volume 20, Number 2, 2024, Article Number 35, 11 pages
DOI: https://doi.org/10.1051/jeos/2024036
Published online: 11 October 2024

Research Article

^1 Department of Electrical and Electronic Engineering, Rajshahi University of Engineering and Technology, Rajshahi 6204, Bangladesh
^2 Department of Electrical and Electronic Engineering, World University of Bangladesh, Dhaka 1230, Bangladesh
^3 Department of Computer Science and Engineering, World University of Bangladesh, Dhaka 1230, Bangladesh
^4 Department of Physics, COMSATS University Islamabad, Islamabad, Pakistan
^* Corresponding author: amit.rueten@gmail.com

Received: 30 May 2024
Accepted: 28 August 2024

Abstract

This paper describes an exceptionally high birefringent modified slotted core circular photonic crystal fiber (MSCCPCF). At the 1.55 μm telecommunication wavelength, the proposed fiber structure aims to achieve exceptional birefringence performance through the thoughtful placement of air holes and the incorporation of slots. The optical properties of the proposed MSCCPCF are rigorously simulated using the finite element method (FEM). The FEM simulations show high birefringence of up to 8.795 × 10^−2 at 1.55 μm. The suggested fiber exhibits single mode behavior in the E to L communication bands (V[eff] < 2.405). Numerous geometric factors and their effects on other optical properties, such as birefringence, beat length (17.62 μm) and dispersion coefficient (−310.8 ps/(nm · km)), have been meticulously studied.
The proposed fiber's viability and potential uses are evaluated by analyzing modal features such as nonlinearity (21.76 W^−1 km^−1), confinement loss (5.615 × 10^−11 dB/cm), and dispersion. The proposed fiber structure has potential for use in polarization-maintaining devices, sensors, and other photonic applications requiring high birefringence and tailored optical properties.

Key words: Birefringent / Circular photonic crystal fiber / Finite element method / Polarization-maintaining fiber / Single mode / Slotted core

© The Author(s), published by EDP Sciences, 2024

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

Photonic crystal fibers (PCFs) have gained popularity in recent years due to their capacity to alter light in ways that regular optical fibers cannot [1]. PCFs owe distinct qualities such as large mode area, strong nonlinearity, and customizable dispersion characteristics to the periodic placement of air holes or voids around a solid core [2]. Many forms of PCF, or microstructured optical fiber, have been proposed in the literature, and some have been fabricated and applied in real-life situations. The most common periodic air hole arrangements in cladding designs are hexagonal [3], octagonal [4], decagonal [5], square [6], circular [7], spiral [8], honeycomb [9], and other hybrid patterns [10–12]. These holey fibers can effectively manage dispersion and reduce pulse broadening in communication systems [13]. Highly nonlinear and large-mode-area PCFs are effective in sensing and biosensing applications [14]. The demand for high precision and efficiency in optical technology has driven the development of PCFs with high birefringence.
Birefringence is an important feature of optical fibers that allows independent propagation channels for orthogonal polarization modes, which is critical for applications in polarization-sensitive devices, telecommunications, and sensors [15]. Recent research has focused on producing PCFs with extremely high birefringence. Halder (2016) developed a broadband dispersion compensating PCF with ultra-high birefringence of 3.373 × 10^−2, a strong negative dispersion coefficient, and a large nonlinear coefficient, ideal for high-speed transmission systems. The design includes four elliptical air holes in the core region, but the design procedure remains complex and no fabrication method is discussed. Realizing elliptical air holes at a small pitch value is very difficult, and although the optical properties appear to depend on the ellipticity of the elliptical air holes, this effect is not addressed or analyzed for that PCF design [16]. Islam (2017) developed a hexagonal PCF with strong birefringence (3.34 × 10^−2 at 1550 nm), low confinement loss, and a large negative dispersion coefficient, making it suitable for sensing applications. The design includes asymmetric air holes in the core region; the narrow ellipticity of the elliptical air holes poses a fabrication difficulty, and the results would likely differ in a real-world scenario [17]. Amin (2016) proposed a pure silica index-guiding PCF with 3.474 × 10^−2 birefringence at 1.55 μm, ideal for coherent optical communication and sensing applications. The complexity of that design makes it difficult to fabricate by the stack-and-draw method. Moreover, the last air hole ring and the Perfectly Matched Layer (PML) are kept at a distance in this design, so scattered light reflections and extra mode formation make it unsuitable for long-haul operation [18]. Liu et al.
(2018) designed PCFs with an elliptical tellurite core, achieving unprecedented gains in birefringence (7.57 × 10^−2), nonlinearity (188.39 W^−1 km^−1), and negligible confinement loss (10^−9 dB/m) at 1.55 μm wavelength. Tellurite glass is infiltrated into an elliptical air hole to achieve high birefringence and nonlinearity, but the absorption loss in the telecommunication band is not calculated. Absorption loss and the temperature coefficient should be taken into account to realize the proposed fiber in practice; otherwise the outcomes may vary [19]. Chaudhary (2019) constructed a hybrid PCF with a birefringence of 12.046 × 10^−3 at 1550 nm. It has two zero-dispersion wavelengths and a strong nonlinear coefficient, making it ideal for use in optical systems. However, the structure is complex and aperiodic in nature, and the loss profile, which is crucial for telecommunication data transfer, is not measured for this design [20]. Halder (2020) developed a hybrid dispersion compensating fiber with high birefringence of 3.76 × 10^−2 at 1550 nm wavelength, low confinement loss, and a relative dispersion slope similar to single-mode fiber, efficiently addressing dispersion across the S, C, and L telecommunication bands. The design of this hybrid model includes very small air holes around the central core, which seem very difficult to fabricate, and the higher-order mode extinction ratio should be analyzed to verify single-mode operation [21]. Liang et al. (2020) demonstrated a unique PCF with a sandwich structure, displaying ultrahigh birefringence of 3.85 × 10^−2 and considerable negative dispersion at 1550 nm wavelength, suggesting possible applications in long-distance optical communication and polarization-preserving fibers. The sandwich structure is tightly squeezed, so it is difficult to realize with the stack-and-draw method, and confinement loss needs to be addressed in such structures [22]. Benlacheheb et al.
(2021) developed a polymer PCF with a triangular shape and circular air holes, resulting in a high birefringence of 4.9 × 10^−2 at 1550 nm. Polymer PCF is more suitable for the terahertz band than for the visible-light band due to its high absorption loss in the communication bands; other loss profiles should be taken into account when analyzing these polymer materials in telecommunication bands [23]. Wang (2021) introduced a new tellurite glass PCF design with a near-elliptic core and six small air holes. The fiber achieves high birefringence (5.05 × 10^−2) and a nonlinear coefficient of up to 1896 W^−1 km^−1 at 1.55 μm wavelength, with zero-dispersion points around the same wavelength. The design is complex to realize and fabricate, and the loss profile should be measured [24]. Du et al. (2022) designed a tellurite glass-based PCF with a substantial birefringence of 3.79 × 10^−2 and a nonlinear coefficient of 1672.36 W^−1 km^−1 at 1.55 μm wavelength, with potential applications in many domains. The absorption coefficient and temperature coefficient should be considered when fabricating this PCF structure. Fused silica is the most widely used background material for PCFs in the telecommunication bands; the rationale for using tellurite glass in these bands is not clear in that literature [25]. Priyadharshini et al. (2023) introduced a cross-core octagonal-shaped fiber achieving a birefringence of 1.5 × 10^−3 and a beat length of 1.04 × 10^−3 meters. Confinement loss is not measured for this cross-core octagonal-shaped PCF, and chromatic dispersion analysis is needed for long-haul optical fiber communication [26]. Halder et al. (2023) presented a defected-core hybrid (hexagonal–decagonal) cladding PCF structure which showed a birefringence of 2.372 × 10^−2 and a large negative dispersion of −3534 ps/(nm · km).
The authors use a single pitch to design the whole hybrid PCF cladding structure, which makes it difficult to adjust the core and cladding air hole rings: if the air-filled fraction is changed, the hexagonal and decagonal air holes may sometimes overlap. Adjustability is therefore a limitation of this proposed hybrid PCF structure [27]. Liu et al. (2023) proposed a simple central trielliptic core inside a hexagonal microstructured holey fiber to achieve a high birefringence of 3.56 × 10^−2. Effective area values should be included to verify the nonlinear coefficients of this fiber, and the design carries more complexity than necessary to achieve birefringence on the order of 10^−2, which can be attained with a less complex structure [28]. Recently, Agbemabiese and Akowuah (2024) reported four different types of hexagonal holey fiber structures with different combinations of air holes, attaining a birefringence of 3.5 × 10^−3 and a nonlinearity of 15.64 W^−1 km^−1. Several PCF structures were analyzed in their study; the structures are aperiodic in nature yet achieve lower birefringence and nonlinearity than both our previous studies and the current study. Air hole diameters should show some periodicity in such PCF designs: in their air hole rings some holes are bigger and some smaller, but not in a periodic manner, and the authors neither clearly explain the design methodology nor state its advantages. Clarity of the design methodology should be a focus in these PCF designs [29]. These studies demonstrate the promise of PCFs with extraordinarily high birefringence in a wide range of applications, but shortfalls related to confinement loss, propagation loss, absorption loss, material selection, design complexity, and fabrication methodology have been found in them.
In response to the stringent requirements of these applications, this work presents a unique design for a PCF with ultra-high birefringence: a modified slotted core circular photonic crystal fiber (MSCCPCF). The strategic incorporation of slots within the core region, as well as changes to the crystal fiber's inner rings, makes this fiber design unique. This configuration has been meticulously designed to significantly increase the fiber's intrinsic birefringence. Using the finite element method (FEM) for simulation, the design achieved a birefringence level of up to 8.795 × 10^−2 at 1.55 μm, which is relevant for telecom applications. The design also maintains a low confinement loss (5.615 × 10^−11 dB/cm at 1.55 μm) and demonstrates continuous single mode operation by keeping the normalized frequency V[eff] below 2.405 across the telecommunication window, i.e. the E to L telecom bands. The design goal of this study is to develop a highly birefringent PCF that maintains single-mode operation across the E to L telecom bands while minimizing losses and nonlinear effects. This paper addresses these challenges by introducing a modified slotted core circular PCF structure that strategically optimizes the placement of air holes and slots, achieving exceptionally high birefringence, low confinement loss, and continuous single-mode operation.

2 Design procedure of proposed MSCCPCF

The design procedure begins with the determination of the important geometric parameters that define the construction of the PCF: the pitch (Λ) and the air hole diameters (d[1], d[2], d[3], d[4], d[5], d[6], and d[7]), which are normalized to the pitch. Geometrically modified circular air hole rings are constructed in seven distinct circular layers; successive layers are spaced from the center point at intervals equal to the pitch (Λ). A circular arrangement is formed with a different diameter of air holes in each layer.
A different angular spacing is introduced in each layer to form this arrangement. The rectangular slots are placed horizontally in the core region, separated by a distance equal to the pitch. The seven air hole layers are constructed in this manner to confine the light within the core region and to lessen the propagation loss during transmission. The structure also exhibits a negative dispersion coefficient due to this layer arrangement, while the rectangular slots are introduced to raise the birefringence. Figure 1 depicts the cross sectional view of the MSCCPCF, with a comprehensive outline of the core and cladding. In the proposed MSCCPCF, the pitch Λ is set to 0.9 μm and the relative diameters are determined as follows: d[1]/Λ = d[2]/Λ = d[4]/Λ = 0.45, d[3]/Λ = d[6]/Λ = 0.7, d[5]/Λ = 0.6, and d[7]/Λ = 0.85. Two rectangular slots are placed into the core portion of the MSCCPCF to increase birefringence and adjust the fiber's optical properties. The dimensions of these slots are carefully designed to maximize the influence on birefringence and other optical characteristics: the rectangular slot's width (a) is half the pitch, Λ/2, while its height (b) is Λ/2√3. The foundation material of this holey fiber is fused silica, and the air cores are geometrically drilled. To address reflection and backscattering issues, a circular PML with a thickness of 10% of the cladding radius was applied along the structure's perimeter; the PML layer effectively absorbs undesirable reflections. The PML thickness around the proposed design was set to 1 μm with a scaling factor of 1.

Figure 1: Cross sectional view of proposed MSCCPCF, where Λ = 0.9 μm, d[1]/Λ = d[2]/Λ = d[4]/Λ = 0.45, d[3]/Λ = d[6]/Λ = 0.7, d[5]/Λ = 0.6 and d[7]/Λ = 0.85. Rectangular slot dimensions: a = Λ/2 and b = Λ/2√3.

Figure 2 demonstrates the angular arrangement of the seven air hole rings.
The distance between the two rectangular slots is kept equal to the pitch (Λ) of the proposed MSCCPCF. This modified circular arrangement is made with different angular orientations: the internal angle between two adjacent air holes in the 4th to 7th outer cladding rings is set to 7.5°, the adjacent air hole angle is set to 15° for the 2nd and 3rd cladding rings, and a 30° angular distance is kept within the first air hole ring.

Figure 2: Quarter transverse cross-sectional view of air hole arrangement technique.

3 Computational methodology

The full-vector FEM is a powerful numerical method for analyzing the physical properties of a PCF; anisotropic PML boundary conditions are chosen independently for each direction. The fundamental equation for the FEM can be expressed using Maxwell's equations as [30]:

$$\nabla \times \left( \frac{\nabla \times \vec{H}}{\varepsilon_r} \right) = \frac{\omega^2 \mu_r}{c^2}\, \vec{H}, \tag{1}$$

where $\vec{H}$ represents the intensity of the magnetic field, ε[r] and μ[r] represent the relative dielectric permittivity and magnetic permeability, respectively, ω represents the angular frequency of the light wave, and c is the velocity of light in vacuum. The derivation yields the propagation constant β and the complex effective index n[eff]. The properties of the PCF can be calculated using the equations below. The refractive index $n_{\mathrm{SiO_2}}(\lambda)$ of the substrate material, silica, was considered wavelength-dependent and computed using the Sellmeier equation, which takes the form [31]:

$$n_{\mathrm{SiO_2}}^{2}(\lambda) = 1 + \frac{0.6961\,\lambda^2}{\lambda^2 - 0.004679} + \frac{0.4079\,\lambda^2}{\lambda^2 - 0.01351} + \frac{0.8974\,\lambda^2}{\lambda^2 - 97.9340}. \tag{2}$$

With the propagation constant β determined, the effective refractive index n[eff] was subsequently computed using the following formula [32]:

$$n_{\mathrm{eff}} = \frac{\beta(\lambda, n(\lambda))}{k_0}, \tag{3}$$

where β denotes the propagation constant of the wave and k[0] denotes the wave number in free space at wavelength λ.
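As a quick numeric sanity check of the Sellmeier fit in equation (2) (a standalone sketch, separate from the paper's COMSOL workflow; the function name is ours), evaluating it at the 1.55 μm operating wavelength recovers the familiar fused-silica index near 1.444:

```python
import math

def n_silica(wl_um):
    """Fused-silica refractive index from the Sellmeier fit in Eq. (2).

    wl_um: free-space wavelength in micrometres.
    """
    l2 = wl_um ** 2
    n_squared = (1
                 + 0.6961 * l2 / (l2 - 0.004679)
                 + 0.4079 * l2 / (l2 - 0.01351)
                 + 0.8974 * l2 / (l2 - 97.9340))
    return math.sqrt(n_squared)

print(n_silica(1.55))  # ~1.444 at the telecom wavelength
```

This is the background index the mode solver uses; the modal indices n[eff] reported in Section 4 are lower because part of the field resides in the air holes.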
The second order dispersion (β[2]) and confinement loss (α[CL]) can be calculated using the following formulas [33, 34]:

$$\beta_2(\lambda) = -\frac{\lambda}{c}\left(\frac{\partial^2 \mathrm{Re}[n_{\mathrm{eff}}]}{\partial \lambda^2}\right)\ \mathrm{ps/(nm \cdot km)}, \tag{4}$$

$$\alpha_{CL} = 8.686 \times \frac{2\pi}{\lambda} \times \mathrm{Im}[n_{\mathrm{eff}}] \times 10^{-2}\ \mathrm{dB/cm}, \tag{5}$$

where c is the velocity of light measured in meters per second, and Re[n[eff]] and Im[n[eff]] denote the real and imaginary parts of the effective refractive index, respectively. The formula for calculating the modal birefringence of an optical fiber is as follows [35]:

$$B = \left| n_{\mathrm{eff}}^{x} - n_{\mathrm{eff}}^{y} \right|, \tag{6}$$

where $n_{\mathrm{eff}}^{x}$ and $n_{\mathrm{eff}}^{y}$ are the effective refractive indices of the x- and y-polarization modes, which correspond to the slow and fast axes in polarization-maintaining fibers. The effective mode area A[eff] is the region occupied by the fundamental mode [36]:

$$A_{\mathrm{eff}} = \frac{\left[ \iint \left| \vec{E}(x,y) \right|^2 \mathrm{d}x\, \mathrm{d}y \right]^2}{\iint \left| \vec{E}(x,y) \right|^4 \mathrm{d}x\, \mathrm{d}y}\ \mu \mathrm{m}^2. \tag{7}$$

Here, A[eff] is measured in square micrometers (μm^2) and depends on the strength of the electric field $\vec{E}(x,y)$ in the medium. The effective mode area is directly related to the nonlinear coefficient, which can be computed as [37]:

$$\gamma = \frac{2\pi n_2}{\lambda A_{\mathrm{eff}}} \times 10^{3}\ \mathrm{W^{-1}\, km^{-1}}, \tag{8}$$

where λ is the operating wavelength and n[2] = 31 × 10^−21 m^2/W (for fused silica) is the nonlinear refractive index coefficient. The numerical aperture (N[A]) can be expressed mathematically as [38]:

$$N_A = \frac{1}{\sqrt{1 + \dfrac{\pi A_{\mathrm{eff}}}{\lambda^2}}}. \tag{9}$$

The following formula is used to find the V[eff] parameter for the proposed PCF [39]:

$$V_{\mathrm{eff}} = \frac{2\pi R\, N_A}{\lambda}, \tag{10}$$

where N[A] is the numerical aperture and R is the radius of the core of the PCF. The beat length in a Polarization-Maintaining Fiber (PMF) is an important parameter because it represents the distance over which the polarization state of light propagating through the fiber completes one full cycle.
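Equation (9) can be checked directly against the results reported later in Section 4: plugging in the effective area quoted there (A[eff] = 5.085 μm² at 1.55 μm) reproduces the stated numerical aperture of 0.3616. This is a verification sketch with our own variable names, not part of the paper's workflow:

```python
import math

wavelength = 1.55   # operating wavelength, μm
a_eff = 5.085       # effective mode area at 1.55 μm, μm^2 (Section 4.4)

# Eq. (9): numerical aperture from the effective mode area
numerical_aperture = 1.0 / math.sqrt(1.0 + math.pi * a_eff / wavelength**2)

print(round(numerical_aperture, 4))  # ≈ 0.3616, matching Section 4.5
```

Because both quantities are expressed in micrometres, the ratio πA[eff]/λ² is dimensionless and no unit conversion is needed.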
In other words, it is the distance over which the polarization of light in the fiber repeats itself. The formula for the beat length (L[b]) in a birefringent optical fiber is given by [40]:

$$L_b = \frac{\lambda}{\left| n_{\mathrm{eff}}^{x} - n_{\mathrm{eff}}^{y} \right|} = \frac{\lambda}{\Delta n_{\mathrm{eff}}}, \tag{11}$$

where L[b] is the beat length, λ is the wavelength of light in the fiber, and Δn[eff] is the difference in effective refractive indices between the two principal polarization modes of the fiber.

4 FEM outcome of optical properties

In this section, the numerical results of the optical properties obtained through finite element method analysis are demonstrated and discussed. The results were obtained by constructing the structure in the COMSOL Multiphysics simulation software. The optical properties, such as dispersion, birefringence, nonlinear coefficient, effective area, numerical aperture, V-number, confinement loss, and beat length, have been calculated across the E to L communication bands. During PCF fabrication, global diameters may vary by ±1% [41]. We analyzed the impact of varying parameters by ±1% on dispersion, birefringence, and confinement loss, as discussed in the sections below. Every air hole diameter and slot dimension is defined as a function of the pitch: each air-filled fraction (d/Λ) and each slot dimension in the design procedure is related to the pitch value, so a pitch change changes the air hole diameters and the rectangular slot dimensions. Since slight variations in the fiber parameters can occur during preform formation, this effect is analyzed by varying the pitch value by ±1%. The results versus wavelength were plotted using MATLAB software.

4.1 Effective refractive index

The effective refractive index of a PCF is a crucial parameter that determines its guiding properties. The effective refractive index was analyzed for both the x- and y-polarization fundamental modes by mode analysis.
Figure 3 shows the wavelength-dependent effective refractive index (n[eff]). The n[eff] value ranges from 1.3200888801638233 to 1.2939963177717213 (for x-polarization) and from 1.2546651162545484 to 1.1829793668811053 (for y-polarization) across the E to L communication bands. These values are used in finding other optical properties, such as dispersion and confinement loss.

Figure 3: Wavelength vs. effective refractive index curve.

4.2 Dispersion characteristics

In Figure 4, the wavelength-versus-dispersion plot illustrates the polarization-dependent dispersion characteristics of the proposed MSCCPCF. At 1550 nm wavelength, the x-polarization exhibits a dispersion coefficient of −310.8 ps/(nm · km), while the y-polarization shows −77.96 ps/(nm · km), indicating pronounced birefringence and offering insights into the fiber's polarization-sensitive behavior.

Figure 4: Wavelength-dependent dispersion characteristics of the proposed MSCCPCF.

At the 1550 nm operating wavelength, the dispersion of −295.2 ps/(nm · km) resulting from a +1% pitch variation and −326.3 ps/(nm · km) from a −1% pitch change for x-polarization underscores the significant impact of even slight modifications of the MSCCPCF fiber's pitch parameter, as depicted in Figure 5a.

Figure 5: Dispersion changes in response to a ±1% variation in the pitch parameter of the MSCCPCF for (a) x-polarization and (b) y-polarization.

Similarly, for y-polarization, after a ±1% variation of the pitch value the dispersion coefficient varies from −74.06 to −81.86 ps/(nm · km) at the 1.55 μm operating wavelength, as shown in Figure 5b. Figure 6 presents the wavelength-dependent dispersion curve with a ±1% variation in the air hole diameters (d[1] to d[7]). In Figure 6a, the dispersion coefficient for the x-polarization ranges from −270.4 to −351.2 ps/(nm · km) at a wavelength of 1.55 μm. In Figure 6b, the dispersion coefficient for the y-polarization varies from −67.82 to −88.09 ps/(nm · km) at the same wavelength.
This sensitivity emphasizes the necessity of precise control over structural parameters to tailor the dispersion characteristics for specific applications.

Figure 6: Dispersion changes in response to a ±1% variation in the d[1] to d[7] diameters of the MSCCPCF for (a) x-polarization and (b) y-polarization.

4.3 Birefringence

Figure 7 illustrates the wavelength-dependent birefringence plot for the optimized parameters of the MSCCPCF, showcasing its crucial role as a polarization-maintaining fiber. At 1550 nm wavelength, the birefringence value peaks at 8.795 × 10^−2, signifying its remarkable magnitude. Additionally, the figure demonstrates a notable trend of increasing birefringence as the wavelength ascends within the E to L communication bands.

Figure 7: Wavelength-dependent birefringence plot of the optimized MSCCPCF parameters.

Figure 8a illustrates how the birefringence of the proposed MSCCPCF changes with a ±1% variation in the pitch value. This slight pitch adjustment results in birefringence values ranging from 8.663 × 10^−2 (for +1% of pitch) to 8.927 × 10^−2 (for −1% of pitch) at 1550 nm wavelength.

Figure 8: Variation of birefringence with (a) ±1% pitch adjustment and (b) ±1% variation of d[1] to d[7] in the proposed MSCCPCF.

Figure 9: Depiction of the wavelength-dependent relationship between effective area and nonlinear coefficient in the proposed MSCCPCF structure.

Figure 8b illustrates the impact of a ±1% variation in d[1] to d[7] on birefringence. At a wavelength of 1550 nm, the birefringence shifts from 8.681 × 10^−2 to 8.909 × 10^−2 as the air hole diameters vary by +1% and −1%. The birefringence analysis underscores the pivotal role of pitch parameter control in modulating the birefringence of the MSCCPCF, revealing its potential for tailored polarization-maintaining applications.

4.4 Effective area and nonlinear coefficient

Figure 9 presents a single plot that shows the relationship between wavelength, effective area, and nonlinear coefficient.
At 1550 nm wavelength, the effective area and nonlinear coefficient show an inverse correlation, with values of 5.085 μm^2 and 21.76 W^−1 km^−1, respectively.

Figure 10: Plot demonstrating the wavelength-dependent numerical aperture of the proposed MSCCPCF.

The nonlinear coefficient and effective area values demonstrate that the proposed MSCCPCF structure has the potential to facilitate efficient nonlinear optical processes. This combination strikes a balance between strong nonlinear effects and a relatively large effective area, which is useful for nonlinear frequency conversion and supercontinuum generation.

4.5 Numerical aperture

At a wavelength of 1550 nm, Figure 10 shows a numerical aperture of 0.3616 for the proposed MSCCPCF under optimal parameters. This highlights the fiber's ability to capture and transmit light efficiently within its core, indicating favorable conditions for optical signal propagation and coupling efficiency in photonic applications.

Figure 11: Plot depicting the relationship between wavelength and confinement loss in the proposed MSCCPCF.

4.6 Confinement loss

Figure 11 displays the relationship between wavelength and confinement loss, showing a value of 5.615 × 10^−11 dB/cm at 1.55 μm wavelength.

Figure 12: Plots illustrating the relationship between wavelength and confinement loss, demonstrating the impact of a ±1% variation in (a) pitch value and (b) air hole diameters d[1] to d[7] on the proposed MSCCPCF.

Figure 12a depicts the plot of wavelength versus confinement loss with a ±1% variation of the pitch value. For a −1% pitch change, the confinement loss increases to 6.458 × 10^−11 dB/cm, whereas for a +1% pitch change, it decreases to 4.773 × 10^−11 dB/cm at 1550 nm wavelength. Figure 12b displays the effect of a ±1% variation in the air hole diameters (d[1] to d[7]) on confinement loss.
At a wavelength of 1550 nm, the confinement loss ranges from 3.931 × 10^−11 to 7.3 × 10^−11 dB/cm when all the air hole diameters are adjusted by +1% and −1%, respectively. Figure 13 Plot illustrating the wavelength-dependent variation of the V-number for the proposed MSCCPCF. The finding emphasizes the fiber’s ability to effectively confine and transmit light within its core, demonstrating its suitability for high-performance optical communication systems and other photonic applications. 4.7 V-number From the wavelength-dependent V-number plot in Figure 13, the V-number is determined to be 1.319 at 1550 nm wavelength for optimal parameters of the proposed fiber. Figure 14 Wavelength-dependent beat length plot of the proposed MSCCPCF for optimum parameters. The V-number remains below 2.405 across the E to L communication bands, confirming the fiber’s single-mode behavior; in effect, the fiber is endlessly single-mode over these bands. 4.8 Beat length Figure 14 illustrates the relationship between wavelength and beat length. At an operating wavelength of 1550 nm, the beat length of this fiber is measured to be 17.62 μm. It represents the periodicity of polarization mode coupling within the proposed fiber structure. Figure 15 Electromagnetic field distribution in the proposed MSCCPCF for (a) x-polarization, $LP_{01}^{x}$ and (b) y-polarization, $LP_{01}^{y}$ mode. This finding is significant because it sheds light on the fiber’s polarization-maintaining properties and its suitability for applications requiring precise control over polarization states, such as fiber optic sensing and telecommunications systems. 4.9 Electromagnetic field distribution The compactness of the fundamental mode LP[01] is also investigated. In Figure 15 the electromagnetic field distribution is shown for the $LP_{01}^{x}$ and $LP_{01}^{y}$ fundamental modes.
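Several of the modal quantities reported above are linked by standard fiber-optic relations, so their internal consistency can be spot-checked numerically. The short sketch below is not part of the paper: the beat-length relation L_B = λ/B and the nonlinear-coefficient relation γ = 2πn[2]/(λA[eff]) are textbook definitions, and the silica nonlinear index n[2] is an assumed typical value, not a number from this work.

```python
import math

# Values reported for the MSCCPCF at 1550 nm
wavelength_m = 1.55e-6     # operating wavelength
birefringence = 8.795e-2   # B = |n_x - n_y|
a_eff_m2 = 5.085e-12       # effective area (5.085 um^2)
v_number = 1.319           # normalized frequency

# Beat length: L_B = lambda / B
beat_length_um = wavelength_m / birefringence * 1e6
print(f"beat length = {beat_length_um:.2f} um")  # 17.62 um, matching Figure 14

# Nonlinear coefficient: gamma = 2*pi*n2 / (lambda * A_eff)
n2_m2_per_w = 2.7e-20  # assumed nonlinear index of fused silica, m^2/W
gamma_w_km = 2 * math.pi * n2_m2_per_w / (wavelength_m * a_eff_m2) * 1e3
print(f"gamma = {gamma_w_km:.1f} W^-1 km^-1")  # ~21.5, close to the reported 21.76

# Single-mode criterion: V < 2.405
print("single-mode" if v_number < 2.405 else "multimode")
```

With these numbers the recomputed beat length reproduces the reported 17.62 μm exactly, and the nonlinear coefficient lands within a few percent of the reported 21.76 W^−1 km^−1, suggesting the quoted quantities are mutually consistent.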
4.10 Comparison with other PCF structures Table 1 compares the proposed PCF structure with several recent studies. Table 1 Comparative assessment of optical characteristics at λ = 1.55 μm for proposed and prior PCFs. Table 1 reveals that the proposed MSCCPCF achieves higher birefringence in a single silica background material. The effective area is larger than in previous PCFs, allowing for larger mode area applications. Finally, because the proposed PCF has a shorter beat length than previous PCFs, it is a good candidate for maintaining polarization in an optical fiber. 5 Fabrication challenge and technique Amouzad Mahdiraji et al. (2014) discuss the stack-and-draw technique, presenting improved methods for preform preparation and fiber fabrication [42]. Yajima et al. (2013) introduce a slurry casting method for silica PCF preform production, which, when combined with an OH^− reduction process, results in a low-loss PCF [43]. Kim et al. (2003) describe the fabrication process of a PCF, including stacking, jacketing, collapsing, and drawing, and present the measured optical properties of the fiber [44]. These studies collectively highlight the diverse approaches to fabricating PCFs, each with its advantages and potential applications. Circular-hole structures are typically crafted using drilling or sol-gel techniques, while the stack-and-draw method is also preferred for circular patterns [45, 46]. Asymmetrical holes such as elliptical or rectangular ones are best created through extrusion or 3D printing [47]. The suggested model, incorporating both circular and rectangular air holes, can be manufactured by employing drilling, sol-gel, or stack-and-draw processes for the circular holes, and extrusion, along with 3D printing, for the rectangular air holes.
The sol-gel fabrication method can be used to fabricate the whole design except the rectangular slots, which can later be made in the preform by drilling. The proposed MSCCPCF can therefore be fabricated by combining the sol-gel and drilling methods. 6 Conclusion The MSCCPCF design has excellent optical properties, including high birefringence at telecommunications wavelengths around 1.55 μm. FEM simulations show significant birefringence of up to 8.795 × 10^−2 at 1.55 μm, with single-mode behavior across the E to L communication bands (V[eff] < 2.405). This study emphasizes the versatility and potential applications of the proposed fiber structure by conducting a systematic investigation into the impact of various geometric factors on birefringence and other optical properties, alongside comparisons with previous results and analyses. The MSCCPCF is suitable for polarization-maintaining devices, sensors, and other photonic applications that require tailored optical properties. Modal features such as nonlinearity (21.76 W^−1 km^−1), confinement loss (5.615 × 10^−11 dB/cm), and dispersion characteristics have been thoroughly examined, highlighting its feasibility. These findings help to advance our understanding and practical application of high-performance PCFs in a variety of optical systems. Funding There is no internal or external funding given for this research. Conflicts of interest The authors of this manuscript declare no conflicts of interest. Data availability statement The data that support the findings of this article are not publicly available. They can be requested from the author using the email address [amit.rueten@gmail.com]. Author contribution statement Amit Halder wrote the article with contributions from Yeasin Arafat, who helped in the manuscript writing. Amit Halder, Muhammad Ahsan and Imtiage Ahmed carried out the experiments and testing using COMSOL Multiphysics software. Amit Halder developed the simulation software, using conceptualization by Md.
Shamim Anower, and implemented the data analysis algorithm. The conceptualization, funding acquisition, and project administration were done by Yeasin Arafat and Md. Riyad Tanshen. Imtiage Ahmed, Muhammad Ahsan, Zubairia Siddiquee, and Md. Riyad Tanshen contributed to data collection, resources, and software validation. 1. De M., Gangopadhyay T.K., Singh V.K. (2019) Prospects of photonic crystal fiber as physical sensor: an overview, Sensors 19, 3, 464. [NASA ADS] [CrossRef] [Google Scholar] 2. Benabid F., Roberts P.J. (2011) Linear and nonlinear optical properties of hollow core photonic crystal fiber, J. Mod. Opt. 58, 2, 87–124. [NASA ADS] [CrossRef] [Google Scholar] 3. Monfared Y.E., Javan A.M., Kashani A.M. (2013) Confinement loss in hexagonal lattice photonic crystal fibers, Optik 124, 24, 7049–7052. [NASA ADS] [CrossRef] [Google Scholar] 4. Hossain M.S., Sen S., Hossain M.M. (2021) Performance analysis of octagonal photonic crystal fiber (O-PCF) for various communication applications, Phys. Scr. 96, 5, 055506. [NASA ADS] [CrossRef] [Google Scholar] 5. Kumar A., Verma P., Jindal P. (2021) Decagonal solid core PCF based refractive index sensor for blood cells detection in terahertz regime, Opt. Quantum Electron. 53, 1–13. [NASA ADS] [CrossRef] [Google Scholar] 6. Olyaee S., Taghipour F. (2011) Design of new square-lattice photonic crystal fibers for optical communication applications, Int. J. Physical Sci. 6, 18, 4405–4411. [Google Scholar] 7. Maji P.S., Chaudhuri P.R. (2016) Studies of the modal properties of circularly photonic crystal fiber (C-PCF) for high power applications, Photon. Nanostruct. Fundam. Appl. 19, 12–23. [NASA ADS] [CrossRef] [Google Scholar] 8. Liao J., Huang T., Xiong Z., Kuang F., Xie Y. (2017) Design and analysis of an ultrahigh birefringent nonlinear spiral photonic crystal fiber with large negative flattened dispersion, Optik 135, 42–49. [NASA ADS] [CrossRef] [Google Scholar] 9. 
Halder A., Tanshen M.R., Hossain M.A., Akter M.S., Sikdar M.A. (2024) Tailored dispersion and nonlinear effects in flint glass honeycomb PCF for optical communication, J. Opt. Photon. Res. 1, 1, 43–49. [Google Scholar] 10. Halder A., Anower M.S. (2019) Relative dispersion slope matched highly birefringent and highly nonlinear dispersion compensating hybrid photonic crystal fiber, Photon. Nanostruct. Fundament. Appl. 35, 100704. [NASA ADS] [CrossRef] [Google Scholar] 11. Halder A. (2020) Slope matched highly birefringent hybrid dispersion compensating fiber over telecommunication bands with low confinement loss, J. Opt. 49, 2, 187–195. [NASA ADS] [CrossRef] [Google Scholar] 12. Halder A. (2023) Design of a slope matched single mode highly birefringent dispersion compensating hybrid photonic crystal fiber, GRIN Verlag. [Google Scholar] 13. Kumar G., Gupta R.P. (2013) Dispersion modeling of micro structure optical fibers for telecommunication deployment, Sci. Technol. Manage. 17, 26, 10–18. [Google Scholar] 14. Singh S., Chaudhary B., Upadhyay A., Sharma D., Ayyanar N., Taya S.A. (2023) A review on various sensing prospects of SPR based photonic crystal fibers, Photon. Nanostruct.-Fundament. Appl. 54, 101119. [NASA ADS] [CrossRef] [Google Scholar] 15. Lu S., Li W., Guo H., Lu M. (2011) Analysis of birefringent and dispersive properties of photonic crystal fibers, Appl. Opt. 50, 30, 5798–5802. [NASA ADS] [CrossRef] [Google Scholar] 16. Halder A. (2016) Highly birefringent photonic crystal fiber for dispersion compensation over E+ S+ C+ L communication bands, in: 2016 5th International Conference on Informatics, Electronics and Vision (ICIEV), May, IEEE, pp. 1099–1103. [Google Scholar] 17. Islam S.R., Islam M.M., Rahman M.N.A., Mia M.M.A., Hakim M.S., Biswas S.K. (2017) Design of hexagonal photonic crystal fiber with ultra-high birefringent and large negative dispersion coefficient for the application of broadband fiber, Int. J. Eng. Sci. Tech. 2, 1, 9–16. 
[Google Scholar] 18. Amin M.N., Faisal M., Rahman M.M. (2016, November) Ultrahigh birefringent index guiding photonic crystal fibers, in: 2016 IEEE Region 10 Conference (TENCON), IEEE, pp. 2722–2725. [Google Scholar] 19. Liu M., Yuan H., Shum P., Shao C., Han H., Chu L. (2018) Simultaneous achievement of highly birefringent and nonlinear photonic crystal fibers with an elliptical tellurite core, Appl. Opt. 57, 22, 6383–6387. [NASA ADS] [CrossRef] [Google Scholar] 20. Chaudhary V.S., Kumar D., Sharma S. (2020) Design of high birefringence with two zero dispersion wavelength and highly nonlinear hybrid photonic crystal fiber, in: Janyani V., Singh G., Tiwari M., d’Alessandro A. (eds), Optical and wireless technologies. Lecture notes in electrical engineering, vol. 546, Springer, Singapore, pp. 301–306. [CrossRef] [Google Scholar] 21. Halder A. (2020) Slope matched highly birefringent hybrid dispersion compensating fiber over telecommunication bands with low confinement loss, J. Opt. 49, 2, 187–195. [NASA ADS] [CrossRef] [Google Scholar] 22. Liang R., Zhao H., Zhao L., Li X. (2020) Design and analysis of high birefringence photonic crystal fiber with sandwich structure, J. Phys. Conf. Ser. 1650, 2, 022021. [CrossRef] [Google Scholar] 23. Benlacheheb M., Cherbi L., Merabet A.N. (2021) Highly birefringent fiber design based on polymer photonic crystal fiber with ultralow confinement loss for sensing application, Micro-Struct. Special. Opt. Fibres VII, 11773, 148–155. [Google Scholar] 24. Wang J. (2021) Numerical investigation of high birefringence and nonlinearity tellurite glass photonic crystal fiber with microstructured core, Appl. Opt. 60, 15, 4455–4461. [NASA ADS] [CrossRef] [Google Scholar] 25. Du Z., Wei F., He J. (2023) High birefringence and nonlinearity photonic crystal fiber, J. Opt. 52, 2, 665–671. [NASA ADS] [CrossRef] [Google Scholar] 26. Priyadharshini C., Devika R., Selvendran S., Raja A.S.
(2023) Investigating the cross core octagonal photonic crystal fiber with high birefringence: A design and analysis study, Mater. Today: Proc. https://doi.org/10.1016/j.matpr.2023.03.063. [Google Scholar] 27. Amit H., Emon W., Anower MdS, Tanshen MdR, Forkan Md, Shajib MdSU (2023) Design and numerical analysis of ultra-high negative dispersion, highly birefringent nonlinear single mode core-tune photonic crystal fiber (CT-PCF) over communication bands, Opt. Photon. J. 13, 10, 227–242. [NASA ADS] [CrossRef] [Google Scholar] 28. Liu Z., Wen J., Zhou Z., Dong Y., Yang T. (2023) A highly birefringent photonic crystal fiber based on a central trielliptic structure: FEM analysis, Physica Scripta 98, 11, 115607. [NASA ADS] [CrossRef] [Google Scholar] 29. Agbemabiese P.A., Akowuah E.K. (2024) Numerical analysis of photonic crystal fibre with high birefringence and high nonlinearity, J. Opt. Commun. 44, s1, s543–s550. [NASA ADS] [CrossRef] [Google Scholar] 30. Monk P. (2003) Finite element methods for Maxwell’s equations, Oxford University Press. [CrossRef] [Google Scholar] 31. Ghosh G. (1997) Sellmeier coefficients and dispersion of thermo-optic coefficients for some optical glasses, Appl. Opt. 36, 7, 1540–1546. [NASA ADS] [CrossRef] [Google Scholar] 32. Shao-Wen G., Jun-Cheng C., Song-Lin F. (2003) Numerical analysis of multilayer waveguides using effective refractive index method, Commun. Theoret. Phys. 39, 3, 327. [NASA ADS] [CrossRef] [Google Scholar] 33. Ding R., Hou S., Wang D., Lei J., Li X., Ma Y. (2017) Novel design of a diamond-core photonic crystal fiber for terahertz wave transmission, in: 2017 Progress in Electromagnetics Research Symposium-Spring (PIERS), May, IEEE, pp. 1148–1151. [Google Scholar] 34. Wu D., Yu F., Liu Y., Liao M. (2019) Dependence of waveguide properties of anti-resonant hollow-core fiber on refractive index of cladding material, J. Lightwave Technol. 37, 21, 5593–5699. [NASA ADS] [CrossRef] [Google Scholar] 35.
Zairmi Y., Veriyanti V., Candra W., Syahputra R.F., Soerbakti Y., Asyana V., Irawan D., Hairi H., Hussein N.A., Anita S. (2020) Birefringence and polarization mode dispersion phenomena of commercial optical fiber in telecommunication networks, J. Phys. Conf. Ser. 1655, 1, 012160, IOP Publishing. [CrossRef] [Google Scholar] 36. Mortensen N.A. (2002) Effective area of photonic crystal fibers, Opt. Exp. 10, 7, 341–348. [CrossRef] [Google Scholar] 37. Yu Y., Lian Y., Hu Q., Xie L., Ding J., Wang Y., Lu Z. (2022) Design of PCF supporting 86 OAM modes with high mode quality and low nonlinear coefficient, Photonics 9, 4, 266, MDPI. [NASA ADS] [CrossRef] [Google Scholar] 38. Halder A., Tanshen M.R., Akter M.S., Hossain M.A. (2023) Design of highly birefringence and nonlinear Modified Honeycomb Lattice Photonic Crystal Fiber (MHL-PCF) for broadband dispersion compensation in E+ S+ C+ L communication bands, Eng. Proc. 56, 1, 19. [Google Scholar] 39. Halder A., Anower M.S., Emon W., Tanshen M.R., Shajib M.S.U. (2023) Design and finite element analysis of a single-mode modified circular microstructured optical fiber for high negative dispersion and high nonlinearity across E to L communication bands, in: 2023 26th International Conference on Computer and Information Technology (ICCIT), December, IEEE, pp. 1–5. [Google Scholar] 40. Luke S., Sudheer S.K., Pillai V.M. (2015) Modeling and analysis of a highly birefringent chalcogenide photonic crystal fiber, Optik 126, 23, 3529–3532. [NASA ADS] [CrossRef] [Google Scholar] 41. Reeves W.H., Knight J.C., Russell P.S.J., Roberts P.J. (2002) Demonstration of ultra-flattened dispersion in photonic crystal fibers, Opt. Exp. 10, 14, 609–613. [NASA ADS] [CrossRef] [Google Scholar] 42. Amouzad Mahdiraji G., Chow D.M., Sandoghchi S.R., Amirkhan F., Dermosesian E., Yeo K.S., Kakaei Z., Ghomeishi M., Poh S.Y., Yu Gang S., Mahamd Adikan F.R.
(2014) Challenges and solutions in fabrication of silica-based photonic crystal fibers: an experimental study, Fiber Integrated Opt. 33, 1–2, 85–104. [NASA ADS] [CrossRef] [Google Scholar] 43. Yajima T., Yamamoto J., Ishii F., Hirooka T., Yoshida M., Nakazawa M. (2013) Low-loss photonic crystal fiber fabricated by a slurry casting method, Opt. Exp. 21, 25, 30500–30506. [NASA ADS] [CrossRef] [Google Scholar] 44. Kim J.C., Kim H.K., Paek U.C., Lee B.H., Eom J.B. (2003) The fabrication of a photonic crystal fiber and measurement of its properties, J. Opt. Soc. Korea 7, 2, 79–83. [CrossRef] [Google Scholar] 45. Zhang P., Zhang J., Yang P., Dai S., Wang X., Zhang W. (2015) Fabrication of chalcogenide glass photonic crystal fibers with mechanical drilling, Optic. Fiber Technol. 26, 176–179. [NASA ADS] [CrossRef] [Google Scholar] 46. Li W., Zhou Q., Zhang L., Wang S., Wang M., Yu C., Feng S., Chen D., Hu L. (2013) Watt-level Yb-doped silica glass fiber laser with a core made by sol-gel method, Chin. Opt. Lett. 11, 9. [Google Scholar] 47. Pravesh R., Kumar D., Pandey B.P., Chaudhary V.S., Singh D., Kumar S. (2023) Advanced refractive index sensor based on photonic crystal fiber with elliptically split cores, Opt. Quantum Electron. 55, 13, 1205. [NASA ADS] [CrossRef] [Google Scholar] All Tables Table 1 Comparative assessment of optical characteristics at λ = 1.55 μm for proposed and prior PCFs. All Figures Figure 1 Cross sectional view of proposed MSCCPCF. Where, Λ = 0.9 μm, d[1]/Λ = d[2]/Λ = d[4]/Λ = 0.45, d[3]/Λ = d[6]/Λ = 0.7, d[5]/Λ = 0.6 and d[7]/Λ = 0.85. Rectangular slot dimensions: a = Λ/2 and b = Λ/ Figure 2 Quarter transverse cross-sectional view of air hole arrangement technique. Figure 3 Wavelength vs. effective refractive index curve. Figure 4 Wavelength-dependent dispersion characteristics of the proposed MSCCPCF.
Figure 5 Dispersion changes in response to a ± 1% variation in the pitch parameter of the MSCCPCF for (a) x-polarization and (b) y-polarization. Figure 6 Dispersion changes in response to a ± 1% variation in the d[1] to d[7] diameter of the MSCCPCF for (a) x-polarization and (b) y-polarization. Figure 7 Wavelength-dependent birefringence plot of the optimized MSCCPCF parameters. Figure 8 Variation of birefringence with (a) ±1% pitch adjustment and (b) ±1% variation of d[1] to d[7] in the proposed MSCCPCF. Figure 9 Depiction of the wavelength-dependent relationship between effective area and nonlinear coefficient in the proposed MSCCPCF structure. Figure 10 Plot demonstrating the wavelength-dependent numerical aperture of the proposed MSCCPCF. Figure 11 Plot depicting the relationship between wavelength and confinement loss in the proposed MSCCPCF. Figure 12 Plot illustrating the relationship between wavelength and confinement loss, demonstrating the impact of ±1% variation in (a) pitch value and (b) air hole diameters from d[1] to d[7] on the proposed MSCCPCF. Figure 13 Plot illustrating the wavelength-dependent variation of the V-number for the proposed MSCCPCF. Figure 14 Wavelength-dependent beat length plot of proposed MSCCPCF for optimum parameters. Figure 15 Electromagnetic field distribution in the proposed MSCCPCF for (a) x-polarization, $LP_{01}^{x}$ and (b) y-polarization, $LP_{01}^{y}$ mode.
Law, Judith research overview • I modeled a multi-level distribution of three dimensional arrays using a Bayesian approach assuming a separable covariance structure that is the sum of a set of Kronecker products. The goal was to find patterns in the covariance of the survey participants’ responses across quality measures and years to enhance public reporting and quality monitoring.
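As an illustration of the separable covariance structure mentioned above, the toy sketch below (not the author's model or data; the grid dimensions and random factors are invented for illustration) builds a covariance as a sum of Kronecker products, where each term pairs a factor over quality measures with a factor over survey years, and verifies via a Cholesky factorization that the result is a valid positive-definite covariance.

```python
import random

def kron(a, b):
    """Kronecker product of two square matrices given as lists of lists."""
    nb = len(b)
    n = len(a) * nb
    return [[a[i // nb][j // nb] * b[i % nb][j % nb] for j in range(n)]
            for i in range(n)]

def mat_add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def random_spd(n, rng):
    """G G^T + n*I: a random symmetric positive-definite factor."""
    g = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(n)]
    return [[sum(g[i][k] * g[j][k] for k in range(n)) + (n if i == j else 0)
             for j in range(n)] for i in range(n)]

def cholesky(m):
    """Cholesky factor of m; raises ValueError if m is not positive definite."""
    n = len(m)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                d = m[i][i] - s
                if d <= 0:
                    raise ValueError("not positive definite")
                l[i][i] = d ** 0.5
            else:
                l[i][j] = (m[i][j] - s) / l[j][j]
    return l

rng = random.Random(0)
n_measures, n_years = 4, 3  # hypothetical grid: quality measures x survey years

# Covariance = sum_k kron(A_k, B_k); A_k couples measures, B_k couples years.
sigma = mat_add(
    kron(random_spd(n_measures, rng), random_spd(n_years, rng)),
    kron(random_spd(n_measures, rng), random_spd(n_years, rng)),
)

cholesky(sigma)  # succeeds, so sigma is a valid covariance matrix
print(len(sigma))  # 12 rows: one per (measure, year) pair
```

Because a Kronecker product of positive-definite factors is positive definite and a sum of positive-definite matrices stays positive definite, such a sum is always a legitimate covariance over the vectorized measure-by-year grid.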
JEE Advanced Paper-2 Past Exam Paper | ENTRANCE INDIA JEE (Advanced) Examination – 2022 (Held on Sunday 28^th August, 2022) SECTION-1 : (Maximum Marks : 24) • This section contains EIGHT (08) questions. • The answer to each question is a SINGLE DIGIT INTEGER ranging from 0 to 9, BOTH INCLUSIVE. • For each question, enter the correct integer corresponding to the answer using the mouse and the on-screen virtual numeric keypad in the place designated to enter the answer. • Answer to each question will be evaluated according to the following marking scheme: Full marks : +3 If ONLY the correct integer is entered; Zero Marks : 0 If the question is unanswered; Negative Marks : −1 In all other cases. 1. The particle of mass 1 kg is subjected to a force which depends on the position as ^−^2. At time t = 0, the particle’s position [x] and v[y] denote the x and the y components of the particle’s velocity, respectively. Ignore gravity. When z = 0.5 m, the value of (x v[y] – y v[x]) is ______ m^2s^−^1. 2. In a radioactive decay chain reaction, ^− particles emitted in this process is _____. 3. Two resistances R[1] = XΩ and R[2] = 1Ω are connected to a wire AB of uniform resistivity, as shown in the figure. The radius of the wire varies linearly along its axis from 0.2 mm at A to 1 mm at B. A galvanometer (G) connected to the center of the wire, 50 cm from each end along its axis, shows zero deflection when A and B are connected to a battery. The value of X is ______. 4. In a particular system of units, a physical quantity can be expressed in terms of the electric charge e, electron mass m[e], Planck’s constant h, and Coulomb’s constant [0] is the permittivity of vacuum. In terms of these physical constants, the dimension of the magnetic field is [B] = [e]^α[m[e]]^β [h]^γ [k]^δ. The value of α + β + γ + δ is _______. 5. Consider a configuration of n identical units, each consisting of three layers. 
The first layer is a column of air of height h = 1/3 cm, and the second and third layers are of equal thickness 6. A charge q is surrounded by a closed surface consisting of an inverted cone of height h and base radius R, and a hemisphere of radius R as shown in the figure. The electric flux through the conical surface is 7. On a frictionless horizontal plane, a bob of mass m = 0.1 kg is attached to a spring with natural length l[0] = 0.1 m. The spring constant is k[1] = 0.009 Nm^−^1 when the length of the spring l > l[0] and is k[2] = 0.016 Nm^−^1 when l < l[0]. Initially the bob is released from l = 0.15 m. Assume that Hooke’s law remains valid throughout the motion. If the time period of the full oscillation is T = (nπ) s, then the integer closest to n is ______. 8. An object and a concave mirror of focal length f = 10 cm both move along the principal axis of the mirror with constant speeds. The object moves with speed V[0] = 15 cm s^−^1 towards the mirror with respect to a laboratory frame. The distance between the object and the mirror at a given moment is denoted by u. When u = 30 cm, the speed of the mirror V[m] is such that image is instantaneously at rest with respect to the laboratory frame, and the object forms a real image. The magnitude of V[m] is _____ cm s^−^1. SECTION-2: (Maximum Marks : 24) • This section contains SIX (06) questions. • Each question has FOUR options (A), (B), (C) and (D). ONE OR MORE THAN ONE of these four option(s) is (are) correct answer(s). • For each question, choose the option(s) corresponding to (all) the correct answer(s). 
• Answer to each question will be evaluated according to the following marking scheme: Full Marks : +4 ONLY in (all) the correct option(s) is(are) chosen; Partial Marks : +3 If all the four options are correct but ONLY three options are chosen; Partial Marks : +2 If three or more options are correct but ONLY two options are chosen, both of which are correct; Partial Marks : +1 If two or more options are correct but ONLY one option is chosen and it is a correct option; Zero Marks : 0 If unanswered; Negative Marks : −2 In all other cases. 9. In the figure, the inner (shaded) region A represents a sphere of radius r[A] = 1, within which the electrostatic charge density varies with the radial distance r from the center as ρ[A] = kr, where k is positive. In the spherical shell B of outer radius r[B], the electrostatic charge density varies as ρ[B] = 2k/r. Assume that dimensions are taken care of. All physical quantities are in their SI units. Which of the following statement(s) is(are) correct? (A) If (B) If r[B] = 3/2, then the electric potential just outside B is k/∈[0]. (C) If r[B] = 2, then the total charge of the configuration is 15πk. (D) If r[B] = 5/2, then the magnitude of the electric field just outside B is 13πk/∈[0]. 10. In Circuit-1 and Circuit-2 shown in the figures, R[1] = 1 Ω, R[2] = 2 Ω and R[3] = 3 Ω, P[1] and P[2] are the power dissipations in Circuit-1 and Circuit-2 when the switches S[1] and S[2] are in open conditions, respectively. Q[1] and Q[2] are the power dissipations in Circuit-1 and Circuit-2 when the switches S[1] and S[2] are in closed conditions, respectively. Which of the following statement(s) is(are) correct? (A) When a voltage source of 6 V is connected across A and B in both circuits, P[1] < P[2]. (B) When a constant current source of 2 Amp is connected across A and B in both circuits, P[1] > P[2]. (C) When a voltage source of 6 V is connected across A and B in Circuit-1, Q[1] > P[1]. 
(D) When a constant current source of 2 Amp is connected across A and B in both circuits, Q[2] < Q[1] 11. A bubble has surface tension S. The ideal gas inside the bubble has ratio of specific heats γ = 5/3. The bubble is exposed to the atmosphere and it always retains its spherical shape. When the atmospheric pressure is P[a1], the radius of the bubble is found to be r[1] and the temperature of the enclosed gas is T[1]. When the atmospheric pressure is P[a2], the radius of the bubble and the temperature of the enclosed gas are r[2] and T[2], respectively. Which of the following statement(s) is(are) correct? (A) If the surface of the bubble is a perfect heat insulator, then (B) If the surface of the bubble is a perfect heat insulator, then the total internal energy of the bubble including its surface energy does not change with the external atmospheric pressure. (C) If the surface of the bubble is a perfect heat conductor and the change in atmospheric temperature is negligible, then (D) If the surface of the bubble is a perfect heat insulator, then 12. A disk of radius R with uniform positive charge density σ is placed on the xy plane with its center at the origin. The Coulomb potential along the z-axis is A particle of positive charge q is placed initially at rest at a point on the z axis with z = z[0] and z[0] > 0. In addition to the Coulomb force, the particle experiences a vertical force (A) For (B) For (C) For [0]. (D) For β > 1 and z[0] > 0, the particle always reaches the origin. 13. A double slit setup is shown in the figure. One of the slits is in medium 2 of refractive index n[2]. The other slit is at the interface of this medium with another medium 1 of refractive index n [1](≠ n[2]). The line joining the slits is perpendicular to the interface and the distance between the slits is d. The slit widths are much smaller than d. A monochromatic parallel beam of light is incident on the slits from medium 1. 
A detector is placed in medium 2 at a large distance from the slits, and at an angle θ from the line joining them, so that θ equals the angle of refraction of the beam. Consider two approximately parallel rays from the slits received by the detector. Which of the following statement(s) is(are) correct? (A) The phase difference between the two rays is independent of d. (B) The two rays interfere constructively at the detector. (C) The phase difference between the two rays depends on n[1] but is independent of n[2]. (D) The phase difference between the two rays vanishes only for certain values of d and the angle of incidence of the beam, with θ being the corresponding angle of refraction. 14. In the given P-V diagram, a monoatomic gas (γ = 5/3) is first compressed adiabatically from state A to state B. Then it expands isothermally from state B to state C. [Given : (1/3)^6 = 0.5, ln 2 = 0.7]. Which of the following statement(s) is(are) correct? (A) The magnitude of the total work done in the process A → B → C is 144 kJ. (B) The magnitude of the work done in the process B → C is 84 kJ. (C) The magnitude of the work done in the process A → B is 60 kJ. (D) The magnitude of the work done in the process C → A is zero. SECTION-3: (Maximum Marks : 12) • This section contains FOUR (04) questions. • Each question has FOUR options (A), (B), (C) and (D). ONLY ONE of these four options is the correct answer. • For each question, choose the option corresponding to the correct answer. • Answer to each question will be evaluated according to the following marking scheme: Full Marks : +3 If ONLY the correct option is chosen: Zero Marks : 0 If none of the options is chosen (i.e. the question is unanswered); Negative Marks : −1 In all other cases. 15. A flat surface of a thin uniform disk A of radius R is glued to a horizontal table. Another thin uniform disk B of mass M and with the same radius R rolls without slipping on the circumference of A, as shown in the figure. 
A flat surface of B also lies on the plane to the table. The center of mass of B has fixed angular speed ω about the vertical axis passing through the center of A. The angular momentum of B is nMωR^2 with respect to the center of A. Which of the following is the value of n? (A) 2 (B) 5 (C) 7/2 (D) 9/2 16. When light of a given wavelength is incident on a metallic surface, the minimum potential needed to stop the emitted photoelectrons is 6.0 V. This potential drops to 0.6 V if another source with wavelength four times that of the first one and intensity half of the first one is used. What are the wavelength of the first source and the work function of the metal, respectively? (A) 1.72 × 10^−^7 m, 1.20 eV (B) 1.72 × 10^−^7 m, 5.60 eV (C) 3.78 × 10^−^7 m, 5.60 eV (D) 3.78 × 10^−^7 m, 1.20 eV 17. Area of the cross-section of a wire is measured using a screw gauge. The pitch of the main scale is 0.5 mm. The circular scale has 100 divisions and for one full rotation of the circular scale, the main scale shifts by two divisions. The measured readings are listed below. What are the diameter and cross-sectional area of the wire measured using the screw gauge? (A) 2.22 ± 0.02 mm, π(1.23 ± 0.02) mm^2 (B) 2.22 ± 0.01 mm, π(1.23 ± 0.01) mm^2 (C) 2.14 ± 0.02 mm, π(1.14 ± 0.02) mm^2 (D) 2.14 ± 0.01 mm, π(1.23 ± 0.01) mm^2 18. Which one of the following options represents the magnetic field SECTION-1 : (Maximum Marks : 24) • This section contains EIGHT (08) questions. • The answer to each question is a SINGLE DIGIT INTEGER ranging from 0 to 9, BOTH INCLUSIVE. • For each question, enter the correct integer corresponding to the answer using the mouse and the on-screen virtual numeric keypad in the place designated to enter the answer. • Answer to each question will be evaluated according to the following marking scheme: Full marks : +3 If ONLY the correct integer is entered; Zero Marks : 0 If the question is unanswered; Negative Marks : −1 In all other cases. 1. 
Concentration of H[2]SO[4] and Na[2]SO[4] in a solution is 1 M and 1.8 × 10^–2 M, respectively. Molar solubility of PbSO[4] in the same solution is X × 10^–Y M (expressed in scientific notation). The value of Y is _________. [Given: Solubility product of PbSO[4] (K[sp]) = 1.6 × 10^–8. For H[2]SO[4], K[a1] is very large and K[a2] = 1.2 × 10^–2] 2. An aqueous solution is prepared by dissolving 0.1 mol of an ionic salt in 1.8 kg of water at 35 ºC. The salt remains 90% dissociated in the solution. The vapour pressure of the solution is 59.724 mm of Hg. Vapour pressure of water at 35 ºC is 60.000 mm of Hg. The number of ions present per formula unit of the ionic salt is _______. 3. Consider the strong electrolytes Z[m]X[n], U[m]Y[p] and V[m]X[n]. Limiting molar conductivity (⋀^0) of U[m]Y[p] and V[m]X[n] are 250 and 440 S cm^2 mol^–1, respectively. The plot of molar conductivity (⋀) of Z[m]X[n] vs c^1/2 is given below. The value of (m + n + p) is _______. 4. The reaction of Xe and O[2]F[2] gives a Xe compound P. The number of moles of HF produced by the complete hydrolysis of 1 mol of P is _______. 5. Thermal decomposition of AgNO[3] produces two paramagnetic gases. The total number of electrons present in the antibonding molecular orbitals of the gas that has the higher number of unpaired electrons is _______. 6. The number of isomeric tetraenes (NOT containing sp-hybridized carbon atoms) that can be formed from the following reaction sequence is ________. 7. The number of –CH[2]– (methylene) groups in the product formed from the following reaction sequence is ________. 8. The total number of chiral molecules formed from one molecule of P on complete ozonolysis (O[3], Zn/H[2]O) is ________. SECTION-2: (Maximum Marks : 24) • This section contains SIX (06) questions. • Each question has FOUR options (A), (B), (C) and (D). ONE OR MORE THAN ONE of these four option(s) is (are) correct answer(s).
• For each question, choose the option(s) corresponding to (all) the correct answer(s).
• Answer to each question will be evaluated according to the following marking scheme:
Full Marks : +4 If ONLY (all) the correct option(s) is(are) chosen;
Partial Marks : +3 If all the four options are correct but ONLY three options are chosen;
Partial Marks : +2 If three or more options are correct but ONLY two options are chosen, both of which are correct;
Partial Marks : +1 If two or more options are correct but ONLY one option is chosen and it is a correct option;
Zero Marks : 0 If unanswered;
Negative Marks : −2 In all other cases.

9. To check the principle of multiple proportions, a series of pure binary compounds (P[m]Q[n]) were analyzed and their composition is tabulated below. The correct option(s) is(are)
(A) If empirical formula of compound 3 is P[3]Q[4], then the empirical formula of compound 2 is P[3]Q[5].
(B) If empirical formula of compound 3 is P[3]Q[2] and atomic weight of element P is 20, then the atomic weight of Q is 45.
(C) If empirical formula of compound 2 is PQ, then the empirical formula of the compound 1 is P[5]Q[4].
(D) If atomic weight of P and Q are 70 and 35, respectively, then the empirical formula of compound 1 is P[2]Q.

10. The correct option(s) about entropy (S) is(are) [R = gas constant, F = Faraday constant, T = Temperature]
(A) For the reaction, M(s) + 2H^+(aq) → H[2](g) + M^2+(aq), if
(B) The cell reaction, Pt(s) | H[2](g, 1 bar) | H^+(aq, 0.01 M) || H^+(aq, 0.1 M) | H[2](g, 1 bar) | Pt(s), is an entropy driven process.
(C) For racemization of an optically active compound, ∆S > 0.
(D) ∆S > 0, for [Ni(H[2]O)[6]]^2+ + 3 en → [Ni(en)[3]]^2+ + 6H[2]O (where en = ethylenediamine).

11. The compound(s) which react(s) with NH[3] to give boron nitride (BN) is(are)
(A) B (B) B[2]H[6] (C) B[2]O[3] (D) HBF[4]

12.
The correct option(s) related to the extraction of iron from its ore in the blast furnace operating in the temperature range 900 – 1500 K is(are)
(A) Limestone is used to remove silicate impurity.
(B) Pig iron obtained from blast furnace contains about 4% carbon.
(C) Coke (C) converts CO[2] to CO.
(D) Exhaust gases consist of NO[2] and CO.

13. Considering the following reaction sequence, the correct statement(s) is(are)
(A) Compounds P and Q are carboxylic acids.
(B) Compound S decolorizes bromine water.
(C) Compounds P and S react with hydroxylamine to give the corresponding oximes.
(D) Compound R reacts with dialkylcadmium to give the corresponding tertiary alcohol.

14. Among the following, the correct statement(s) about polymers is(are)
(A) The polymerization of chloroprene gives natural rubber.
(B) Teflon is prepared from tetrafluoroethene by heating it with persulphate catalyst at high pressures.
(C) PVC are thermoplastic polymers.
(D) Ethene at 350-570 K temperature and 1000-2000 atm pressure in the presence of a peroxide initiator yields high density polythene.

SECTION-3: (Maximum Marks : 12)
• This section contains FOUR (04) questions.
• Each question has FOUR options (A), (B), (C) and (D). ONLY ONE of these four options is the correct answer.
• For each question, choose the option corresponding to the correct answer.
• Answer to each question will be evaluated according to the following marking scheme:
Full Marks : +3 If ONLY the correct option is chosen;
Zero Marks : 0 If none of the options is chosen (i.e. the question is unanswered);
Negative Marks : −1 In all other cases.

15. Atom X occupies the fcc lattice sites as well as alternate tetrahedral voids of the same lattice. The packing efficiency (in %) of the resultant solid is closest to
(A) 25 (B) 35 (C) 55 (D) 75

16. The reaction of HClO[3] with HCl gives a paramagnetic gas, which upon reaction with O[3] produces
(A) Cl[2]O (B) ClO[2] (C) Cl[2]O[6] (D) Cl[2]O[7]

17.
The reaction of Pb(NO[3])[2] and NaCl in water produces a precipitate that dissolves upon the addition of HCl of appropriate concentration. The dissolution of the precipitate is due to the formation of
(A) PbCl[2] (B) PbCl[4] (C) [PbCl[4]]^2− (D) [PbCl[6]]^2−

18. Treatment of D-glucose with aqueous NaOH results in a mixture of monosaccharides, which are

SECTION-1 : (Maximum Marks : 24)
• This section contains EIGHT (08) questions.
• The answer to each question is a SINGLE DIGIT INTEGER ranging from 0 to 9, BOTH INCLUSIVE.
• For each question, enter the correct integer corresponding to the answer using the mouse and the on-screen virtual numeric keypad in the place designated to enter the answer.
• Answer to each question will be evaluated according to the following marking scheme:
Full marks : +3 If ONLY the correct integer is entered;
Zero Marks : 0 If the question is unanswered;
Negative Marks : −1 In all other cases.

1. Let α and β be real numbers such that

2. If y(x) is the solution of the differential equation x dy – (y^2 – 4y) dx = 0 for x > 0, y(1) = 2, and the slope of the curve y = y(x) is never zero, then the value of 10y(√2) is ______.

3. The greatest integer less than or equal to

4. The product of all positive real values of x satisfying the equation

5. If Then the value of 6β is ______.

6. Let β be a real number. Consider the matrix If A^7 – (β – 1)A^6 – βA^5 is a singular matrix, then the value of 9β is _______.

7. Consider the hyperbola [1], where S lies on the positive x-axis. Let P be a point on the hyperbola, in the first quadrant. Let ∠SPS[1] = α, with α < π/2. The straight line passing through the point S and having the same slope as that of the tangent at P to the hyperbola, intersects the straight line S[1]P at P[1]. Let δ be the distance of P from the straight line SP[1], and β = S[1] Then the greatest integer less than or equal to

8.
Consider the functions f, g : ℝ → ℝ defined by If α is the area of the region

SECTION-2: (Maximum Marks : 24)
• This section contains SIX (06) questions.
• Each question has FOUR options (A), (B), (C) and (D). ONE OR MORE THAN ONE of these four option(s) is (are) correct answer(s).
• For each question, choose the option(s) corresponding to (all) the correct answer(s).
• Answer to each question will be evaluated according to the following marking scheme:
Full Marks : +4 If ONLY (all) the correct option(s) is(are) chosen;
Partial Marks : +3 If all the four options are correct but ONLY three options are chosen;
Partial Marks : +2 If three or more options are correct but ONLY two options are chosen, both of which are correct;
Partial Marks : +1 If two or more options are correct but ONLY one option is chosen and it is a correct option;
Zero Marks : 0 If unanswered;
Negative Marks : −2 In all other cases.

9. Let PQRS be a quadrilateral in a plane, where QR = 1, ∠PQR = ∠QRS = 70°, ∠PQS = 15° and ∠PRS = 40°. If ∠RPS = θ°, PQ = α and PS = β, then the interval(s) that contain(s) the value of 4αβ sin θ° is
(A) (0, √2) (B) (1, 2) (C) (√2, 3) (D) (2√2, 3√2)

10. Let Let g : [0, 1] → ℝ be the function defined by g(x) = 2^(αx) + 2^(α(1 – x)). Then, which of the following statements is/are TRUE?
(A) The minimum value of g(x) is 2^(7/6)
(B) The maximum value of g(x) is 1 + 2^(1/3)
(C) The function g(x) attains its maximum at more than one point
(D) The function g(x) attains its minimum at more than one point

11. Let

12. Let G be a circle of radius R > 0. Let G[1], G[2], …, G[n] be n circles of equal radius r > 0. Suppose each of the n circles G[1], G[2], …, G[n] touches the circle G externally. Also, for i = 1, 2, …, n – 1, the circle G[i] touches G[i+1] externally, and G[n] touches G[1]. Then, which of the following statements is/are TRUE?
(A) If n = 4, then (√2 – 1) r < R
(B) If n = 5, then r < R
(C) If n = 8, then (√2 – 1) r < R
(D) If n = 12, then √2(√3 + 1) r > R

13.
Let be three vectors such that b[2]b[3] > 0, Then, which of the following is/are TRUE?

14. For x ∈ ℝ, let the function y(x) be the solution of the differential equation Then, which of the following statements is/are TRUE?
(A) y(x) is an increasing function
(B) y(x) is a decreasing function
(C) There exists a real number β such that the line y = β intersects the curve y = y(x) at many points.
(D) y(x) is a periodic function

SECTION-3: (Maximum Marks : 12)
• This section contains FOUR (04) questions.
• Each question has FOUR options (A), (B), (C) and (D). ONLY ONE of these four options is the correct answer.
• For each question, choose the option corresponding to the correct answer.
• Answer to each question will be evaluated according to the following marking scheme:
Full Marks : +3 If ONLY the correct option is chosen;
Zero Marks : 0 If none of the options is chosen (i.e. the question is unanswered);
Negative Marks : −1 In all other cases.

15. Consider 4 boxes, where each box contains 3 red balls and 2 blue balls. Assume that all 20 balls are distinct. In how many different ways can 10 balls be chosen from these 4 boxes so that from each box at least one red ball and one blue ball are chosen?
(A) 21816 (B) 85536 (C) 12096 (D) 156816

16. If ^2022?

17. Suppose that Box-I contains 8 red, 3 blue and 5 green balls, Box-II contains 24 red, 9 blue and 15 green balls, Box-III contains 1 blue, 12 green and 3 yellow balls, Box-IV contains 10 green, 16 orange and 6 white balls. A ball is chosen randomly from Box-I; call this ball b. If b is red then a ball is chosen randomly from Box-II, if b is blue then a ball is chosen randomly from Box-III, and if b is green then a ball is chosen randomly from Box-IV. The conditional probability of the event 'one of the chosen balls is white' given that the event 'at least one of the chosen balls is green' has happened, is equal to
(A) 15/256 (B) 3/16 (C) 5/52 (D) 1/8

18. For positive integer n, define Then, the value of
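Counting questions like Q15 above lend themselves to a direct computational check. The sketch below assumes the standard reading of the question — 10 distinct balls chosen in total from the 4 boxes, with at least one red and one blue taken from every box — and enumerates the per-box (red, blue) counts:

```python
from itertools import product
from math import comb

def ways_q15() -> int:
    """Count selections of 10 distinct balls from 4 boxes (3 red + 2 blue each)
    with at least one red and one blue taken from every box."""
    total = 0
    # per-box choices: take r of the 3 reds (r = 1..3) and b of the 2 blues (b = 1..2)
    per_box = [(r, b) for r in range(1, 4) for b in range(1, 3)]
    for combo in product(per_box, repeat=4):
        if sum(r + b for r, b in combo) == 10:
            ways = 1
            for r, b in combo:
                ways *= comb(3, r) * comb(2, b)
            total += ways
    return total

print(ways_q15())  # 21816, i.e. option (A)
```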
All About difference between one way slab and two way slab

Slabs are an essential structural element in building construction, providing a flat and level surface for floors, roofs, and ceilings. There are two main types of slabs used in construction: one way slab and two way slab. While the two may seem similar, there are significant differences in their design, construction, and application. In this article, we will delve into the key differences between these two types of slabs and their respective pros and cons, to help you understand which one best suits your building needs. So, let's dive into the world of slabs and explore the fundamental dissimilarities between one way slab and two way slab.

What is difference between one way slab and two way slab

A slab is a horizontal structural element that is designed to resist the loads acting on it, such as the weight of the building, live loads, and weather loads. There are two main types of slabs used in construction: one way slab and two-way slab. The main difference between these two slabs lies in their structural design and load-carrying capacity.

One-way slab: A one-way slab is a structural element that is designed and constructed to carry most of its load in one direction. This type of slab is used in spans where the length is more significant than the width. In a one-way slab, the primary reinforcement is provided in one direction, usually along the shorter span of the slab. The one-way slab is supported on two opposite sides by beams or walls, and the other two edges remain free.

Two-way slab: A two-way slab is a structural element that is designed and constructed to carry the load in both directions, i.e., along the length and breadth. This type of slab is used in small spans where the length and width are similar. The primary reinforcement is generally provided in both directions, and the slab is supported on all four sides.

Now, let us understand the differences between these two slabs in more detail:

1.
Design and Load-carrying capacity: In a one-way slab, the primary reinforcement is provided in one direction, which means that the slab is best suited for carrying loads in a single direction. On the other hand, in a two-way slab, the primary reinforcement is provided in both directions, making it more suitable for carrying loads in both directions. This difference in the design and reinforcement also affects the load-carrying capacity of the two slabs. Two-way slabs can carry higher loads compared to one-way slabs due to their structural design.

2. Spanning capacity: One-way slabs have a lower spanning capacity compared to two-way slabs. One-way slabs can span up to a maximum of 6 meters, whereas two-way slabs can span up to 10 meters. This is because one-way slabs are supported on only two edges, while two-way slabs are supported on all four edges, making them more efficient in supporting a larger span.

3. Construction process: The construction process for one-way and two-way slabs also differs. One-way slabs are relatively easier to construct as the reinforcement is only provided in one direction. However, for two-way slabs, the reinforcement is provided in both directions, making the construction process slightly more complicated.

4. Deflection: One-way slabs are more prone to deflection as they have a lower load-carrying capacity and are only supported on two edges. On the other hand, two-way slabs have a higher load-carrying capacity and are supported on all four sides, making them less prone to deflection.

In summary, the main difference between one-way and two-way slabs lies in their structural design, load-carrying capacity, spanning capacity, construction process, and deflection. Both these slabs have their advantages and disadvantages, and the selection of the type of slab depends on several factors such as span, load, and site conditions.
Therefore, it is crucial to consult a structural engineer to determine the most suitable type of slab for a particular construction project.

What is one way slab?

A one way slab is a type of reinforced concrete slab that is commonly used in construction projects. It is called 'one-way' because it is designed to resist only one-way bending forces. This means that the slab will only be supported along two opposite edges, and the other two edges will be free to move.

One-way slabs are primarily used in low-rise buildings and horizontal structures such as floors, roofs, and bridges. They are also commonly used in residential, commercial, and industrial buildings as they provide a simple and cost-effective solution for spanning large areas.

The construction of a one-way slab involves placing a reinforced concrete layer on top of a supporting structure, which can be beams, walls, or columns. The slab thickness is determined by the span length and the load requirements. The reinforcement is placed within the slab to increase its strength and resistance to bending and cracking.

There are several advantages to using one-way slabs in construction projects. Firstly, they are easy to construct and require less formwork compared to other types of slabs. This results in faster construction times, reducing overall project costs. One-way slabs are also lightweight, making them suitable for use in areas with weak soil conditions.

One-way slabs can be designed with different types of reinforcement, depending on the load and span requirements. The most commonly used reinforcement types are mild steel bars, high-strength steel bars, and steel mesh. These reinforcements provide the slab with the necessary strength to resist bending forces and distribute the load evenly across its surface.

One of the main limitations of one-way slabs is their ability to span shorter distances compared to other types of slabs such as two-way slabs or flat slabs.
This makes them unsuitable for use in large-span structures, which require more support to resist loads.

In conclusion, one-way slabs are versatile, easy-to-construct, and cost-effective elements in building construction. They are suitable for use in a variety of structures and provide the necessary strength and stability to withstand bending forces. However, their use is limited to shorter spans, and proper design and construction are crucial to ensure their structural integrity.

What is two way rcc slab?

A two-way reinforced concrete (RCC) slab is a type of structural element commonly used in building construction. It consists of a horizontal concrete slab supported by two or more beams on its opposite edges and columns or walls at its corners. This type of slab is used to span a distance between supporting beams or walls and transfer the load to these supports.

There are two main types of two-way RCC slabs: flat slab and grid slab. In a flat slab, the slab is supported directly on the columns or walls without any beams in between, while in a grid slab, the slab is supported on beams that are placed in a grid pattern.

The design of a two-way RCC slab is based on the principle of distributing the load over the entire area of the slab, rather than just along its edges. This is achieved through the use of reinforcement, which helps to increase the strength and stability of the slab. The reinforcement is usually in the form of steel bars or mesh embedded within the concrete. The bars are placed in both directions, forming a grid pattern, which gives the slab its name. This reinforcement helps to resist the tensile forces and prevent the slab from cracking or failing.

The thickness of the slab is determined by the span between the supporting beams or walls, the type of loads expected, and the strength of the materials used. Typically, a two-way RCC slab is thicker than a one-way slab as it needs to support the additional loads from both directions.
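As a rough illustration of the one-way/two-way distinction discussed above, a common design rule of thumb classifies a slab by the ratio of its longer to its shorter span. The cutoff of 2 used below is an assumption drawn from that convention (e.g., IS 456 practice), not a figure stated in this article:

```python
def classify_slab(span_long_m: float, span_short_m: float) -> str:
    """Classify a slab as one-way or two-way by its span ratio.

    Rule of thumb (assumed here): if longer span / shorter span > 2,
    the slab carries load mainly across the short span (one-way);
    otherwise it shares load in both directions (two-way).
    """
    if span_short_m <= 0 or span_long_m < span_short_m:
        raise ValueError("spans must be positive, with span_long_m >= span_short_m")
    ratio = span_long_m / span_short_m
    return "one-way" if ratio > 2 else "two-way"

# A 6 m x 2.5 m panel spans mainly across the 2.5 m direction:
print(classify_slab(6.0, 2.5))  # one-way (ratio 2.4)
print(classify_slab(5.0, 4.0))  # two-way (ratio 1.25)
```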
The construction process of a two-way RCC slab involves placing formwork on top of the support beams or walls to hold the concrete in place during pouring. The reinforcement is then placed as per the design and tied together to create a strong grid. Finally, the concrete is poured and left to cure before the formwork is removed.

Two-way RCC slabs have several advantages over other types of slabs. They are more economical as they require fewer materials and can span longer distances. They also offer greater flexibility in terms of layout and can accommodate openings and penetrations easily.

In conclusion, two-way RCC slabs are an efficient and popular choice for building construction due to their strength, stability, and flexibility. They provide a robust and cost-effective solution for spanning large distances and can be designed to meet specific project requirements.

Reinforcement used in one way slab

Reinforcement is an essential aspect of construction in civil engineering, especially in the design and construction of slabs. Slabs are horizontal structural elements that are used to provide a flat, stable surface for buildings, bridges, and other structures. They distribute the loads above them to the supporting beams, walls or columns below. In this article, we will discuss the reinforcement used in one-way slab.

One-way slab is a type of reinforced concrete slab which is supported by beams on two opposite sides and the loads are transferred along the perpendicular direction to these beams. This type of slab is mainly used in buildings where the span length is long in one direction and the load is applied in that same direction.

Reinforcement in one-way slab is mainly in the form of mild steel bars, also known as rebar, which are embedded in the concrete to resist the tensile forces and provide the required strength to the slab. The reinforcement is placed in the bottom of the slab, also known as the tension zone, as this is where the slab is prone to cracking and failure under the influence of bending moments.

The spacing and size of reinforcement bars are determined by the structural engineer based on the structural analysis and design. In one-way slab, the bars are placed parallel to each other and perpendicular to the span of the slab. The number of bars and their spacing varies depending on the design requirements, such as the span length, live load, dead load, and the strength of the concrete. The minimum spacing between bars is usually 3 times the bar diameter.

In addition to the bottom reinforcement, one-way slabs may also have top reinforcement in the form of steel bars placed parallel to the supporting beams. This top reinforcement helps to resist the shear forces and prevents the slab from collapsing due to excessive bending moments. The size and spacing of these top bars are also determined by the structural engineer during the design phase.

To ensure proper bonding between the reinforcement and the concrete, the bars are cleaned and coated with a layer of anti-rust epoxy before being placed in the concrete. This also helps to prevent corrosion of the bars, which can weaken the structure over time.

In conclusion, reinforcement is a crucial component in the one-way slab design as it increases the strength and durability of the slab. It helps to distribute the loads and prevent cracking and failure of the slab. As a civil engineer, it is essential to carefully consider the structural analysis and design principles to ensure the appropriate reinforcement is used for one-way slabs.
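To make the spacing discussion concrete, here is a small sketch. The 3-times-bar-diameter minimum spacing is taken directly from the text above; the function name and inputs are illustrative assumptions, not a design-code calculation:

```python
def bar_layout(slab_width_mm: float, bar_dia_mm: float, spacing_mm: float):
    """Check a proposed bar spacing against the minimum quoted in the text
    (3 x bar diameter) and count how many bars fit across the slab width."""
    min_spacing = 3 * bar_dia_mm
    if spacing_mm < min_spacing:
        raise ValueError(f"spacing {spacing_mm} mm is below the minimum {min_spacing} mm")
    # one bar at the starting edge, plus one per full spacing interval
    n_bars = int(slab_width_mm // spacing_mm) + 1
    return n_bars, min_spacing

# 3 m wide slab, 12 mm bars at 150 mm centres:
n, s_min = bar_layout(3000, 12, 150)
print(n, s_min)  # prints: 21 36
```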
Reinforcement used in two way rcc slab

Reinforcement is used in two-way reinforced concrete (RCC) slabs to increase their strength and ability to withstand loads. Two-way RCC slabs are commonly used in buildings, bridges, and other structures that require large spans and heavy loads. The reinforcement within the slab captures and distributes the loads applied to it, providing structural stability and preventing failure.

The reinforcement used in two-way RCC slabs consists of reinforcing bars or steel meshes that are embedded in the concrete matrix in a criss-cross pattern. The two-way slab system is designed to resist both bending and shear stress, unlike one-way slabs which mainly rely on bending resistance. This type of reinforcement creates a grid of interconnected ribs, allowing the slab to distribute loads in both directions.

The reinforcement is placed in the bottom half of the slab, with the bars running in one direction placed at the bottom of the layer and the bars running in the other direction placed on top of them. The bars are usually placed in a square or rectangular grid pattern, with spacing between them based on the design requirements and the load demands on the slab.

The primary reinforcement bars used in two-way RCC slabs are typically of high-yield strength and are corrosion-resistant. The most commonly used reinforcement is mild steel bars conforming to IS 1786, while high-strength steel bars can also be used as per the design demands.

Apart from the primary reinforcement, secondary reinforcement is also used in two-way RCC slabs. These bars are usually placed in the opposite direction to the primary bars and help in controlling shrinkage cracks. They also act as temperature reinforcement, preventing the concrete from cracking due to temperature variations. These bars are usually of smaller diameter compared to the primary reinforcement, and they are evenly distributed throughout the slab.
The size, spacing, and layout of the reinforcement in two-way RCC slabs are determined based on the design requirements and the expected loading conditions. The design process involves calculating the bending moments, shear forces, and deflection of the slab under various load conditions and using these values to determine the required amount of reinforcement.

In conclusion, the reinforcement used in two-way RCC slabs plays a crucial role in ensuring their strength and stability. The right type, size, and spacing of the reinforcement are essential to withstand the loads and prevent failures. Proper design, placement, and quality control during construction are necessary to ensure the effectiveness of the reinforcement in two-way RCC slabs.

In conclusion, understanding the difference between one way slab and two way slab is crucial when it comes to constructing buildings. Two way slabs can carry heavier loads and span larger panels, while one way slabs suit panels that are much longer in one direction. Additionally, the design and use of reinforcement for each type of slab differ significantly. It is essential to consult a structural engineer to determine the most suitable type of slab for a particular building project to ensure safety and structural integrity. With this knowledge, constructors and engineers can make informed decisions and construct durable and stable structures. The difference between one way slab and two way slab may seem subtle, but it plays a crucial role in the design and construction of buildings.
Common sense, sample selection, representative samples, and sample sizes

As Statistics Canada continues to roll out the results from the National Household Survey, I seem to become involved in arguments at least once a week as to the importance of sample selection in survey data. This week, my argument was with IPSOS CEO Darrell Bricker – someone who should know a lot about statistics. In particular, Mr. Bricker should know that you can't solve a sample selection problem with an increased sample size, and I actually think he does. I think the issue is that he's thinking about practical polling issues with respect to sampling, not about statistical issues with respect to selected samples.

Statistics Canada differentiates between sampling error and non-sampling errors, and I think that's where our key difference lies. Let me see if I can explain this, and hopefully Mr. Bricker will respond and let me know if I am on the right track.

Almost all statistics relies on samples of the population. As long as your sample is representative, or appropriately weighted, your sample estimates will be unbiased or accurate, in that on average you'd get the same value in the sample as in the population as a whole. Your estimates may not be precise, however, as they will have large margins of error if your sample is small. This is why we can use samples to draw conclusions on the characteristics of the population as a whole.

Let me give you an example. If you are conducting a poll of Canadians, and you receive responses from 1000 people, you've covered 0.003% of the population with your survey. Even if the people you call are selected at random, and non-response is also random, you don't necessarily have a representative sample. With a small sample, you will invariably over-sample some small groups and under-sample others. For example, the chance of being from Miramichi, NB is approximately 0.05% in the population as a whole.
As such, in a representative panel of 1000 Canadians, you'd have 1/2 of a Miramichier. Of course, you can't have 1/2 a Miramichier, so you either have 1 or more (over-represented) or 0 (under-represented) in your sample. The same goes for other small segments of the population, whether they are differentiated by geographic, demographic, religious, or other characteristics. You can solve this problem with a combination of larger sample sizes, over-sampling smaller groups, and appropriately weighting observations. Luckily, we have the mandatory short form census and other institutional data which allow you to weight observations in a small sample with probabilities which reflect the proportion of the total population made up by the type of person associated with each observation. Weights also allow you to over-sample small regions, so that (to continue the example above) the opinion of one random Miramichier is not taken to be representative of the population of Miramichi: you'd sample a greater share of the population in smaller regions and down-weight them in the overall results. Done correctly, with random non-response, you'll get a good sense of the characteristics of the entire population with a relatively small sample of it.

Statistical weighting can only get you so far – you can't use sample weighting to correct for a sample which is non-representative with respect to variables not in your weighting survey, or for a sample which is not random due to non-response bias. This is the core problem with the NHS. Put simply, you can't increase the weight applied to observations you don't have. Statistics Canada states that:

In every self-administered voluntary survey, error due to non-response to the survey's variables makes up a substantial portion of the non-sampling error. Non-response is likely to bias the estimates based on the survey, because non-respondents tend to have different characteristics from respondents.
As a result, there is a risk that the results will not be representative of the actual population. This could be solved with a larger sample if it were a sampling problem, or could be solved with weights if we had information on the underlying population in the short form census. The problem is that the entire point of the NHS was to ask questions which we don't ask in the short form census, so we won't know if we have low response rates in those areas because we have no reference point – for now, we can rely partially on previous iterations of the census, but those will quickly become obsolete. We can increase the sample size, but it might not help if non-response is inherent to the group itself. As Bricker pointed out on Twitter, this was true of the old long form as well, but errors in the old tool don't imply reliability of the new one. In response to this issue, Bricker said that "this is about common sense research issues, not math formulae." I disagree – this is entirely about math formulae and understanding the difference between sampling and non-sampling error and sample size issues. You can't always correct for non-response bias using a larger sample size or by re-weighting observations. Worse yet, we won't always know if our results are biased by non-response, and in this case, higher sample sizes actually exacerbate the problem.

How can a larger sample size exacerbate the problem? It's simple – precision of estimates increases with your sample size. Imagine I take a survey which experiences non-response bias. Even though the true value in the population would be 1, I get a biased estimate of 0.8 +/- 0.2, 19 times out of 20, because people with higher values tend to not respond. The +/- 0.2 will decline as my sample size increases, even if the bias remains. With a larger sample, I might get an estimate of 0.8 +/- 0.05, 19 times out of 20 – a more precise, but still inaccurate, answer. No survey is perfect, and this includes the census (old or new).
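The mechanics of that last paragraph are easy to simulate. Below is a short Python sketch with an invented population (true mean 1) in which people with above-average values are half as likely to respond; the margin of error shrinks with the sample size while the bias does not:

```python
import random

random.seed(1)

def biased_survey(n):
    """Collect n responses from a hypothetical population with true mean 1,
    where people with above-average values are half as likely to respond."""
    responses = []
    while len(responses) < n:
        value = random.gauss(1.0, 1.0)
        respond_prob = 0.8 if value < 1.0 else 0.4  # non-response bias
        if random.random() < respond_prob:
            responses.append(value)
    mean = sum(responses) / n
    var = sum((x - mean) ** 2 for x in responses) / (n - 1)
    margin = 1.96 * (var / n) ** 0.5  # 19-times-out-of-20 margin of error
    return mean, margin

for n in (1_000, 100_000):
    mean, margin = biased_survey(n)
    print(f"n={n}: estimate {mean:.3f} +/- {margin:.3f} (true value: 1.0)")
```

The large sample gives a much tighter interval around the same wrong answer: precision improves, accuracy does not.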
That said, it's important to understand the limits of the information we can glean from the NHS and to not suppose that we will always know when the data are biased or not. That, I agree, is common sense.
{"url":"https://andrewleach.ca/uncategorized/common-sense-sample-selection-representative-samples-and-sample-sizes/","timestamp":"2024-11-08T10:31:26Z","content_type":"text/html","content_length":"47431","record_id":"<urn:uuid:2e98d8ea-ee1d-44e3-b156-7ef8c4725eb0>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00473.warc.gz"}
LESSON 9. Shear Strength of Soil

9.1. Shear Resistance

Sliding between the particles during loading is a major factor in the deformation of soil. The resistance that the soil offers during deformation is mainly due to the shear resistance between the particles at their contact points. Figure 9.1 shows two particles in contact, which is similar to the contact between two bodies. The normal force (N) is perpendicular to the contact surface and the shear force (τ) is the tangential force parallel to the contact surface. During sliding between two bodies, the maximum shear force can be written as:

\[{\tau _{\max }}=\mu N\] (9.1)

where μ is the coefficient of friction. In the case of soil particles, the maximum shear force can be written as:

\[{\tau _{\max }}=N\tan \phi \] (9.2)

where φ is the angle of internal friction of the soil.

9.2 Mohr Circle

At any stressed point, three mutually perpendicular planes exist on which the shear stress is zero. These planes are called principal planes. The normal stresses that act on these planes are called principal stresses. The largest principal stress is called the major principal stress (σ1), the lowest principal stress is called the minor principal stress (σ3) and the third stress is called the intermediate principal stress (σ2). The corresponding planes are called the major, minor and intermediate planes, respectively. The critical stress values generally occur on the plane normal to the intermediate plane. Thus, only σ1 and σ3 are considered. Figure 9.2 shows an element and the directions of σ1 and σ3. The major and minor principal planes are also shown. The major and minor principal planes are in the horizontal and vertical directions, respectively.
The normal stress and shear stress on any plane making an angle θ with the horizontal can be determined analytically as:

\[\sigma={{{\sigma _1} + {\sigma _3}} \over 2} + {{{\sigma _1} - {\sigma _3}} \over 2}\cos 2\theta \] (9.3)

\[\tau={{{\sigma _1} - {\sigma _3}} \over 2}\sin 2\theta\] (9.4)

The stresses can also be determined graphically using the Mohr Circle, as shown in Figure 9.2. The Mohr Circle is drawn on normal (σ) and shear (τ) axes. Compressive normal stress is considered positive. Shear stress that produces anti-clockwise couples on the element is considered positive. The circle is drawn by taking O [(σ1 + σ3)/2, 0] as the center and (σ1 - σ3)/2 as the radius (as shown in Figure 9.2). Now, from the point (σ3, 0), draw a line parallel to the AB plane. The line intersects the Mohr Circle at a point whose coordinates represent the normal and shear stress acting on the AB plane [D(σ, τ)]. The point A (σ3, 0) is called the pole or the origin of planes.

Fig. 9.1. Two particles/bodies in contact.

Fig. 9.2. Mohr Circle.

Ranjan, G. and Rao, A.S.R. (2000). Basic and Applied Soil Mechanics. New Age International Publisher, New Delhi, India.

Suggested Reading

Ranjan, G. and Rao, A.S.R. (2000) Basic and Applied Soil Mechanics. New Age International Publisher, New Delhi, India.

Arora, K.R. (2003) Soil Mechanics and Foundation Engineering. Standard Publishers Distributors, New Delhi, India.

Murthy V.N.S (1996) A Text Book of Soil Mechanics and Foundation Engineering, UBS Publishers' Distributors Ltd. New Delhi, India.

Last modified: Monday, 23 September 2013, 8:47 AM
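Equations 9.3 and 9.4 are easy to check numerically. The following Python sketch (the stress values are invented for illustration) computes the normal and shear stress on a plane inclined at θ to the horizontal, using the Mohr-circle geometry with centre (σ1 + σ3)/2 and radius (σ1 - σ3)/2:

```python
import math

def plane_stresses(sigma1, sigma3, theta_deg):
    # Eqs. 9.3 and 9.4: stresses on a plane making angle theta
    # with the horizontal (the major principal plane)
    two_theta = math.radians(2 * theta_deg)
    centre = (sigma1 + sigma3) / 2   # centre of the Mohr circle
    radius = (sigma1 - sigma3) / 2   # radius of the Mohr circle
    sigma = centre + radius * math.cos(two_theta)
    tau = radius * math.sin(two_theta)
    return sigma, tau

# illustrative values: sigma1 = 300 kPa, sigma3 = 100 kPa
for theta in (0, 30, 45, 90):
    s, t = plane_stresses(300, 100, theta)
    print(f"theta = {theta:2d} deg: sigma = {s:6.1f} kPa, tau = {t:6.1f} kPa")
```

At θ = 0 the plane carries σ1 with no shear; the shear stress peaks at θ = 45°, where it equals the circle's radius.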
{"url":"http://ecoursesonline.iasri.res.in/mod/page/view.php?id=285","timestamp":"2024-11-04T20:50:45Z","content_type":"text/html","content_length":"32285","record_id":"<urn:uuid:d17704f2-c134-4c29-a872-08432eb1a2f8>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00043.warc.gz"}
4.5. Random Projection

The sklearn.random_projection module implements a simple and computationally efficient way to reduce the dimensionality of the data by trading a controlled amount of accuracy (as additional variance) for faster processing times and smaller model sizes. This module implements two types of unstructured random matrices: the Gaussian random matrix and the sparse random matrix.

The dimensions and distribution of random projection matrices are controlled so as to preserve the pairwise distances between any two samples of the dataset. Thus random projection is a suitable approximation technique for distance-based methods.

• Sanjoy Dasgupta. 2000. Experiments with random projection. In Proceedings of the Sixteenth conference on Uncertainty in artificial intelligence (UAI'00), Craig Boutilier and Moisés Goldszmidt (Eds.). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 143-151.

• Ella Bingham and Heikki Mannila. 2001. Random projection in dimensionality reduction: applications to image and text data. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining (KDD '01). ACM, New York, NY, USA, 245-250.

4.5.1. The Johnson-Lindenstrauss lemma

The main theoretical result behind the efficiency of random projection is the Johnson-Lindenstrauss lemma (quoting Wikipedia):

In mathematics, the Johnson-Lindenstrauss lemma is a result concerning low-distortion embeddings of points from high-dimensional into low-dimensional Euclidean space. The lemma states that a small set of points in a high-dimensional space can be embedded into a space of much lower dimension in such a way that distances between the points are nearly preserved. The map used for the embedding is at least Lipschitz, and can even be taken to be an orthogonal projection.
Knowing only the number of samples, the sklearn.random_projection.johnson_lindenstrauss_min_dim estimates conservatively the minimal size of the random subspace to guarantee a bounded distortion introduced by the random projection:

>>> from sklearn.random_projection import johnson_lindenstrauss_min_dim
>>> johnson_lindenstrauss_min_dim(n_samples=1e6, eps=0.5)
663
>>> johnson_lindenstrauss_min_dim(n_samples=1e6, eps=[0.5, 0.1, 0.01])
array([    663,   11841, 1112658])
>>> johnson_lindenstrauss_min_dim(n_samples=[1e4, 1e5, 1e6], eps=0.1)
array([ 7894,  9868, 11841])

4.5.2. Gaussian random projection

The sklearn.random_projection.GaussianRandomProjection reduces the dimensionality by projecting the original input space on a randomly generated matrix whose components are drawn from the distribution N(0, 1/n_components).

Here a small excerpt which illustrates how to use the Gaussian random projection transformer:

>>> import numpy as np
>>> from sklearn import random_projection
>>> X = np.random.rand(100, 10000)
>>> transformer = random_projection.GaussianRandomProjection()
>>> X_new = transformer.fit_transform(X)
>>> X_new.shape
(100, 3947)

4.5.3. Sparse random projection

The sklearn.random_projection.SparseRandomProjection reduces the dimensionality by projecting the original input space using a sparse random matrix. Sparse random matrices are an alternative to the dense Gaussian random projection matrix that guarantees similar embedding quality while being much more memory efficient and allowing faster computation of the projected data.

If we define s = 1 / density, the elements of the random matrix are drawn from

  -sqrt(s / n_components)   with probability 1 / 2s
   0                        with probability 1 - 1 / s
  +sqrt(s / n_components)   with probability 1 / 2s

Here a small excerpt which illustrates how to use the sparse random projection transformer:

>>> import numpy as np
>>> from sklearn import random_projection
>>> X = np.random.rand(100, 10000)
>>> transformer = random_projection.SparseRandomProjection()
>>> X_new = transformer.fit_transform(X)
>>> X_new.shape
(100, 3947)

• D. Achlioptas. 2003.
Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal of Computer and System Sciences 66 (2003) 671–687

• Ping Li, Trevor J. Hastie, and Kenneth W. Church. 2006. Very sparse random projections. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD '06). ACM, New York, NY, USA, 287-296.
{"url":"https://scikit-learn.org/0.18/modules/random_projection.html","timestamp":"2024-11-14T11:49:12Z","content_type":"application/xhtml+xml","content_length":"23591","record_id":"<urn:uuid:662e4eed-e146-4f80-a68c-342fba86d10b>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00647.warc.gz"}
Finding figures of constant width on a chessboard

27 April 2009

Constant width figure of type (11,11,3)

We will say a collection of squares on an nxn board is a figure of constant width w if every row, column and diagonal (both main diagonals) intersects the collection in exactly w squares or does not intersect it at all. Such a figure will contain exactly k*w squares and we will say it is of type @(n,k,w)@. Note that the figures of type @(n,n,1)@ are the solutions to the n-queens problem.

Finding figures of constant width with the computer is an interesting problem. This problem was first described here. The following is a backtracking algorithm that enumerates all connected figures of type @(n,_,w)@. I use the word connected in the following sense. Imagine each point in the figure is a teleportation unit and you are allowed to jump between any two points lying on the same row or column or any of the two main diagonals. A figure will be connected if there is a path of jumps between any two points in it. For example, the n-queens solutions (figures of constant width 1) are totally disconnected; so this program does not find them.

We will start by defining, for every finite subset S of the infinite board, the function

f_S(p) = ( |S ∩ row(p)|, |S ∩ column(p)|, |S ∩ diag1(p)|, |S ∩ diag2(p)| )

where p is a point on the board, row(p) is the row containing p, column(p) the column containing p and so on. This function is locally defined, meaning that it satisfies the following expression:

where :

The algorithm assumes f_S(p) = (w,w,w,w) for every p belonging to the solution S; it then attempts to recursively satisfy the above mentioned locally defined property. This process either converges to a figure of type (n,_,w) or escapes the nxn board boundary. Here is the link to the full F# program (cwfigures.fs) and below is the function @find@ that performs the search strategy.
let find len F availables buffer =
    let rec search sol len (avlen,avail) (bufflen,buff) =
        match buff with
        |_ when avlen + bufflen < len -> () //not enough left to complete a solution
        |[] -> F sol //found one!
        |p::tail ->
            let (r,rows), (c,columns), (d1,diag1), (d2,diag2), remaining = collect p avail
            if r < state.r p || c < state.c p || d1 < state.pd p || d2 < state.nd p
            then () //can not satisfy local constraint
            else
                (search (p::sol) (len-1)
                 |> (prune_before_continue remaining
                     >> combinator diag2 (state.nd p)
                     >> combinator diag1 (state.pd p)
                     >> combinator columns (state.c p)
                     >> combinator rows (state.r p)))
                    (bufflen - 1, tail)
    search [] len availables buffer;;

While there are other backtracking strategies, some of them more aesthetically appealing, this was the fastest combination I could come up with. To generate all figures of type (11,_,3) it took roughly 31 days on a Windows XP virtual machine hosted on Linux running on a quad-core. The same program running on the Linux host (with Mono 2.0.1) performed about twice as slowly. There are 21 solutions of type (11,11,3) and 1 solution of type (11,10,3), modulo the symmetries of the square. You can see them here
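For illustration, here is a small Python sketch (not part of the original F# program) of the per-point counts that the algorithm's local constraint f_S(p) = (w,w,w,w) is built on — the number of figure squares on the row, column and two diagonals through a point p:

```python
def f_S(S, p):
    # counts of squares of S on the row, column and the two
    # diagonals through p; a figure of constant width w has
    # f_S(p) == (w, w, w, w) at every p in S
    x, y = p
    row = sum(1 for (a, b) in S if b == y)
    col = sum(1 for (a, b) in S if a == x)
    d1 = sum(1 for (a, b) in S if a - b == x - y)
    d2 = sum(1 for (a, b) in S if a + b == x + y)
    return (row, col, d1, d2)

# a single square is the trivial figure of constant width 1
print(f_S({(0, 0)}, (0, 0)))   # (1, 1, 1, 1)

# a full 3x3 block is NOT of constant width: the diagonal
# through (1, 0) meets it in only 2 squares
block = {(i, j) for i in range(3) for j in range(3)}
print(f_S(block, (1, 1)))      # (3, 3, 3, 3)
print(f_S(block, (1, 0)))      # (3, 3, 2, 2)
```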
{"url":"http://ademar.name/blog/2009/04/finding-figures-of-constant-wi.html","timestamp":"2024-11-03T21:57:55Z","content_type":"text/html","content_length":"15536","record_id":"<urn:uuid:a49b6039-7552-453d-be03-2216af682e68>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00829.warc.gz"}
Algebra 2 Calculus

The site will focus on the high school (and middle school) math progression. What do students learn? Why do they learn it? Does it make sense? Does it succeed? What should they be learning? How should they be learning it? These are big questions, high-stakes questions, and questions that are at some level unanswerable. Nevertheless, we send an entire generation through the high school math system every year, so somehow they have de facto answers. I'll start slowly, building a foundation of understanding. Here, as a starting point, is a listing of high school math learning progressions.

High school math arguably starts in middle school. Generally around 6th or 7th grade, students start getting separated into different math tracks.

The normal fast track:
6th grade - pre-algebra
7th grade - algebra I
8th grade - algebra II
9th grade - geometry / trigonometry
10th grade - pre-calculus
11th grade - calculus
12th grade - statistics

The fastest track goes:
6th grade - pre-algebra
7th grade - algebra I
8th grade - algebra II
9th grade - geometry / trigonometry
10th grade - calculus
11th grade - calculus AP
12th grade - multivariate calculus

The normal track:
6th grade - 6th grade math
7th grade - pre-algebra
8th grade - algebra I
9th grade - algebra II
10th grade - geometry / trigonometry
11th grade - pre-calculus
12th grade - calculus or statistics

A slower track:
6th grade - 6th grade math
7th grade - 7th grade math
8th grade - pre-algebra (foundations of algebra)
9th grade - algebra I
10th grade - algebra II
11th grade - geometry
12th grade - calculus or statistics
{"url":"http://www.algebra2calculus.com/2011/","timestamp":"2024-11-01T19:56:10Z","content_type":"text/html","content_length":"23785","record_id":"<urn:uuid:0b1dc4e5-ddf2-42f1-9f97-620fc7f5305e>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00049.warc.gz"}
Convert a 2D point spread function to a 2D optical transfer function.

[xSfGridCyclesDeg,ySfGridCyclesDeg,otf] = PsfToOtf([xGridMinutes,yGridMinutes],psf,varargin)

Converts a point spread function specified over two-dimensional positions in minutes to an optical transfer function specified over spatial frequency in cycles per degree. For human vision, these are each natural units.

The input positions should be specified in Matlab's grid matrix format, and x and y should be specified over the same spatial extent and with the same number of evenly spaced samples. Position (0,0) should be at location floor(n/2)+1 in each dimension. The OTF is returned with spatial frequency (0,0) at location floor(n/2)+1 in each dimension. Spatial frequencies are returned using the same conventions.

If you want the spatial frequency representation to have frequency (0,0) in the upper left, as seems to be the more standard Matlab convention, apply ifftshift to the returned value. That is:

otfUpperLeft = ifftshift(otf);

And then if you want to put it back in the form for passing to our OtfToPsf routine, apply fftshift:

otf = fftshift(otfUpperLeft);

The isetbio code (isetbio.org) thinks about OTFs in the upper-left format, at least for its optics structure, which is one place where you'd want to know this convention.

No normalization is performed. If the phases of the OTF are very small (less than 1e-10) the routine assumes that the input psf was spatially symmetric around the origin and takes the absolute value of the computed otf so that the returned otf is real.

We wrote this rather than simply relying on Matlab's otf2psf/psf2otf because we don't quite understand how those shift the position of the passed psf, and because we want a routine that deals with the conversion of spatial support to spatial frequency support.

If you pass both position args as empty, both sf grids are returned as empty and just the conversion on the OTF is performed.
PsychOpticsTest shows that this works very well when we go back and forth for the diffraction-limited OTF/PSF. But not exactly perfectly. A signal processing maven might be able to track down whether this is just a numerical thing or whether there is some small error, for example in how position is converted to sf or back again in OtfToPsf.

See also OtfToPsf, PsychOpticsTest.
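The shift conventions above can be illustrated outside of Matlab. The following NumPy sketch (a stand-in for the Psychtoolbox routine, with an arbitrary Gaussian psf) centres the psf at 0-based index floor(n/2) — location floor(n/2)+1 in Matlab's 1-based indexing — moves the origin to the upper left with ifftshift before the FFT, and recentres frequency (0,0) with fftshift afterwards:

```python
import numpy as np

n = 64
centre = n // 2  # 0-based index of the (0,0) position / frequency

# an arbitrary symmetric Gaussian psf centred at (centre, centre)
y, x = np.mgrid[0:n, 0:n]
psf = np.exp(-((x - centre) ** 2 + (y - centre) ** 2) / (2 * 3.0 ** 2))
psf /= psf.sum()

# centred psf -> origin at upper left -> FFT -> frequency (0,0) recentred
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))

# for a psf symmetric about the origin the otf phases are numerically
# tiny, which is when the routine described above returns a real otf
print(np.abs(otf.imag).max())    # ~ 0
print(otf[centre, centre].real)  # DC value = sum of the psf = 1
```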
{"url":"http://psychtoolbox.org/docs/PsfToOtf","timestamp":"2024-11-02T20:33:54Z","content_type":"text/html","content_length":"8064","record_id":"<urn:uuid:e3fd84c3-7cc4-4e3c-aade-9921d372715f>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00747.warc.gz"}
MathGroup Archive: August 1997

[00392] Re: Re: Wrong behavior of CrossProduct

• To: mathgroup at smc.vnet.net
• Subject: [mg8044] Re: [mg7996] Re: [mg7958] Wrong behavior of CrossProduct
• From: seanross at worldnet.att.net
• Date: Sat, 2 Aug 1997 22:32:52 -0400
• Sender: owner-wri-mathgroup at wolfram.com

> Sean Ross wrote:
> ... Looking at the problem you gave mathematica in spherical coordinates
> you specified V={a1,a2,0}, which is a displacement vector beginning at the
> origin and going to the point V. You then wanted to cross it with the
> vector U={0,0,1}, which is a displacement vector beginning at the origin
> and ending at the origin, so you took a cross product between two vectors,
> one of which had a zero magnitude. The answer given by mathematica was
> correct for DISPLACEMENT VECTORS. This makes perfect mathematical sense,
> but is ludicrous from a physical standpoint since all cross-products that
> appear in physical equations are for field vectors, not displacements.

Richard W. Finley, M. D. wrote:
> Sean,
> Regarding the message below, perhaps I missed something....as far as I know
> a vector with zero magnitude is the zero vector, regardless of whether you
> consider it a displacement vector or a field vector, and the cross product
> of any vector with this zero vector should be zero. This would seem to be
> the only interpretation that makes mathematical OR physical sense.

You raise a good point and put your finger on a subtlety that escapes most people. The vector v={0,b,c} in spherical coordinates represents a vector of zero length for displacement vectors. Consider, however, the case of a gradient field, such as an electric field. Certainly we could conceive of an electric field that, at some point in space, had no radial component, but only a theta or phi component. The magnitude of the field would not be zero just because the radial component was zero.
Most mechanics and electrodynamics textbooks pass over this subtlety because physical cross products that occur in nature don't involve displacement vectors; they involve field vectors and vector differential operators which occur at a local point in space.

A look at the standard, tensor-notation way of writing cross products reveals another, often overlooked point:

AxB = g_ij epsilon_ijk A_i B_j

The metric tensor g_ij is the identity matrix in cartesian coordinates, but has components that are a function of r and theta for spherical and cylindrical coordinates, so you can't convert cross products back and forth between cartesian and other coordinate systems without specifying at what point in space the two vectors exist. The mathematica result assumes that the vectors exist at the origin, which is another reason there is a discrepancy between mathematica cross products and what might be expected.
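Sean Ross's point — that a field vector whose radial component is zero need not be the zero vector — can be made concrete with a small NumPy sketch (the point and component values are arbitrary): express the spherical components of a field vector attached at a point in the local orthonormal basis (e_r, e_theta, e_phi) and take the magnitude.

```python
import numpy as np

def spherical_field_to_cartesian(theta, phi, F_r, F_theta, F_phi):
    # Cartesian components of a field vector whose spherical
    # components (F_r, F_theta, F_phi) are attached at a point
    # with angular coordinates (theta, phi)
    st, ct = np.sin(theta), np.cos(theta)
    sp, cp = np.sin(phi), np.cos(phi)
    e_r = np.array([st * cp, st * sp, ct])
    e_theta = np.array([ct * cp, ct * sp, -st])
    e_phi = np.array([-sp, cp, 0.0])
    return F_r * e_r + F_theta * e_theta + F_phi * e_phi

# a field with zero radial component at an arbitrary point off the origin
F = spherical_field_to_cartesian(np.pi / 3, np.pi / 4, 0.0, 1.0, 0.0)
print(np.linalg.norm(F))  # unit magnitude -- not a zero vector
```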
{"url":"https://forums.wolfram.com/mathgroup/archive/1997/Aug/msg00392.html","timestamp":"2024-11-03T17:12:19Z","content_type":"text/html","content_length":"46375","record_id":"<urn:uuid:18e1307e-e8f5-4229-9223-a9fec4b0e79c>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00106.warc.gz"}
void FIdWinHamm (double win[], int N, double a)

Generate a generalized Hamming window

A generalized Hamming window is specified by the following equation,

                              2 pi i
   win[i] = (1-a) - a cos ( -------- ) ,   i = 0, ..., N-1
                               N-1

The parameter a is 0.46 for a conventional Hamming window, 0.5 for a full raised-cosine window, and 0 for a rectangular window. Note that for the full raised-cosine window, the two end points of the window are zero.

Define the effective window length as the length of a rectangular window which has the same energy as the Hamming window. Then the effective length of the Hamming window is

   L = N - 2a(N+1) + a^2 (3N+5)/2 ,   for N > 3,
       2 - 8a + 8a^2 ,                for N = 2,
       1 ,                            for N = 1 (win[0] = 1).

 <-  double win[]
        Array containing the window values
 ->  int N
        Number of window values
 ->  double a
        Window parameter; a=0.46 for a conventional Hamming window, a=0.5 for a full raised-cosine window, a=0 for a rectangular window. The window is non-negative for 0 <= a <= 0.5.

Author / revision: P. Kabal / Revision 1.3 2003/05/09

See Also: Main Index, libtsp
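A short Python sketch of the window formula (an illustration, not the libtsp C code itself) makes the endpoint behaviour easy to check: with a = 0.5 both end points are zero, and with a = 0 the window is rectangular.

```python
import math

def gen_hamming(N, a):
    # generalized Hamming window: win[i] = (1-a) - a*cos(2*pi*i/(N-1))
    if N == 1:
        return [1.0]  # special case noted above: win[0] = 1
    return [(1 - a) - a * math.cos(2 * math.pi * i / (N - 1)) for i in range(N)]

w_hamming = gen_hamming(9, 0.46)  # conventional Hamming window
w_raised = gen_hamming(9, 0.5)    # full raised-cosine window
w_rect = gen_hamming(9, 0.0)      # rectangular window

print(w_raised[0], w_raised[-1])  # end points are zero
print(w_hamming[4])               # centre value (1-a) + a = 1
```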
{"url":"https://mmsp.ece.mcgill.ca/Documents/Software/Packages/libtsp/FI/FIdWinHamm.html","timestamp":"2024-11-14T21:02:55Z","content_type":"text/html","content_length":"1877","record_id":"<urn:uuid:896f6cfc-83cb-4ab5-bc5f-1ea299e6ac0d>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00092.warc.gz"}
Computing Methods - Acid Base Tutorial

The page describes the logic utilized to generate the text reports, as well as the development of the necessary equations and iterative subroutines required to convert the raw data into the diagrams on the screen.

Technique for the text Interpretations

1. Mathematics. See the Equations below. The pH and PCO2 are used to calculate: Standard Base Excess (SBE) and Bicarbonate.

2. Radial Search. See the Computing Strategy below. The Acid-Base diagram is searched to find: Location of the Zone; Magnitude of the 2 components.

Typical Zones

This diagram shows the radial zones employed to generate the sentence fragments. The numbers correspond to the radial search used to generate the code.

• 0. Normal
• 1. Pure Metabolic Acidosis
• 3. Metabolic Alkalosis (Compensated)
• 5. Chronic Respiratory Acidosis (Compensated)
• 7. Acute Respiratory Acidosis
• 10. Pure Metabolic Alkalosis
• 12. Metabolic Acidosis (Compensated)
• 14. Chronic Respiratory Alkalosis (Compensated)
• 16. Acute Respiratory Alkalosis

The expanding family of rectangles determines the choice of adjectives used to describe the degree of acidosis and alkalosis: Normal; Minimal; Mild; Moderate; Marked; Severe. The corners of these rectangles correspond to the slope for pH = 7.4.

Computing Strategy. The computer program conducts a radial search of the diagram to determine which sector (0 – 18) contains the data point. The sector determines which component, respiratory or metabolic, is dominant and therefore to be reported first. The adjective and the direction (acidosis or alkalosis) are derived from the value for each component. A final descriptive phrase is included when the location is characteristic of a chronic or an acute disturbance.

Grogono Equation. [H+] (30.17 + BE) = 22.63 (PCO2 + 13.33)

A position on the diagram generates X and Y coordinates (PCO2 and SBE). An initial approximation is essential.
Without it, the iterative process often diverges instead of converging. These equations provide a first approximation, e.g., to obtain bic from BE and PCO2:

[H+] x (30.17 + BE) = 22.63 x (PCO2 + 13.33)
bic = (BE + 30.17) / (0.94292 + 12.569 / PCO2)

Siggaard-Andersen Equation. It is a pleasure to thank Dr. Severinghaus for giving me these equations, which are used in iterative procedures to obtain successively better approximations:

SBE = 0.9287 (bic – 24.4 + 14.83 (pH – 7.4)), which can be simplified to:
SBE = 0.9287 * bic + 13.77 * pH – 124.58
bic = BE/0.9287 – 14.83 * pH + 134.142

Modified Henderson Equation. This is the equation used to derive [HCO3–] from pH and PCO2:

[H+] x [HCO3–] = 24 x PCO2

Iterative Procedure. Moving the mouse over the diagram generates values for PCO2 and SBE. The following Javascript code shows how these equations were employed to derive accurate bicarbonate values:

function PCO2andBEtoBIC() {
  bic = (BE + 30.17) / (0.94292 + 12.569 / PCO2); // bic approximation via Grogono equation
  for (ii = 0; ii < 6; ii++) {     // iterative procedure six times
    H = BICandPCO2toH();           // [H+] via Modified Henderson Equation
    bic = (bic + BEandHtoBIC())/2; // split old value and new Siggaard-Andersen
  }
  return bic;                      // return bic
}

function BEandPHtoPCO2() {
  return Math.exp((9-pH)*2.302585) * ((BE - 13.77 * pH + 124.578)/0.9287) / 24;
}

function BICandPCO2toH() {
  return (24*PCO2/bic); // Modified Henderson Equation
}

When this website was introduced, Java was employed to run the diagrams and calculations. It was less than satisfactory originally and became more of a problem recently when constant updates were required. Javascript was adopted instead in 2017 and appears to provide the same, or better, functionality. Any advice or suggestions from Javascript experts will be appreciated.
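For comparison, the same iteration can be sketched in Python. The helper below assumes (based on the equations listed above, not on the site's actual source) that BEandHtoBIC inverts the simplified Siggaard-Andersen relation after converting [H+] in nmol/L to pH via pH = 9 - log10([H+]):

```python
import math

def pco2_and_be_to_bic(pco2, be, iterations=6):
    # first approximation via the Grogono equation
    bic = (be + 30.17) / (0.94292 + 12.569 / pco2)
    for _ in range(iterations):
        h = 24 * pco2 / bic                           # Modified Henderson: [H+] in nmol/L
        ph = 9 - math.log10(h)                        # pH from [H+]
        new_bic = be / 0.9287 - 14.83 * ph + 134.142  # Siggaard-Andersen, rearranged
        bic = (bic + new_bic) / 2                     # split old and new values
    return bic

# a normal blood gas: PCO2 = 40 mmHg, SBE = 0 -> bicarbonate near 24 mEq/L
print(round(pco2_and_be_to_bic(40.0, 0.0), 2))
```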
For the clinician, the three variables of greatest use are the pH, PCO[2], and standard base excess (SBE). Stewart’s approach may be justified in examining and managing the more complex disturbances.
{"url":"https://acid-base.com/computing-methods","timestamp":"2024-11-04T00:40:24Z","content_type":"text/html","content_length":"109801","record_id":"<urn:uuid:6ba83915-f467-4a2b-9dc3-00defd22325c>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00754.warc.gz"}
Unexpected Behaviour with .sub in fenicsx

Hey there,

The following code generates a line mesh from (0,0) to (1,0) and solves a simple FEM problem that can be interpreted as a simple bar of length, cross-section and Young's modulus = 1, where the point (0,0) is fixed and a force of magnitude 0.1 pulls on the right-hand side (at (1,0)) in positive x-direction.

import ufl
from dolfinx import io, fem, mesh
from mpi4py import MPI
from petsc4py import PETSc
import gmsh
import numpy as np

meshSize = 0.05
point1 = gmsh.model.geo.add_point(0.,0.,0.,meshSize)
point2 = gmsh.model.geo.add_point(1.,0.,0.,meshSize)
edge1 = gmsh.model.geo.add_line(point1,point2)
edges = [edge1]
model1 = gmsh.model.add_physical_group(1,edges)
domain = io.ufl_mesh_from_gmsh(1,2)
geometry_data = io.extract_gmsh_geometry(gmsh.model)
topology_data = io.extract_gmsh_topology_and_markers(gmsh.model)
cells = topology_data[1]["topology"]
nodes = geometry_data[:,:2]
struct_mesh = mesh.create_mesh(MPI.COMM_WORLD, cells, nodes, domain)

Vu = fem.VectorFunctionSpace(struct_mesh,("CG",1))
du = ufl.TrialFunction(Vu)
u_ = ufl.TestFunction(Vu)

# bilinear form
aM = ufl.inner(ufl.grad(du),ufl.grad(u_))*ufl.dx

# "fixing" node at (0,0)
b_dofs_l = fem.locate_dofs_geometrical(Vu, lambda x : (np.isclose(x[0],0.)) & (np.isclose(x[1],0.)))
bc_left = fem.dirichletbc(fem.Constant(struct_mesh,np.array([0.,0.])),b_dofs_l,Vu)

# fixing y-dof at (1,0)
b_dofs_r = fem.locate_dofs_geometrical(Vu, lambda x : (np.isclose(x[0],1.) & (np.isclose(x[1],0.))))
bc_right = fem.dirichletbc(fem.Constant(struct_mesh,0.),b_dofs_r,Vu.sub(1))

bcs = [bc_left]#, bc_right]

# assemble lhs
A = fem.petsc.assemble_matrix(fem.form(aM),bcs=bcs)

# create solver
solver = PETSc.KSP().create(MPI.COMM_WORLD)

# create empty rhs-vector
LM = ufl.inner(u_,fem.Constant(struct_mesh,np.array([0.,0.])))*ufl.dx
b = fem.petsc.create_vector(fem.form(LM))

# applying a force to the node at the very right, at (1,0)
xForce = 0.1
b.setValue(b_dofs_r[0]*2,xForce) # accessing x-component of node b_dofs_r

# create a function to write the solution into
u = fem.Function(Vu)

If I run the code like this, I get the expected result: The value of u increases linearly from 0. to .1 – exactly like the (trivial) analytical solution.

If I change the line bcs = [bc_left]#, bc_right] to bcs = [bc_left, bc_right], I expect that the boundary condition bc_right is applied too. This BC would constrain the y-dof at the end node ((1,0)). In this case this shouldn't have an effect on the solution, but sets my engineering mind (which is concerned with static/kinematic determinacy considerations) at peace.

However, the result is not as expected at all. The u-function is constant (value 0.) between (0,0) and (0.5,0) and then increases linearly to 0.05 up to the point (1,0). It seems like instead of constraining the y-dof of (1,0), bc_right constrained the x-dof of (0.5,0). How can this be explained?

PS: Another question: I noticed that b_dofs_r is 20 in this case. However, node 20 in the array nodes is at (0.95,0) – while node 1 (the second one) is at (1.,0.). Are the nodes reshuffled when the mesh is imported from gmsh to fenicsx?

You are not using the correct dofs in your boundary condition.
Consider the following modification: from IPython import embed import ufl from dolfinx import io, fem, mesh from dolfinx.io import gmshio from mpi4py import MPI from petsc4py import PETSc import gmsh import numpy as np meshSize = 0.05 point1 = gmsh.model.geo.add_point(0., 0., 0., meshSize) point2 = gmsh.model.geo.add_point(1., 0., 0., meshSize) edge1 = gmsh.model.geo.add_line(point1, point2) edges = [edge1] model1 = gmsh.model.add_physical_group(1, edges) struct_mesh, _, _ = gmshio.model_to_mesh(gmsh.model, MPI.COMM_WORLD, 0, gdim=2) Vu = fem.VectorFunctionSpace(struct_mesh, ("CG", 1)) du = ufl.TrialFunction(Vu) u_ = ufl.TestFunction(Vu) # bilinear form aM = ufl.inner(ufl.grad(du), ufl.grad(u_))*ufl.dx # "fixing" node at (0,0) b_dofs_l = fem.locate_dofs_geometrical(Vu, lambda x: ( np.isclose(x[0], 0.)) & (np.isclose(x[1], 0.))) bc_left = fem.dirichletbc(fem.Constant( struct_mesh, np.array([0., 0.])), b_dofs_l, Vu) # fixing y-dof at (1,0) V_1, _ = Vu.sub(1).collapse() b_dofs_r = fem.locate_dofs_geometrical((Vu.sub(1), V_1), lambda x: ( np.isclose(x[0], 1.) & (np.isclose(x[1], 0.)))) bc_right = fem.dirichletbc(fem.Constant( struct_mesh, 0.), b_dofs_r[0], Vu.sub(1)) bcs = [bc_left, bc_right] # assemble lhs A = fem.petsc.assemble_matrix(fem.form(aM), bcs=bcs) # create solver solver = PETSc.KSP().create(MPI.COMM_WORLD) # create empty rhs-vector LM = ufl.inner(u_, fem.Constant(struct_mesh, np.array([0., 0.])))*ufl.dx b = fem.petsc.create_vector(fem.form(LM)) # applying a force to the node at the very right, at (1,0) xForce = 0.1 b.setValue(b_dofs_r[1]*2, xForce) # accessing x-component of node b_dofs_r fem.petsc.set_bc(b, bcs=bcs) # create a function to write the solution into u = fem.Function(Vu) solver.solve(b, u.vector) with io.XDMFFile(struct_mesh.comm, "u.xdmf", "w") as xdmf: Ah yes, like this it makes perfect sense! Thank you very much!
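A side note on the `*2` trick used in `b.setValue(b_dofs_r[...]*2, xForce)`: it assumes an interleaved layout in which the two components of each node sit next to each other in the global vector. The sketch below illustrates only that indexing assumption, without dolfinx; it is not a guarantee of dolfinx's actual dof ordering, which is exactly what the PS question is about.

```python
# interleaved ("blocked") layout of a 2-component vector space:
# [u_x(node0), u_y(node0), u_x(node1), u_y(node1), ...]
def flat_dof(node, component, n_components=2):
    """Flat index of one component's dof at a mesh node (illustrative layout)."""
    return node * n_components + component

# this is the arithmetic behind multiplying a node index by 2:
node = 20
x_dof = flat_dof(node, 0)  # x-component of node 20
y_dof = flat_dof(node, 1)  # y-component of node 20
assert x_dof == 2 * node and y_dof == 2 * node + 1
```

If the mesh import reorders nodes, the node index you read from the gmsh arrays no longer matches this flat index, which is why locating dofs geometrically (as in the answer) is the safer route.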
Hypothesis Testing

This lesson will focus on the basics of hypothesis testing and how to perform different types of hypothesis tests.

Hypothesis testing

During our analysis of different datasets, we are often concerned with questions like: Do males default more than females? Do self-driving cars crash more than normal cars? Does drug X help prevent/treat disease Y? To answer these questions, we can use another statistical technique known as Hypothesis Testing. During data exploration, we discovered interesting patterns hidden in the data. Hypothesis testing enables us to confirm whether these patterns were present in the data by luck or because of some real underlying effect.

Null and Alternate hypothesis

The aim of the hypothesis test is to determine whether the null hypothesis can be rejected or not. The null hypothesis is a statement that assumes that nothing interesting is going on: no relationship is present between two variables, or there is no difference between a sample and a population. For instance, if we suspect that males default more than females, the null hypothesis would be that males do not default more than females. If there is little or no evidence against the null hypothesis, we fail to reject the null hypothesis. Otherwise, we reject the null hypothesis in favor of the alternate hypothesis, which states that something interesting is going on: there is a relationship between two variables, or the sample is different from the population. To reiterate, the null hypothesis is assumed true and statistical evidence is required to reject it in favor of the alternative hypothesis.
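The defaults-by-gender question above can be framed as a two-proportion z-test. Here is a minimal sketch; the counts and the 5% significance threshold are illustrative assumptions, not data from the lesson.

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for H0: p1 == p2, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# made-up counts: 90 of 1000 males default vs 60 of 1000 females
z, p = two_proportion_ztest(90, 1000, 60, 1000)
reject_null = p < 0.05  # illustrative 5% significance level
```

With these made-up counts the p-value falls below 0.05, so we would reject the null hypothesis that the two default rates are equal.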
MOIC vs. DPI: Do You Understand the Difference? | financestu

The difference is that MOIC takes into account the value of both realized and unrealized gains and compares it to the invested capital. Thus, it is a gross measure of return that doesn't take into account fund fees. Meanwhile, DPI compares the capital investors have given the fund (paid-in capital) to the capital they have received back through distributions.

At first glance, the two metrics appear similar. But a closer look reveals key differences:

MOIC vs. DPI Comparison Table

That's the gist of it. For a more comprehensive answer, keep reading:

MOIC Definition

The Multiple on Invested Capital (MOIC) measures the value of an investment relative to the initial investment. As the name implies, it is expressed as a multiple. It is mostly used in private equity to assess the performance of funds. Private equity funds gather together capital from investors and acquire equity in companies in hopes of selling that equity stake for more at a future date once the company has grown in value. A MOIC over 1.00x means the portfolio companies grew in value. Anything below 1 means underperformance. The higher the MOIC, the better. Unlike the internal rate of return (IRR), the multiple on invested capital ignores the time value of money. Here's how to calculate MOIC:

MOIC Formula

The MOIC calculation is as follows:

\[MOIC=\frac{Distributions+Residual\ Value}{Invested\ Capital}\]

• Distributions (realized value): All the capital the fund has paid back (distributed) to investors as a result of exits from portfolio companies. Early in the fund's life distributions are usually low, increasing over time as investments are exited.
• Residual value (unrealized value): The estimated fair value of all the fund's active investments (those it hasn't sold yet).
It is generally higher early in the fund's life cycle and falls over time as the fund exits its investments and distributes capital to investors. It reaches zero once the fund is liquidated.

• Invested capital: The cumulative amount of money the fund has effectively invested in portfolio companies.

The sum of distributions and the residual value is the total fund value at that point in time. Now, what is DPI?

DPI Definition

The Distributed to Paid-In Capital (DPI) measures how much money the investors of a fund have received relative to how much they put in. In other words, it is the multiple of the money investors committed to the fund. It tells you how much an investor has received back in their bank account for every $1 they originally sent to the fund. Just like the MOIC, the higher the better. Here's how to calculate DPI:

DPI Formula

The DPI calculation is as follows:

\[DPI=\frac{Distributions}{Paid\text{-}In\ Capital}\]

• Distributions: As we've seen before, it's the realized gains returned to investors.
• Paid-in capital: The cumulative called capital investors have transferred to the fund. The fund uses the paid-in capital primarily to invest in portfolio companies, but also to pay management fees. Paid-in capital is not the same as invested capital.

DPI increases as the fund finalizes exits and distributes capital to investors. Once it makes all distributions, the fund's DPI becomes equal to the TVPI. Now we're ready to drill down into the difference between DPI and MOIC:

Difference between MOIC and DPI

Both MOIC and DPI evaluate the performance of the underlying assets in a fund. The distinction is that DPI focuses on realized gains. As a result, MOIC is a more general assessment of overall fund performance. Everyone is interested in knowing the MOIC of a PE fund, from analysts, to investors, to the fund itself. On the other hand, DPI tells you, as an investor, what you have received back relative to how much you put in.
Individual investors are the ones most interested in this metric. In the denominator the MOIC uses the invested capital while the DPI uses the paid-in capital. The difference? Fund managers use the paid-in capital to buy equity in companies. But they also use it to cover fund fees and general expenses. This means the DPI already takes into account those extra costs. It is a measure of returns net of fees, as the extra costs increase the denominator and lower the multiple. This is why investors care so much about the DPI. It tells them how much cash has actually been returned to them, which in the end is what investing is all about. Cash in, cash out.

Meanwhile, the MOIC only considers the portion of paid-in capital used to buy equity in portfolio companies. Invested capital doesn't account for the fund's fee structure. The meaning of this? MOIC is a pure measure of how good the fund managers are at picking startups to invest in. That's it. This highlights, again, the idea that DPI is more investor-focused, while MOIC is a general fund performance metric everyone has an interest in.

Now, one more thing: both metrics hold more value towards the end of the fund. Why? They will both fluctuate over time as exits materialize and the fund gets a better or worse multiple here and there. In particular, the MOIC has the element of the estimated value of unrealized returns. For each individual holding, the general partner will estimate its value every quarter through industry-standard valuation methods. The number will be higher or lower, but never the exact true figure, because it's an estimate of the value of an asset that hasn't been sold.

Let's go through a quick example to put these concepts in motion:

MOIC vs. DPI Example

Consider a private equity fund where the investors committed $100M.

MOIC and DPI Example

Highlighted in blue you have the items that go toward the distributed to paid-in capital ratio. And in orange the items for the multiple on invested capital.
As expected, DPI is zero for the first few years. This is because there are no distributions in those early years. Instead, the fund is focused on executing its investment strategy, and letting those investments play out. As the years go by and fund analysts notice changes in the performance of the portfolio companies, they adjust the estimates that make up the residual value accordingly. In this case, the investment strategy is working well. DPI increases as a result. Also, notice most of the committed capital is called early in the fund's life cycle, when it is making new investments.

In the end, the DPI is always lower than the MOIC, because unlike the MOIC it takes into account the administrative costs of running a private equity fund. Other metrics investors care about include the Total Value to Paid-In Capital (TVPI) and the Residual Value to Paid-In Capital (RVPI). They help you get a sense of how the total value of the fund shifts from being mostly residual value early on to mostly distributions to investors near the end.

Key Takeaways (FAQs)

Is MOIC the same as DPI?

No. The MOIC tells you how good the investment strategy of a fund is by comparing the total value of the fund (realized and unrealized) to the invested capital. DPI, on the other hand, compares how much the fund has distributed back to investors relative to how much the investors put in the fund. So it is a measure of the net return for investors.

What does MOIC tell you?

The MOIC tells you how much a fund has generated in the form of capital gains and distributions for every dollar it has invested in portfolio companies.

What is DPI in private equity?

The DPI reflects the capital a fund has returned to investors at a given point in time. These distributions occur over the life of the PE fund as exits occur and holdings are liquidated.
Once the fund is fully liquidated and proceeds are distributed to investors, the distributions used to calculate DPI will represent the total return on the investor’s investment. Thus, DPI is a way to assess the fund manager’s success at returning capital to investors.
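The two formulas are easy to sanity-check in code. The sketch below uses illustrative dollar figures, not the article's example table:

```python
def moic(distributions, residual_value, invested_capital):
    """Multiple on Invested Capital: (realized + unrealized) / invested."""
    return (distributions + residual_value) / invested_capital

def dpi(distributions, paid_in_capital):
    """Distributed to Paid-In: capital returned / capital contributed."""
    return distributions / paid_in_capital

# illustrative end-of-life fund: everything exited, fees already paid
distributions = 180.0  # $M returned to investors
residual = 0.0         # fund fully liquidated
invested = 90.0        # $M actually put into portfolio companies
paid_in = 100.0        # $M called from investors (includes fees)

fund_moic = moic(distributions, residual, invested)
fund_dpi = dpi(distributions, paid_in)
```

With these figures MOIC comes out at 2.00x and DPI at 1.80x: the DPI is lower because the fees sit only in its denominator.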
Solution: Gliding Against the Wind Puzzle

The Puzzle: Gertie the snail is training for the Snail Olympics in a wind tunnel. If she can glide \(20 \text{ cm}\) in \(2 \text{ hours}\) with wind assistance, and takes \(3 \text{ hours}\) to cover the same distance against the wind, how quickly can she cover the \(20 \text{ cm}\) distance if there is no wind?

The Solution: It's very tempting to say that Gertie's average speed is \(\dfrac{20 + 20}{2 + 3} = \dfrac{40}{5} = 8 \text{ cm/h}\), so she'll take \(\dfrac{20}{8} = 2 \; \dfrac{1}{2}\) hours to glide the \(20\) cm distance if there's no wind. Unfortunately, this is incorrect!

The problem is that she only travels for \(2\) hours with assistance from the wind, and \(3\) hours against the wind, so taking the average speed won't give us an accurate idea of how long Gertie would take to glide \(20\) cm without any wind at all. We need to be a bit cleverer and work out the distances she would travel in equal times in each direction!

What should we do? Well, we can work out how far Gertie can glide with the wind in \(3\) hours. This will give us a total distance travelled in \(6\) hours, which should cancel out the effects of the wind, and we can use this to find the time it will take Gertie to glide for \(20\) centimetres without the wind. If it takes Gertie \(2\) hours to glide \(20\) cm with the wind, she should be able to glide \(1.5\) times that distance (\(30 \text{ cm}\)) in \(3\) hours.
So, her speed without any wind should be given by

\[
\begin{aligned}
\text{average speed} &= \dfrac{\text{distance travelled}}{\text{time taken}} \\
&= \dfrac{30 + 20}{3 + 3} \\
&= \dfrac{50}{6} \\
&= 8 \; \dfrac{1}{3} \text{ cm/h}
\end{aligned}
\]

We can use this speed to calculate the time taken to glide the \(20\) cm distance without any wind:

\[
\begin{aligned}
\text{time} &= \dfrac{\text{distance}}{\text{speed}} \\
&= \dfrac{20}{8\;\dfrac{1}{3}} \\
&= \dfrac{60}{25} \\
&= \dfrac{12}{5} \\
&= 2.4 \text{ hours.}
\end{aligned}
\]

So, it would take Gertie \(2.4\) hours to glide \(20\) cm without the wind.

This series of puzzles is for Year 10 or higher students; it tests your skills and trains you in problem solving and thinking outside the box.

Year 10 students or higher

Learning Objectives: Solving puzzles

Author: Subject Coach
Added on: 28th Sep 2018
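The arithmetic in the solution can be verified in a few lines using exact fractions (this check is an addition, not part of the original puzzle page):

```python
from fractions import Fraction

with_wind = Fraction(20, 2)     # 10 cm/h gliding with the wind
against_wind = Fraction(20, 3)  # 20/3 cm/h gliding against it

# the wind contribution cancels in the average of the two speeds
still_air_speed = (with_wind + against_wind) / 2
time_hours = Fraction(20) / still_air_speed

assert still_air_speed == Fraction(25, 3)  # 8 1/3 cm/h
assert time_hours == Fraction(12, 5)       # 2.4 hours
```

Averaging the two speeds works here because equal *times* in each direction are compared, which is exactly the adjustment the solution makes.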
Printable Cylinder Net

Introduce your child to geometry and 3 dimensional shapes with this quick 3D cylinder printable, meant to build your little learner's knowledge and vocabulary. On this worksheet, you'll find a net of a cylinder that features two circles and a connecting rectangle. The net has a number of glue tabs that make it easy to construct a 3D model. Color and black & white versions are included.

What makes a cylinder a cylinder? How does it compare to a cube or cone? An insight into this topic helps you know the properties of the net of a cylinder and construct it. Use this geometry resource with your students to practice identifying the characteristics of a cylinder, and use the square centimeters printed on each surface of the cylinder's net to find the surface area of the cylinder. You can also use this printable pattern with your students to help them construct their own cylinder.

Download this printable 3D cylinder net (vector or PNG with transparent background) for free. Alphabetically listed printable shapes, nets and patterns: 3D geometric shapes, print ready to cut and fold, to assist with visual and practical learning.
Interest Rate Calculator

Welcome to the Interest Rate Calculator. Follow these steps to calculate your loan payments and view the loan amortization schedule:

1. Enter the loan's principal amount in the "Principal Amount" field. This is the initial amount you're borrowing.
2. Enter the annual interest rate in the "Interest Rate" field. This should be in percentage form.
3. Specify the loan's time period in years in the "Time Period" field. This is the duration of your loan.
4. Choose the number of payments you'll make per year from the "Payments Per Year" dropdown. For example, if you make monthly payments, select "Monthly."
5. Click the "Calculate" button to compute your monthly payment amount.

Loan Amortization Schedule columns:

Payment # | Payment Date | Payment Amount | Principal Payment | Interest Payment | Remaining Balance
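The payment computed in step 5 follows the standard level-payment annuity formula. Here is a minimal sketch of the same calculation; the function name and the example figures are illustrative, not taken from the calculator itself:

```python
def periodic_payment(principal, annual_rate_pct, years, payments_per_year=12):
    """Level payment for a fully amortizing loan (standard annuity formula)."""
    r = annual_rate_pct / 100 / payments_per_year  # interest rate per period
    n = years * payments_per_year                  # total number of payments
    if r == 0:
        return principal / n  # zero-interest loans amortize linearly
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# e.g. $100,000 at 5% annual interest for 30 years, paid monthly
pmt = periodic_payment(100_000, 5, 30)
```

For these figures the payment comes out to about $536.82 per month; each row of the amortization schedule then splits that amount into interest (remaining balance times the per-period rate) and principal (the rest).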
Upper bound on the decimation factor M

8 years ago · 8 replies · latest reply 8 years ago · 213 views

Dear experts,

Hi, I am a Chinese student at BeiHang Uni. I am now self-studying DSP. I had difficulty understanding a constraint on the #Decimation factor M in a #Multirate Narrowband Filter. I don't understand why the decimation factor must satisfy (8.5.2). I read an article which says it's to achieve a gain in computational efficiency (http://www.mds.com/wp-content/uploads/2014/05/multirate_article.pdf, page 9) and I still had no clue. Could you give me a hint? Thanks in advance.

[ - ] Reply by ● August 18, 2016

Computational efficiency comes from two places - when you down sample you can skip over blocks of data so you only apply your filter every M samples, and second from the number of taps you need to create the filter. The bigger M is, the fewer computations per input sample you need to do. But if $F_0$ is really small, the number of taps is large. So I think there has to be a balance in the choice of M.

If M is big, you only have to do a filter step once every M input samples, so it saves you computation effort. The best thing to do is to create an example - build a simple filter and change a few parameters so you can see what the math really tells you. Send in a simple square wave and see what comes out of the filter. Change the parameters, and compare what comes out. It should not take long to understand the formulas after that!

Dr. mike

[ - ] Reply by ● August 18, 2016

In the article at the link you provided, there is an expression: "To achieve gain in computational efficiency, the following must hold: B < Fs/4". I'm sure you know that B < Fs/2 is just the Nyquist constraint. And, here, you want to reduce the bandwidth to allow a lower sample rate. Factors of 2 are the most common in doing this. So, the first decimation factor of 2 yields B < Fs/4.

[ - ] Reply by ● August 18, 2016

Dr. mike, do you agree with me on this? I think the upper bound is fs/(2F0) for M. Here are my reasons. Here fs stands for the sampling rate, and F0 is the cutoff frequency of the narrowband filter, which is fixed and needs to be met. After down sampling, the sampling rate is fs/M, so if the bandwidth of the signal is lower than fs/(2M), it won't cause aliasing. Therefore, a lowpass filter with a cutoff frequency F0 less than fs/(2M) is possible. From F0 < fs/(2M), I get M < fs/(2F0), twice the upper bound in the book. Do I make sense?

[ - ] Reply by ● August 18, 2016

Making sense can also be wrong :-) It is possible you are right, and the book has a typo - that happens to every author. But there are 2 steps in the filter process, one is down sample and one is up sample. I suspect the other factor of 2 comes from the desire to prevent artefacts from appearing in the up-sampled spectrum. If all you were doing is changing rate down, then you would be right.

Try it. Build a filter and feed in a square wave. If you get a sine wave out, the filter works. If you get noise - you violated a condition and that extra factor of 2 is there for a reason.

Dr. mike

[ - ] Reply by ● August 18, 2016

The document (page 9) says B must be below Fs/4 for the design to be more efficient than using a single-rate filter. Imagine:

B = 0.5 Fs (M = 1, i.e. no decimation)
B = 0.4 Fs (M = 2, not efficient relative to a single-rate filter)
B = 0.125 Fs (M = 4, efficiency starts here)
B = 0.0625 Fs (M = 8, better)

and so on.

[ - ] Reply by ● August 18, 2016

Thank you for your time. But how can the second one not be efficient relative to a single-rate filter? Could you shed some light on this?

[ - ] Reply by ● August 18, 2016

B = 0.5 Fs (M = 1, no decimation possible)
B = 0.25 Fs (M = 2, not efficient relative to a single-rate filter)
B = 0.125 Fs (M = 4, efficiency starts here)
B = 0.0625 Fs (M = 8, better)

The author has compared the efficiency and came out with some figures. There is really nothing rock solid here. I assume M needs to be an integer of 4 or more to be useful. Remember we are comparing one filter (single-rate case) with two filters (multirate case). So try your own study with various B and various filter specs to see the difference.
Spinning Block Self Torquing is small The torque a spinning block exerts on itself is smaller than it should be even after you take into account the fact that the moment of inertia of a block is larger by a factor of 2. When you spin a block about neither the largest eigenvector nor the smallest eigenvector of the moment of inertia, the block will torque itself. The torque is very small. This torque might be computed for a “Hollow Block” instead of a solid block, as noted here: Blocks Moment of Inertia is Wrong - #2 by Khanovich
Area Moments of Inertia.xls

Area Moments of Inertia for some standard shapes

Calculation Reference

The area moment of inertia, also known as the second moment of area, is a property of a shape that measures its resistance to bending and deflection. Here are the area moments of inertia for some standard shapes:

1. Rectangle:
Ix = (b * h^3) / 12
Iy = (h * b^3) / 12
Where: Ix = moment of inertia about the x-axis; Iy = moment of inertia about the y-axis; b = width of the rectangle; h = height of the rectangle.

2. Circle:
I = (pi * d^4) / 64 = (pi * r^4) / 4
Where: I = moment of inertia about the central axis; d = diameter of the circle; r = radius of the circle.

3. Hollow Circle (Circular Ring):
I = (pi * (d_o^4 - d_i^4)) / 64 = (pi * (r_o^4 - r_i^4)) / 4
Where: I = moment of inertia about the central axis; d_o/r_o = outer diameter/radius of the ring; d_i/r_i = inner diameter/radius of the ring.

4. I-Beam (doubly symmetric, about the centroidal x-axis):
Ix = (b_f * (h_w + 2*h_f)^3 - (b_f - b_w) * h_w^3) / 12
Where: b_f = width of the flanges; h_f = height (thickness) of each flange; b_w = width (thickness) of the web; h_w = height of the web. This treats the section as a full rectangle minus the two side cut-outs, which correctly captures the parallel-axis contribution of the flanges; simply summing (b_f * h_f^3 + b_w * h_w^3) / 12 would understate Ix.

5. Triangle:
Ix = (b * h^3) / 36
Iy = (h * b^3) / 48 (isosceles triangle, about the axis of symmetry)
Where: Ix = moment of inertia about the centroidal x-axis; Iy = moment of inertia about the centroidal y-axis; b = base of the triangle; h = height of the triangle.

These formulas provide the area moments of inertia for various standard shapes, which are essential for calculating stresses, deflections, and other mechanical properties in structural and mechanical engineering applications.

17 years ago
Clear and concise calculations for area moments of simple shapes.
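A few of the formulas above, sketched as small functions together with a quick cross-check of the hollow-circle identity (a ring's second moment is the outer disc minus the inner disc):

```python
import math

def rect_ix(b, h):
    """Second moment of a b x h rectangle about its centroidal x-axis."""
    return b * h**3 / 12

def circle_i(d):
    """Second moment of a solid circle of diameter d about a central axis."""
    return math.pi * d**4 / 64

def ring_i(d_o, d_i):
    """Second moment of a circular ring: outer disc minus inner disc."""
    return math.pi * (d_o**4 - d_i**4) / 64

# cross-checks against the listed formulas
assert math.isclose(circle_i(2.0), math.pi * 1.0**4 / 4)          # d = 2r
assert math.isclose(ring_i(2.0, 1.0), circle_i(2.0) - circle_i(1.0))
assert math.isclose(rect_ix(3.0, 2.0), 2.0)                        # 3*8/12
```

The same subtraction idea underlies the I-beam formula: the section is a full rectangle with the two side regions beside the web removed.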
{"url":"https://www.excelcalcs.com/calcs/repository/Geometry/Areas/Area-Moments-of-Inertia_xls/","timestamp":"2024-11-06T12:27:34Z","content_type":"text/html","content_length":"27385","record_id":"<urn:uuid:dd6034e7-6562-4de4-8b97-caea76babde6>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00313.warc.gz"}
Principled extremizing of aggregated forecasts — EA Forum

In short: In light of a recent result by Eric Neyman, I tentatively recommend extremizing aggregated forecasts by a factor $d$, which is approximately $\sqrt{3} \approx 1.73$ for a large number of forecasters $n$:

$$\hat{o} = \left(\prod_{i=1}^{n} o_i\right)^{d/n}$$

When historical resolution rates are available and reasonably constant through time, I more hesitantly recommend using the aggregate:

$$\hat{l} = l_0 + d\left(\frac{1}{n}\sum_{i=1}^{n} l_i - l_0\right)$$

where $l_0$ are the historical logarithmic odds of a positive resolution.

Neyman's result is theoretical and it is not clearly applicable in this context, but I show it has good performance on Metaculus data. The forecast aggregation war continues.

Extreme forecasting

Extremizing is the practice of adjusting aggregated forecasts towards an extreme - either 0% or 100% probability. If the experts' predictions are odds $o_1, \dots, o_n$, then an extremized aggregation of the odds is:

$$\hat{o} = \left(\prod_{i=1}^{n} o_i^{1/n}\right)^{d}$$

for a given extremization factor $d$.

Extremizing is a common practice in academic contexts. It is usually justified on the grounds that different experts have access to different information (Baron et al, 2014). And indeed it has been shown to have good performance in practice. Satopää et al (2014) showed that forecasts in a collection of geopolitical questions are optimally aggregated when using an extremizing factor greater than one.

However there are reasons to be skeptical of extremizing. It introduces a degree of freedom in the choice of the extremizing parameter. Results like Satopää's, where an optimal extremizing factor is derived in hindsight, risk overfitting this parameter. Before making a blanket recommendation of extremizing, we would prefer to have a grounded way of choosing an extremizing factor.

Why I became an extremist

Neyman and Roughgarden (2021) have taken a close look at the theory of forecast aggregation and extremizing [1]. They suggest and analyse an unconventional extremizing method, where they move the aggregate estimate away from its baseline value.
Concretely, their estimate as applied to log-odds aggregation would be

$$\hat{l} = (1 - d)\, l_0 + d\, \bar{l}, \quad \text{where } \bar{l} = \frac{1}{n}\sum_{i=1}^{n} l_i,$$

which can be rearranged as

$$\hat{l} = l_0 + d\,(\bar{l} - l_0)$$

or as

$$\hat{l} = \bar{l} + (d - 1)\,(\bar{l} - l_0).$$

This is the same as classical extremizing when we assume $l_0 = 0$ (a 1:1 baseline), though as we will see later it might be better to use a historical estimate.

Neyman and Roughgarden show that when using their recommended extremizing factor this estimate outperforms (in a certain sense) the log-odds average. As $n$ grows, the recommended extremizing factor approaches $\sqrt{3} \approx 1.73$. In practice, the approximation is already pretty good for moderate numbers of forecasters [2].

In which sense does the extremized prediction perform better? The authors' analysis is done in terms of what they call the approximation ratio. This measures how close the aggregated estimate gets to the square loss of the idealized prediction of an expert with access to everyone's information [3]. The authors find that under the projective substitutes condition their aggregation scheme performs better than a simple average, in terms of the approximation ratio [4].

What is this projective substitutes condition? Essentially, it states that there are diminishing marginal returns to more forecasts [5]. I think this is a plausible assumption in the context of forecast aggregation, though it is not a guarantee [6].

Does the recommended aggregation strategy perform well in practice? Yes it does. I looked at 899 resolved binary questions in Metaculus and compared several aggregation methods. The results of this analysis are included in the appendix. In short, when assuming $l_0 = 0$, Neyman's extremizing factor outperforms in this dataset most other methods I tried, and it is on par with an optimized constant extremizing rate and with the current Metaculus prediction method.

But that is not all. When I used a baseline $l_0$ equal to the log odds of the resolution rate of the currently resolved binary questions in Metaculus, the results significantly outperform all other methods I tried, including the current Metaculus prediction [7].

In conclusion: I harbored some doubts about extremizing.
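To make the recommendation concrete, here is a minimal Python sketch of this aggregation scheme (the function name and defaults are mine, not the paper's; d = 1.73 is the large-n limit of the recommended factor):

```python
import math

def neyman_aggregate(probs, d=1.73, p0=0.5):
    """Extremize the mean of log odds away from a baseline log odds.

    probs: individual forecasts as probabilities in (0, 1).
    d:     extremization factor (approx. sqrt(3) for many forecasters).
    p0:    baseline probability; p0 = 0.5 recovers classical extremizing.
    """
    logit = lambda p: math.log(p / (1 - p))
    l0 = logit(p0)
    l_bar = sum(logit(p) for p in probs) / len(probs)
    l_hat = l0 + d * (l_bar - l0)        # move away from the baseline
    return 1 / (1 + math.exp(-l_hat))    # back to a probability

print(neyman_aggregate([0.7, 0.8, 0.75]))  # ≈ 0.87, more extreme than any input
```

With p0 = 0.5 the formula reduces to plain extremizing of the mean of log odds; with d = 1 it reduces to the unextremized geometric mean of odds.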
It was common practice and there was some empirical evidence in its favour. But I lacked a convincing argument to rule out overfitting as the reason for the increased performance.

This has now changed. In Neyman and Roughgarden (2021) I find both a sound argument in favour and a recipe to automatically choose an extremizing rate. In response, I now tentatively recommend extremizing the average log odds as a default method for aggregating, using Neyman's method to choose an extremizing factor [8].

I am more hesitant to recommend the more complex extremization method where we use the historical baseline resolution log-odds $l_0$ and aggregate forecasts as:

$$\hat{l} = l_0 + d\left(\frac{1}{n}\sum_{i=1}^{n} l_i - l_0\right)$$

I think I would recommend this in cases where there is convincing evidence of a relatively constant resolution rate through time. For example, I believe this is the case for Metaculus binary questions.

Note that a big part of my recommendation relies on my idiosyncratic reading of Neyman and Roughgarden's results, the assumption that log-odds forecast aggregation satisfies the projective substitutes condition and the empirical performance on Metaculus data. While my beliefs have changed enough to change my best-guess recommendation, I do not see the question as settled. Further theoretical and empirical evidence could easily change this conclusion, either showing how this result does not apply or coming up with a better result.

Many thanks to Eric Neyman for writing his paper and helpfully answering my questions about it. I thank Simon M for the script and discussion of previous work, and Lawrence Phillips for reproducing the results and finding a mistake. My work is financially supported by the Open Philanthropy Project.

Appendix: Testing Neyman's method on Metaculus data

I used 899 resolved binary questions in Metaculus to study the empirical performance of Neyman's suggested method for choosing an extremizing factor.
| Method | Weighted | Brier | -log | Questions |
|---|---|---|---|---|
| Neyman aggregate (p=0.36) | Yes | 0.106 | 0.340 | 899 |
| Extremized mean of logodds (d=1.55) | Yes | 0.111 | 0.350 | 899 |
| Neyman aggregate (p=0.5) | Yes | 0.111 | 0.351 | 899 |
| Extremized mean of probabilities (d=1.60) | Yes | 0.112 | 0.355 | 899 |
| Metaculus prediction | Yes | 0.111 | 0.361 | 774 |
| Mean of logodds | Yes | 0.116 | 0.370 | 899 |
| Neyman aggregate (p=0.36) | No | 0.120 | 0.377 | 899 |
| Median | Yes | 0.121 | 0.381 | 899 |
| Extremized mean of logodds (d=1.50) | No | 0.126 | 0.391 | 899 |
| Mean of probabilities | Yes | 0.122 | 0.392 | 899 |
| Neyman aggregate (o=1.00) | No | 0.126 | 0.393 | 899 |
| Extremized mean of probabilities (d=1.60) | No | 0.127 | 0.399 | 899 |
| Mean of logodds | No | 0.130 | 0.410 | 899 |
| Median | No | 0.134 | 0.418 | 899 |
| Mean of probabilities | No | 0.138 | 0.439 | 899 |
| Baseline (p = 0.36) | N/A | 0.230 | 0.652 | 899 |

In the table above I show the performance of several aggregation methods: the mean of log odds, the mean of probabilities, the median, the extremized average of log odds and Neyman's proposed aggregation method. I include unweighted and weighted versions of each - for the weighted version we weight each expert's prediction by its recency, following the same procedure as Metaculus.

To compute the Neyman aggregation, we use the formula $\hat{l} = l_0 + d\,(\bar{l} - l_0)$, where $d$ is Neyman's recommended extremizing factor (approaching $\sqrt{3}$ as the number of respondents grows). I used both an uninformative prior ($p_0 = 0.5$) and the actual resolution rate among the questions ($p_0 \approx 0.36$) to derive the baseline log odds $l_0$. For the extremized average of log odds shown I chose the extremization factors that approximately minimize the log score. I also include the score of the default Metaculus aggregation and the baseline score we would have gotten with a constant prediction equal to the mean resolution of the questions ($p \approx 0.36$). I included the Brier score and the log loss score - lower scores are better.

Note that the Neyman aggregation performed quite well. When assuming a zero baseline log odds it performs better than all simple methods. It also (barely) outperforms the Metaculus aggregation in terms of log score (though not in terms of Brier score).
When assuming a baseline log odds that matches the actual empirical rate it outperforms all other methods I tried. This is slightly misleading, since we only have access to the empirical resolution rate in hindsight. It is still quite encouraging.

The script to replicate my findings is here. It is based off Simon M's script here.

[1] Note that in the paper the aggregation is discussed in the context of estimating real-valued quantities. Here I am independently arguing that their framework can reasonably apply to the context of estimating discrete log odds and plausibly continuous log densities.

[2] Note that this factor falls within the confidence interval of optimal extremizing factors found by Satopää et al.

[3] Suppose we are trying to forecast the odds of an event. Each of $n$ experts is granted a piece of evidence that they use to elicit a forecast $f_i$. We then summarize their beliefs with an aggregate forecast $\hat{f}$. Our goal is to compare how good this aggregate forecast is with respect to the forecast $f^*$ an expert would make if they had access to all the information. The approximation ratio of an aggregate estimator measures, in terms of expected square loss, how much of the gap between a naive baseline and the idealized forecast $f^*$ the aggregate closes; it equals 1 for the idealized forecast itself.

[4] The relevant part is Theorem 4.1. We contrast this result with Theorem 3.1, which shows the approximation ratio in the case of a simple average. In short, they find that their extremizing method achieves a strictly better approximation ratio than a non-extremized average (the approximations hold for large $n$). According to the authors' analysis this is the best possible extremizing method among those that are linear in the average forecast, in terms of optimizing the approximation ratio. But better non-linear methods might exist, or other results for more appropriate measures of optimality.
[5] Speaking loosely, it means that the information gap between a group of experts sharing all their evidence and a strict subset of it would decrease if we were to add another expert to both the group and the subset.

[6] For example, in Ord's Jack, Queen and King example the projective substitutes condition does not hold, since the joint information of players A and B complements each other. The weaker weak substitutes condition does not hold either.

[7] Using the resolution baseline log odds is in a sense cheating, since I used the results of the questions to estimate this baseline. But this is an encouraging result assuming that the positive resolution rate (PRR) is roughly constant over time. To investigate this, I ran a bootstrap study on the 995 resolved binary questions on Metaculus. I resampled with replacement 995 questions B=100,000 times and computed the PRR, which gives a 90% bootstrap confidence interval for it. I also studied the rolling positive resolution rate: I computed the PRR up to the ith question, for each i, and derived a 90% confidence interval from these rolling rates. Both of these results are weak evidence that the PRR in Metaculus is relatively stable.

[8] Note that Eric Neyman himself raised a few concerns when I showed him a draft of the post:
1. It is not clear that the approximation ratio is the thing we care most about when aggregating forecasts. Neyman remarks that the KL divergence between the aggregation and the "true odds" would be a better metric to optimize.
2. It is not clear what "prior/baseline log-odds" means in the context of aggregating forecasts. Recall that this method was developed to aggregate point estimates of real values. We are taking a license when assuming the analysis would apply to aggregating log odds as well.
3. Ultimately, he argues that the extremization factor to use should be empirically derived. These theoretical results provide guidance in choosing how to aggregate results, but they are no substitute for empirical evidence.
I eagerly look forward to his future papers.

Neyman, Eric, and Tim Roughgarden. 2021. ‘Are You Smarter Than a Random Expert? The Robust Aggregation of Substitutable Signals’. arXiv:2111.03153 [cs], November. http://arxiv.org/abs/2111.03153.

Satopää, Ville A., Jonathan Baron, Dean P. Foster, Barbara A. Mellers, Philip E. Tetlock, and Lyle H. Ungar. 2014. ‘Combining Multiple Probability Predictions Using a Simple Logit Model’. International Journal of Forecasting 30 (2): 344–56. https://doi.org/10.1016/j.ijforecast.2013.09.009.

Baron, Jonathan, Barb Mellers, Philip Tetlock, Eric Stone, and Lyle Ungar. 2014. ‘Two Reasons to Make Aggregated Probability Forecasts More Extreme’. Decision Analysis 11 (June): 133–45.

Sevilla, Jaime. 2021. ‘My Current Best Guess on How to Aggregate Forecasts’. EA Forum, 6 October 2021. https://forum.effectivealtruism.org/posts/acREnv2Z5h4Fr5NWz/

Sevilla, Jaime. 2021. ‘When Pooling Forecasts, Use the Geometric Mean of Odds’. EA Forum, 3 September 2021. https://forum.effectivealtruism.org/posts/sMjcjnnpoAQCcedL2/

Hi! I'm an author of this paper and am happy to answer questions. Thanks to Jsevillamol for the summary! A quick note regarding the context in which the extremization factor we suggest is "optimal": rather than taking a Bayesian view of forecast aggregation, we take a robust/"worst case" view. In brief, we consider the following setup: (1) you choose an aggregation method. (2) an adversary chooses an information structure (i.e. joint probability distribution over the true answer and what partial information each expert knows) to make your aggregation method do as poorly as possible in expectation (subject to the information structure satisfying the projective substitutes condition). In this setup, the 1.73 extremization constant is optimal, i.e. maximizes worst-case performance. That said, I think it's probably possible to do even better by using a non-linear extremization technique.
Concretely, I strongly suspect that the less variance there is in experts' forecasts, the less it makes sense to extremize (because the experts have more overlap in the information they know). I would be curious to see how low a loss it's possible to get by taking into account not just the average log odds, but also the variance in the experts' log odds. Hopefully we will have formal results to this effect (together with a concrete suggestion for taking variance into account) sometime soon :)

Thanks for chipping in Alex!

> It's the other way around for me. Historical baseline may be somewhat arbitrary and unreliable, but so is 1:1 odds.

Agreed! To give some nuance to my recommendation, the reason I am hesitant is mainly because of lack of academic precedent (as far as I know).

> If the motivation for extremizing is that different forecasters have access to independent sources of information to move them away from a common prior, but that common prior is far from 1:1 odds, then extremizing away from 1:1 odds shouldn't work very well.

Note that the data backs this up! Using "pseudo-historical" odds is quite a bit better than using 1:1 odds. See the appendix for more details.

> [...] use past estimates of the same question. [...] use the odds that experts gave it at some point in the past as a baseline with which to interpret more recent odds estimates provided by experts.

I'd be interested in seeing the results of such experiments using Metaculus data!

> Another possibility is to use two pools of forecasters [...]

This one is trippy, I like it!

> I am more hesitant to recommend the more complex extremization method where we use the historical baseline resolution log-odds

It's the other way around for me. Historical baseline may be somewhat arbitrary and unreliable, but so is 1:1 odds.
If the motivation for extremizing is that different forecasters have access to independent sources of information to move them away from a common prior, but that common prior is far from 1:1 odds, then extremizing away from 1:1 odds shouldn't work very well, and historical baseline seems closer to a common prior than 1:1 odds does. I'm interested in how to get better-justified odds ratios to use as a baseline. One idea is to use past estimates of the same question. For example, suppose metaculus asks "Does X happen in 2030", and the question closes at the end of 2021, and then it asks the exact same question again at the beginning of 2022. Then the aggregated odds that the first question closed at can be used as a baseline for the second question. Perhaps you could do something more sophisticated, like, instead of closing the question and opening an identical one, keep the question open, but use the odds that experts gave it at some point in the past as a baseline with which to interpret more recent odds estimates provided by experts. Of course, none of this works if there hasn't been an identical question asked previously, and the question has been open for a short amount of time. Another possibility is to use two pools of forecasters, both of which have done calibration training, but one of which consists of subject-matter experts, and the other of which consists of people with little specialized knowledge on the subject matter, and ask the latter group not to do much research before answering. Then the aggregated odds of the non-experts can be used as a baseline when aggregating odds given by the experts, on the theory that the non-experts can give you a well-calibrated prior because of their calibration training, but won't be taking into account the independent sources of knowledge that the experts have.
{"url":"https://forum-bots.effectivealtruism.org/s/hjiBqAJNKhfJFq7kf/p/biL94PKfeHmgHY6qe","timestamp":"2024-11-03T19:51:52Z","content_type":"text/html","content_length":"1049038","record_id":"<urn:uuid:6da5134b-c390-4024-b7b0-22364d83b457>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00053.warc.gz"}
Point Process JavaScript Program Point Process MCMC JavaScript program, by Jeffrey Rosenthal. Description at the bottom. This program runs a random-scan component-wise Metropolis MCMC algorithm on a spatial point process hardcore model for a fixed number of particles, with initial distribution uniform over the pink region, and with target density (with respect to uniform) proportional to exp(-H), where H(z_1, z_2, ..., z_N) = A * sum_{i<j} |z_i − z_j| + B * sum_{i<j} (1 / |z_i − z_j|) + C * sum_i (z_i1) where |...| is Euclidean distance, z_i is the position of the i'th particle, and z_i1 is the first (x) coordinate of z_i. The program accepts the following keyboard inputs (or you can mouse click): • Use the numbers '0' though '9' to set the animation speed level higher or lower. (Note that 0=frozen, and 1=one-step. Alternatively, use 'f' or 's' or 'o' for faster/slower/one.) • Use 'A' and 'a' to increase/decrease the value of A, and similarly 'B' and 'b' for B, and 'C' and 'c' for C. (Yes, negative values are allowed.) • Use 'r' to restart the simulation, or 'z' to just zero the counts. (The initial distribution is always independent uniform placement.) • Use 'p' and 'm' to increase/decrease the size ("rad") of the proposal increments. • Use '+' and '−' to increase/decrease the number N of particles (and restart the simulation). • Use 't' to toggle between (S)ystematic or (R)andom scan type. • Use 'l' or 'L' to toggle showing a thin red line at the right-most particle. This program is written by Jeffrey Rosenthal. See also my other JavaScript, my Stochastic Processes book, and my Java Applets.
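For readers without a browser handy, the core update can be sketched in Python (a simplified rendition of the logic described above: it omits the pink-region boundary check and all of the animation and keyboard handling):

```python
import math, random

def H(z, A, B, C):
    """Energy of a configuration z = [(x, y), ...] of particles."""
    e = C * sum(x for x, _ in z)                  # C * sum_i z_i1
    for i in range(len(z)):
        for j in range(i + 1, len(z)):
            d = math.dist(z[i], z[j])             # Euclidean distance
            e += A * d + B / d
    return e

def metropolis_step(z, A, B, C, rad=0.1):
    """One random-scan Metropolis move on the target density exp(-H)."""
    i = random.randrange(len(z))                  # pick a random particle
    old, e_old = z[i], H(z, A, B, C)
    z[i] = (old[0] + random.uniform(-rad, rad),   # propose a small shift
            old[1] + random.uniform(-rad, rad))
    if random.random() >= math.exp(min(0.0, e_old - H(z, A, B, C))):
        z[i] = old                                # reject: restore old position
    return z
```

Recomputing H in full on every step is quadratic in N; the actual applet only needs the terms involving the moved particle, but the full recomputation keeps the sketch short.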
{"url":"https://probability.ca/jeff/js/pointproc.html","timestamp":"2024-11-02T18:30:19Z","content_type":"text/html","content_length":"16505","record_id":"<urn:uuid:1f1f5ac2-2a14-4f1c-b241-f276398aec6e>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00109.warc.gz"}
Coherence for Monoidal and Symmetric Monoidal Groupoids in Homotopy Type Theory

Doctoral thesis

Homotopy Type Theory (HoTT) is a variant of Martin-Löf Type Theory (MLTT) developed in such a way that types can be interpreted as infinity-groupoids, where the iterated construction of identity types represents the different layers of higher path space objects. HoTT can be used as a foundation of mathematics, and the proofs produced in its language can be verified with the aid of specific proof assistant software. In this thesis, we provide a formulation and a formalization of coherence theorems for monoidal and symmetric monoidal groupoids in HoTT. In order to design 1-types FMG(X) and FSMG(X) representing the free monoidal and the free symmetric monoidal groupoid on a 0-type X of generators, we use higher inductive types (HITs), which apply the functionality of inductive definitions to the higher groupoid structure of types given by the identity types. Coherence for monoidal groupoids is established by showing a monoidal equivalence between FMG(X) and the 0-type list(X) of lists over X. For symmetric monoidal groupoids, we prove a symmetric monoidal equivalence between FSMG(X) and a simpler HIT slist(X) based on lists, whose paths and 2-paths make for an auxiliary symmetric structure on top of the monoidal structure already present on list(X). Part of the thesis is devoted to the proof that the subuniverse BS_* of finite types is equivalent to the type slist(1), where 1 is the unit type, and hence that the former is a free symmetric monoidal groupoid. As an intermediate step, we show a symmetric monoidal equivalence between slist(1) and an indexed HIT del_* of deloopings of symmetric groups. The proof of a symmetric monoidal equivalence between del_* and BS_* rests on a few unformalized statements.
Assuming this equivalence, we are able to prove that, in a free symmetric monoidal groupoid, all diagrams involving symmetric monoidal expressions without repetitions commute. This work is accompanied by a computer verification in the proof assistant Coq, which covers most of the results we present in this thesis.

The University of Bergen. Copyright the Author.
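To give a flavour of the constructions involved, the free monoidal groupoid can be presented schematically as a HIT along the following lines, in informal HoTT-style notation (the constructor names here are illustrative and need not match the thesis's Coq development):

```
FMG (X : Type), a 1-type, generated by:
  point constructors
    η : X → FMG X                        -- inclusion of generators
    e : FMG X                            -- monoidal unit
    _⊗_ : FMG X → FMG X → FMG X          -- tensor product
  path constructors
    α : (a ⊗ b) ⊗ c = a ⊗ (b ⊗ c)       -- associator
    λ : e ⊗ a = a                        -- left unitor
    ρ : a ⊗ e = a                        -- right unitor
  2-path constructors
    Mac Lane's pentagon and triangle coherences
  truncation constructor
    1-truncation, making FMG X a 1-type
```

Coherence then amounts to the monoidal equivalence FMG(X) ≃ list(X): in list(X) the structural paths are all trivial, so every diagram built from them commutes.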
{"url":"https://bora.uib.no/bora-xmlui/handle/11250/2830640","timestamp":"2024-11-12T12:25:34Z","content_type":"text/html","content_length":"23636","record_id":"<urn:uuid:99e215ff-62c0-4135-be88-5b5e4ebe442d>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00143.warc.gz"}
Albert Y. Kim Albert Y. Kim is Assistant Professor of Statistical & Data Sciences at Smith College in Northampton MA. He is a co-author of the fivethirtyeight R package and ModernDive an online textbook for introductory data science and statistics. His research interests include spatial epidemiology and model assessment and selection methods for forest ecology. Previously he worked on the Search Ads Metrics Team at Google as well as at Reed, Middlebury, and Amherst Colleges. R package of datasets and code published by the data journalism website FiveThirtyEight R package of data sets from ‘Mathematical Statistics with Resampling in R’ (2011) by Laura Chihara and Tim Hesterberg.
{"url":"http://rudeboybert.rbind.io/","timestamp":"2024-11-10T17:57:07Z","content_type":"text/html","content_length":"24999","record_id":"<urn:uuid:66879828-e0e6-4556-8814-868a4898ce65>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00732.warc.gz"}
Undergraduate Programme and Module Handbook 2013-2014 (archived)

Module PHYS4251 : FOUNDATIONS OF PHYSICS 4A

Department: Physics

Type: Open. Level: 4. Credits: 20. Availability: Available in 2013/14. Module Cap: None. Location: Durham.

Prerequisites
• Foundations of Physics 2A (PHYS2581) AND (Mathematical Methods in Physics (PHYS2611) OR Analysis in Many Variables (MATH2031)).

Excluded Combination of Modules
• Foundations of Physics 3A (PHYS3621).

Aims
• This module is designed primarily for students studying Department of Physics or Natural Sciences degree programmes.
• It is designed partly for the benefit of students taking certain MSci Joint Honours degrees and partly for any physics students who undertook their third year abroad and could not match the corresponding learning outcomes at the host institution.
• It builds on the Level 2 modules Foundations of Physics 2A (PHYS2581) and Mathematical Methods in Physics (PHYS2611) by providing courses on Quantum Mechanics and Nuclear and Particle Physics.
• It develops transferable skills in researching a topic at an advanced level and making a written presentation on the findings.
• The syllabus contains: • Quantum Mechanics: Introduction to many-particle systems (wave function for systems of several particles, identical particles, bosons and fermions, Slater determinant); the variational method (ground state, excited states, trial functions with linear variational parameters); the ground state of two-electron atoms; the excited states of two-electron atoms (singlet and triplet states, exchange splitting, exchange interaction written in terms of spin operators); complex atoms (electronic shells, the central-field approximation); the Born-Oppenheimer approximation and the structure of the hydrogen molecular ion, vibrational motion, the rigid rotator and rotational energy levels of molecules; the van der Waals interaction; time-dependent perturbation theory; Fermi’s Golden Rule (applications: photoionization, the dielectric function of semiconductors); periodic perturbations; two-level systems with harmonic perturbation, Rabi flopping; the sudden approximation; the Schrödinger equation for a charged particle in an electromagnetic field; the dipole approximation; transition rates for harmonic perturbations; absorption and stimulated emission; Einstein coefficients; spontaneous emission; quantum jumps; selection rules for electric dipole transitions; lifetimes, line intensities, widths and shapes; the ammonia maser and lasers; the interaction of particles with a static magnetic field (spin and magnetic moment, particle of spin one-half in a uniform magnetic field, charged particles with uniform magnetic fields; Larmor frequency; Landau levels); one-electron atoms in magnetic fields (the Zeeman effect from strong field to weak field, calculation of the Landé g-factor); magnetic resonance. 
• Nuclear and Particle Physics: Fundamental Interactions, symmetries and conservation Laws, global properties of nuclei (nuclides, binding energies, semi-empirical mass formula, the liquid drop model, charge independence and isospin), nuclear stability and decay (beta-decay, alpha-decay, nuclear fission, decay of excited states), scattering (elastic and inelastic scattering, cross sections, Fermi’s golden rule, Feynman diagrams), geometric shapes of nuclei (kinematics, Rutherford cross section, Mott cross section, nuclear form factors), elastic scattering off nucleons (nucleon form factors, quasi elastic scattering), deep inelastic scattering (nucleon excited states, structure functions, the parton model), quarks, gluons, and the strong interaction (quark structure of nucleons, quarks in hadrons, the quark-gluon interaction, scaling violations), particle production in electron–positron collisions (lepton pair production, resonances, gluon emission), phenomenology of the weak interaction (weak interactions, families of quarks and leptons, parity violation, deep inelastic neutrino scattering), exchange bosons of the weak interaction (real W and Z bosons, electroweak unification), the Standard Model, quarkonia (analogy with Hydrogen atom and positronium, Charmonium, quark–antiquark potential), hadrons made from light quarks (mesonic multiplets, baryonic multiplets, masses and decays), the nuclear force (nucleon–nucleon scattering, the deuteron, the nuclear force), the structure of nuclei (Fermi gas model, shell Model, predictions of the shell model). Learning Outcomes Subject-specific Knowledge: • Having studied this module students will be familiar with some of the key results of quantum mechanics including perturbation theory and its application to atomic physics and the interaction of atoms with light. 
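Two of the closed-form results named in the syllabus, Rabi's flopping formula and the semi-empirical mass formula, can be sketched in Python; the function names and the numerical coefficients below are illustrative textbook values, not anything prescribed by the module:

```python
import math

def rabi_probability(t, omega1, delta=0.0):
    """Two-level transition probability P(t) under a harmonic drive.

    omega1: Rabi frequency; delta: detuning (common textbook notation).
    """
    omega = math.hypot(omega1, delta)        # generalized Rabi frequency
    return (omega1 / omega) ** 2 * math.sin(omega * t / 2) ** 2

def semf_binding_energy(A, Z):
    """Semi-empirical mass formula (liquid drop model), in MeV.

    Coefficients are typical textbook values.
    """
    a_v, a_s, a_c, a_a, a_p = 15.8, 18.3, 0.714, 23.2, 12.0
    N = A - Z
    B = (a_v * A - a_s * A ** (2 / 3)
         - a_c * Z * (Z - 1) / A ** (1 / 3)
         - a_a * (N - Z) ** 2 / A)
    if Z % 2 == 0 and N % 2 == 0:
        B += a_p / math.sqrt(A)              # even-even pairing bonus
    elif Z % 2 == 1 and N % 2 == 1:
        B -= a_p / math.sqrt(A)              # odd-odd pairing penalty
    return B

print(rabi_probability(math.pi, 1.0))        # 1.0: full inversion on resonance
print(semf_binding_energy(56, 26) / 56)      # ≈ 8.8 MeV per nucleon (Fe-56)
```

Off resonance the oscillation amplitude drops to (omega1/omega)^2 < 1, which is why magnetic resonance experiments sweep the detuning through zero.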
• They will be able to describe the properties of nuclei and how nucleons interact and have an appreciation of the key ingredients of the Standard Model of particle physics.

Subject-specific Skills:
• In addition to the acquisition of subject knowledge, students will be able to apply the principles of physics to the solution of complex problems.
• They will know how to produce a well-structured solution, with clearly-explained reasoning and appropriate presentation.

Key Skills:
• Students will have developed skills in researching a topic at an advanced level and making a written presentation.

Modes of Teaching, Learning and Assessment and how these contribute to the learning outcomes of the module
• Teaching will be by lectures and example classes.
• The lectures provide the means to give a concise, focused presentation of the subject matter of the module. The lecture material will be defined by, and explicitly linked to, the contents of the recommended textbooks for the module, thus making clear where students can begin private study. When appropriate, the lectures will also be supported by the distribution of written material, or by information and relevant links on DUO.
• Regular problem exercises and example classes will give students the chance to develop their theoretical understanding and problem solving skills.
• Students will be able to obtain further help in their studies by approaching their lecturers, either after lectures or at other mutually convenient times.
• Lecturers will provide a list of advanced topics related to the module content. Students will be required to research one of these topics in depth and write a dissertation on it. Some guidance on the research and feedback on the dissertation will be provided by the lecturer.
• Student performance will be summatively assessed through an examination, problem exercises and the dissertation.
The examination and problem exercises will provide the means for students to demonstrate the acquisition of subject knowledge and the development of their problem-solving skills. The dissertation will provide the means for students to demonstrate skills in researching a topic at an advanced level and making a written presentation.
• The problem exercises and example classes provide opportunities for feedback, for students to gauge their progress and for staff to monitor progress throughout the duration of the module.

Teaching Methods and Learning Hours

Activity                | Number | Frequency       | Duration | Total/Hours
Lectures                | 50     | 2 or 3 per week | 1 Hour   | 50
Examples classes ■      | 10     | Fortnightly     | 1 Hour   | 10
Preparation and Reading |        |                 |          | 140
Total                   |        |                 |          | 200

Summative Assessment

Component: Examination (Component Weighting: 70%)
Element: Written examination | Length/duration: 3 hours | Element Weighting: 100%

Component: Problem exercises (Component Weighting: 10%)
Element: problem exercises | Element Weighting: 100%

Component: Dissertation (Component Weighting: 20%)
Element: dissertation | Element Weighting: 100%

Formative Assessment: Examples classes and problems solved therein.

■ Attendance at all activities marked with this symbol will be monitored. Students who fail to attend these activities, or to complete the summative or formative assessment specified above, will be subject to the procedures defined in the University's General Regulation V, and may be required to leave the University.
{"url":"https://apps.dur.ac.uk/faculty.handbook/2013/UG/module/PHYS4251%20","timestamp":"2024-11-13T12:52:48Z","content_type":"text/html","content_length":"13273","record_id":"<urn:uuid:6780d93c-1367-4ab1-ad6b-16ac8e41c581>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00413.warc.gz"}
French Roulette

The game of roulette actually comes from France if most stories are to be believed, so you could say that French roulette is the original variation of the game. The most obvious difference between this variation and the others (European and American roulette) is that the layout of the board is different, and all of the writing is in French. Tables in English speaking countries will often have the English translation alongside the French, but even if they don’t, it’s not difficult to figure out what’s what if you have ever played before. You will also probably find a racetrack and a few betting options you might not have heard of before, again all in French.

The less obvious difference is that French roulette will often be played with one of two extra rules in operation; la partage, or en prison. These will be covered fully later in the article, but for now just know that they are of extra benefit to the player. For UK players, French roulette is arguably the least common variation you are likely to see on your casino gaming travels.

Basic Rules for French Roulette

The player must place their chips on the table during betting time, and cover the numbers they think the ball might land on. They can do this using a number of different bet types, some of which will cover just a single number and others will cover up to 18 different numbers with a single chip. It is possible to change your mind and move your chips provided that betting time is still open, but don’t mess around too much or the dealer might tell you off.

After a short while, the dealer will spin the wheel and release the ball before calling “No more bets”. At this point, any bet still on the table is locked in and the players must not place any more chips or move those already placed. Once the ball comes to rest in one of the pockets the result of the spin is decided and all winning bets are paid out, while all losing bets are collected by the dealer.
The players must not touch any chips on the table unless the dealer has paid them out. Once all of the chips have been removed, the dealer will call "Place your bets" and the next game begins. Players can join at any stage in the process, so long as they are mindful of whether betting is open or not when they join. If it's not, just wait until the current game is concluded.

You can play for as few or as many games as you like, and you can bet as much or as little as you like on the different bet types too, so long as you are aware of any table minimums and maximums. These must be adhered to. This applies to playing in real life; if you are playing online then there is no dealer and you will control when the wheel is spun, so you can take as much time as you need to place your bets. Winnings are credited to your casino account automatically.

La Partage and En Prison

One slight difference with French roulette when compared to European or American roulette is the en prison and la partage rules. You usually only find one or the other, although sometimes the player can request which way they want to go, and they relate to the even money bets: odd/even, high/low, red/black. If a player has a bet on one of these areas and the ball lands on zero, instead of the bet losing like it normally would, one of these rules comes into effect. This is how they work:

• La Partage – "The Divide". With this rule, the dealer will return half of the bet and the house will keep the other half, so the player's losses are reduced by 50%.
• En Prison – "In Prison". With this rule, the bet stays on the table 'in prison' for the next round. If the next spin wins then the player gets their stake back in full, but they do not get a payout. If it loses for a second time, the full amount is lost.
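The value of these rules can be confirmed with a quick expected-value calculation. This is a sketch: `even_money_ev` is an illustrative helper name, a single-zero wheel is assumed, and en prison (which is worth approximately the same as la partage to the player) is not modelled separately.

```python
# Sketch: expected value of a 1-unit even-money bet on a single-zero wheel.
# 18 numbers win 1:1, 18 numbers lose the stake, and when the ball lands
# on zero the la partage rule returns half the stake.
def even_money_ev(la_partage: bool) -> float:
    p_win = p_lose = 18 / 37
    p_zero = 1 / 37
    zero_loss = -0.5 if la_partage else -1.0
    return p_win * 1 + p_lose * -1 + p_zero * zero_loss

print(f"standard edge:   {-even_money_ev(False):.2%}")  # 2.70%
print(f"la partage edge: {-even_money_ev(True):.2%}")   # 1.35%
```

The negative of the expected value is the house edge, which is exactly halved by the rule, matching the figures quoted below.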
These rules do not apply to any other bets on the table, only even money bets, but for those who are playing even money bets the house edge is cut in half from 2.70% to just 1.35%. At tables where the player gets to make the decision, it is essentially a choice between losing half of your stake for certain, or risking it all to reclaim it in its entirety.

Bets, Payouts and Probabilities

To get an idea of the different bet types in French roulette, and also where to place your chips on the board in order to make those bets, have a look at the image below:

1. Straight Up or Single – A bet on one single number.
2. Split – A bet on two specific numbers.
3. Trio – A bet on the zero and two adjacent numbers.
4. Street – A bet on three numbers in a vertical line.
5. Corner or Square – A bet on four numbers in a square.
6. 4 Number Bet – A bet on the zero, one, two, and three. Placed like a six line bet but with the same payout as a corner bet.
7. Six Line or Double Street – A bet on six numbers across two adjacent vertical lines.
8. Dozen Bet – A bet on twelve numbers, covering one of three thirds of the board, not including the zero.
9. Column Bet – A bet on 12 numbers in a horizontal line across the board.
10. Red/Black – A bet on all numbers that are either red or black, not including the zero.
11. Odd/Even – A bet on all numbers that are either odd or even, not including the zero.
12. High/Low or 1-18/19-36 – A bet on either the first 18 numbers or the second 18 numbers, not including the zero.

Bet numbers 1 to 7 are called inside bets, while bet numbers 8-12 are called outside bets. Inside bets are made on the numbers themselves, and bets placed here can cover a maximum of six numbers per chip; outside bets are placed on special sections around the outside of the board which cover either 12 or 18 numbers per bet. Inside bets have a lower chance of winning but higher payouts than outside bets, for which the opposite is true.
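Since every bet's chance of winning is simply the count of numbers it covers divided by the 37 pockets on a single-zero wheel, the probabilities quoted in this article can be recomputed in a few lines. This is a sketch; the dictionary below just restates the bet list above with its standard payouts.

```python
# Sketch: recompute each French roulette bet's win probability from the
# number of pockets it covers, on a single-zero (37 pocket) wheel.
BETS = {
    # name: (payout to 1, numbers covered)
    "Straight": (35, 1),
    "Split": (17, 2),
    "Trio": (11, 3),
    "Street": (11, 3),
    "Corner": (8, 4),
    "4 Number Bet": (8, 4),
    "Six Line": (5, 6),
    "Dozen": (2, 12),
    "Column": (2, 12),
    "Red/Black": (1, 18),
    "Odd/Even": (1, 18),
    "High/Low": (1, 18),
}

def win_probability(numbers_covered: int) -> float:
    """Chance of winning a single spin."""
    return numbers_covered / 37

for name, (payout, covered) in BETS.items():
    print(f"{name:14s} {payout}:1  {win_probability(covered):.2%}")
```

Running this reproduces the probability column of the table below, from 2.70% for a straight up to 48.65% for the even money bets.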
Note that the Dozen bets are in a different place on the French roulette table, labelled as D12, M12, and P12. P12 stands for premiere douzaine (first dozen), covering numbers 1-12; M12 stands for moyenne douzaine (middle dozen) and covers numbers 13-24; and D12 stands for derniere douzaine (last dozen) and covers numbers 25-36.

You can see the payouts for each bet type below, as well as the probability of that bet winning, which just highlights what was explained above.

Bet             Payout   Probability
Straight         35:1      2.70%
Split            17:1      5.41%
Trio             11:1      8.11%
Street           11:1      8.11%
Corner            8:1     10.81%
4 Number Bet      8:1     10.81%
Six Line          5:1     16.22%
Dozen Bet         2:1     32.43%
Column Bet        2:1     32.43%
Red/Black         1:1     48.65%
Odd/Even          1:1     48.65%
High/Low          1:1     48.65%

This hopefully demonstrates clearly that the riskier your bet is, the higher the payout will be if you win. However, the amount you win in real money will depend on your stake, because while 2:1 on a £5 bet gives you a £10 profit (so £15 returned including the stake), the same 2:1 payout on a £10 stake will give you £20 profit (so £30 returned including the £10 stake). Roulette is a great equaliser in this way, because the person playing small stakes and the person playing high stakes can sit right next to each other wagering staggeringly different amounts but receiving the same proportional payout.

French Roulette Wheel Diagram

The numbers on the roulette betting table are all in a sort of numerical order, travelling from the bottom of the board to the top then starting at the bottom of the next column again. In this way, it also helpfully splits the numbers in half and sorts them into dozens. However, the roulette wheel has no such obvious order when you first look at the numbers and their layout. Believe it or not though, the numbers on a French roulette wheel have been very cleverly arranged in order to keep the results as fair and random as possible by not letting any part of the wheel favour a specific bet type.
For example, if the numbers 1-12 were all next to each other, a 1st Dozen bet would be guaranteed to win in that third of the wheel; as it is, the 1st Dozen numbers are scattered evenly around the wheel which keeps things fun and exciting.

It's obvious to see that, apart from the green zero, all of the numbers have been placed in a pattern of red then black repeating. This means that the results of red/black bets are spread as evenly around the wheel as possible.

If you look at the wheel with the zero at the top, there are nine even numbers and nine odd numbers on each side of the wheel, and there are also nine high numbers and nine low numbers on each side. This means that the other even money bets, odd/even and high/low, are also equally distributed and show no bias towards any particular area of the wheel.

The French wheel is arranged in exactly the same way as the European wheel, but the American wheel has a different order because of the extra space it uses, the double zero. The numbers usually face inwards on the wheel, although this isn't necessarily going to be true all of the time and doesn't affect the game in any way.

The Racetrack and Call Bets

One other extra feature with a lot of French roulette tables, although you do sometimes find it on European and American tables too, is the racetrack. This is another way to place bets known as call bets, and each one has a specific betting layout that requires a specific number of chips. Betting in this way is all about covering sections of the wheel, which you can see visually represented in the image above. The different call bets are as follows:

• Voisins du Zero – "Neighbours of Zero". This bet covers the zero and the eight numbers on either side of it, so 17 numbers in total. It costs nine chips: one chip on each of the splits 4-7, 12-15, 18-21, 19-22, and 32-35, then two chips on a 0-2-3 trio, and two chips on a 25, 26, 28, 29 corner bet.
• Tiers du Cylindre – "Third of the Wheel". This bet covers the twelve numbers opposite the zero.
It costs six chips, and all bets are splits covering the numbers 5-8, 10-11, 13-16, 23-24, 27-30, and 33-36.

• Orphelins – "Orphans". This bet basically covers the numbers that are left over, which is why it is called the Orphans bet. It's a bet requiring 5 chips: a straight up bet on number 1, and splits on numbers 6-9, 14-17, 17-20, and 31-34.
• Jeu Zero – "Zero Game". This bet isn't always included as a call bet but is essentially a mini version of Voisins du Zero, covering the zero, the two numbers on its right, and the four numbers on its left. It costs 4 chips: a straight up on number 26, and splits on numbers 0-3, 12-15, and 32-35.

Although these bets have collective names, any wins will be paid out at the regular payout rates. So a win on number 26 in Voisins du Zero (part of the corner bet) would pay out at 8:1 as normal: the two chips on the corner return 18 chips in total, and with the other seven chips lost, you finish the spin 9 chips up. Likewise, a win on number 7 with Voisins (one of the splits) would pay out at the regular 17:1, returning 18 chips from that single chip, again leaving you 9 chips better off overall.

House Edge

Every casino game is designed to give the house a small mathematical advantage, known as the house edge. This puts the player at an obvious disadvantage before they even begin, but without it, there would be no casino in the first place, so it can be seen as the casino's fee for providing the service. They build this edge in by paying out winning bets at lower than true odds. For example, there are 37 numbers on a French roulette wheel including the zero, but a winning bet on number 10 would only pay out 35:1, not the 36:1 that would represent the true odds of your bet winning. Equally, a bet on red or black is paid out at 1:1, which would indicate 50/50 odds, but each even money bet only covers 18 numbers, which is not 50% of the 37 number wheel; it is actually only 48.65% of the wheel. Therefore, French roulette has the same house edge as European roulette, at 2.70%. This is the case for every single bet on the table, regardless of how much money has been staked.
You can see how it works using a bet on black as an example in the following equation:

• 18 divided by 37 (your chances of winning are 18 in 37) x 2 (your 1:1 winnings plus your returned stake) = 0.97297
• Now we need to turn that into a percentage so…
• 0.97297 x 100 (percent) = 97.297%, which rounds to 97.30%. This is the RTP, or return to player.
• 100% - 97.30% = 2.70%, which is the house edge.

What we have done here is multiplied the probability of a win by the total amount returned per unit staked, then taken the result away from 100% to reveal the house edge. If it all sounds too complicated don't worry; all you need to know is that for every bet you make, the casino is theoretically going to keep 2.70% of your money over the long run.
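The same arithmetic generalises to every bet on the table. Here is a small sketch (the helper name is illustrative) confirming that each standard bet on a single-zero wheel carries the identical 2.70% edge:

```python
# Sketch: house edge = 1 - RTP, where RTP = P(win) * (payout + 1).
def house_edge(numbers_covered: int, payout: int, pockets: int = 37) -> float:
    rtp = (numbers_covered / pockets) * (payout + 1)
    return 1 - rtp

# (numbers covered, payout) pairs for the standard French roulette bets:
for covered, payout in [(1, 35), (2, 17), (3, 11), (4, 8),
                        (6, 5), (12, 2), (18, 1)]:
    # every one works out to exactly 1/37
    assert abs(house_edge(covered, payout) - 1/37) < 1e-12

print(f"{house_edge(18, 1):.2%}")  # 2.70%
```

Whichever pair you plug in, the result is 1/37, which is where the universal 2.70% figure comes from.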
Using Number Lines to Add Two Integers with Different Signs

In this article, the focus is on teaching you how to use number lines to add two integers with different signs. With a number line, adding two integers with different signs becomes simple.

A step-by-step guide to Using Number Lines to Add Two Integers with Different Signs

To add two integers with different signs, follow these steps:

Step 1: Ignore the signs and take the absolute value of each integer.
Step 2: Compare the two absolute values.
Step 3: Subtract the smaller absolute value from the larger one.
Step 4: Attach the sign of the integer with the larger absolute value.

To add two integers with different signs using number lines, follow these steps:

Step 1: Find the first integer on the number line.
Step 2: If the second integer is negative, move that number of spaces to the left-hand side from the location of the first integer. If the second integer is positive, move that number of spaces to the right-hand side from the location of the first integer.

Using Number Lines to Add Two Integers with Different Signs – Example 1

Use the number line to find the sum 2 + (-7).

Step 1: Find 2 on the number line.
Step 2: Move 7 units to the left side, since the second integer is negative.

You land on -5, so 2 + (-7) = -5.

Using Number Lines to Add Two Integers with Different Signs – Example 2

Use the number line to find the sum 6 + (-10).

Step 1: Find 6 on the number line.
Step 2: Move 10 units to the left side, since the second integer is negative.

You land on -4, so 6 + (-10) = -4.
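The four-step rule above translates directly into code. Here is a minimal sketch (the function name is hypothetical):

```python
# Sketch of the rule for adding two integers with different signs:
# compare absolute values, subtract the smaller from the larger, and
# keep the sign of the integer with the larger absolute value.
def add_different_signs(a: int, b: int) -> int:
    big, small = max(abs(a), abs(b)), min(abs(a), abs(b))
    diff = big - small                            # steps 1-3
    sign = 1 if max(a, b, key=abs) > 0 else -1    # step 4
    return sign * diff

print(add_different_signs(2, -7))    # -5 (example 1)
print(add_different_signs(6, -10))   # -4 (example 2)
```

Both calls agree with the number-line walks in the two examples above.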
...And You Will Know Me By The Trail of Papers

In my last post, I presented data that suggests that the odor code dramatically changes between the first and subsequent breaths. Later, however, I discovered a subtle mistake (to me at least) in my analysis, which slightly changed the result. Too often in science, we see the end point of research, and don't see how it evolved. Today I'm going to show how I found my mistake, what the mistake was, and how fixing that mistake modified the result.

Calculating spike distance

Previously, I argued that the odor code evolves over multiple breaths. After presenting some example cells where the odor code shifted between the first and second breath, I turned to the population level, and showed this figure:

A. Schematic of population vector. B. Distance between breaths. Breath identity shown underneath.

On the left is the schematic for the analysis. For each cell, I binned the response to an odor over a breathing cycle into 8 bins. To add more cells to the population, I added 8 bins for each cell to the bottom of the population vector. To get different observations, I repeated this for each breath that I recorded. It's possible to calculate the "population spike distance" by just calculating the Euclidean distance between the population vector for each breath. Yet that method is fairly noisy, as most cells are uninformative. When I tried that simple method, the result was similar to that shown above, but not as clean. To make the differences more obvious, I performed PCA on the population vector, and then calculated the distances between the first five principal components (shown above). Here, each of the five components was informative, and the distances between the breaths were much clearer.

The problem discovered

After looking at different breaths for the same odor, I next wanted to investigate the differences between odors. Rather than look at "spike distances," though, I used a prediction algorithm.
To do the prediction, I built population vectors as I did above, but instead of observing different breaths for the same odor, I observed different odors for the same breath. Once again, I transformed the data via PCA and took the first 5-10 principal components. I then created a sample population vector for an individual trial, and calculated the distance between the sample vector and the average vectors for each odor. The "predicted" odor is the one with the smallest distance from the trial. This was repeated for all trials to get the average prediction rate. When I did this, I got the prediction rates shown in the top panel:

n = 105 neurons, and 6 odors, split between two sets of three odors. 10-12 trials.

Here, the predictions are between three different odors, so the chance level is 33%. The first five breaths are pre-odor breaths, while breaths 6-10 are during the odor. As you can see, the odor breaths are >95% correct, which is great. However, many of the pre-odor breaths have prediction rates >50%, which is obviously bad (I have different control breaths for each odor; the pre-odor prediction chooses among the three control breaths).

In playing with the data, I then noticed something odd: as I increased the number of bins or principal components that I used to make predictions, the pre-odor predictions got higher (panel B above). With 20 bins and 20 PCs, I could get pre-odor prediction rates of >70% for each odor! I wasn't making predictions, but was over-fitting my model to the data so that it could never be wrong! And this is where I realized my mistake.

When you do PCA, the algorithm tells you how much each component describes the variance in the data. The first few principal components (PCs) account for most of the variance, while the later components account for less. When I was doing my prediction algorithm (and my distance calculations above), I was weighing each PC equally, and over-representing principal components which didn't have much meaning.
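For concreteness, here is one way such variance weighting could look, sketched with NumPy on random stand-in data. This is illustrative, not the analysis code from the post: each PC score is scaled by the fraction of variance its component explains, so late, noisy components can no longer dominate a Euclidean distance.

```python
import numpy as np

# Sketch: PCA (via SVD) on a trials x features matrix, with each
# principal-component score weighted by its explained-variance fraction.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 40))             # e.g. trials x (cells * bins)

Xc = X - X.mean(axis=0)                   # center before PCA
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                        # PC scores, one row per trial
var_ratio = s**2 / np.sum(s**2)           # explained-variance fraction

k = 5
weighted = scores[:, :k] * var_ratio[:k]  # down-weight later PCs

def pc_distance(i, j):
    """Distance between two trials in weighted PC space."""
    return np.linalg.norm(weighted[i] - weighted[j])

print(pc_distance(0, 1))
```

With equal weights, adding more PCs just adds noise dimensions of full size; after weighting, each extra component contributes only in proportion to the variance it actually explains.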
Once I realized this, it was a simple procedure to weigh each PC according to its variance, and re-run the prediction (panel C above). Following that, the pre-odor predictions are at chance; the positive control is finally working. The downside to this correction, however, is that the odor prediction during the odor was now between 60-80%. This lower predictive ability makes more sense, though, given the trial-to-trial noise in the signal, and the relatively low number of neurons in the population vector.

Back to breath distance

Having realized my error, I returned to my original analysis on the breath distance, and added the proper weightings. When I did this, the results were slightly different:

n = 11 odors from 5 experiments with >15 cells.

The control breaths are still quite distant from the odor breaths. However, the first breath is no longer so different from the subsequent breaths. Indeed, it appears that there is an evolution in the code over the first few breaths before the code stabilizes. The stark difference between the different breaths had blurred.

I'm guessing that this is a tyro analysis mistake. I only stumbled upon it because I figured a reviewer would want to see pre-odor prediction rates to compare to those during the odor. I know that when I read a paper, I rarely delve into the detailed methods of these more complicated analyses. And if I do, they aren't always informative. Given how often people perform analysis by themselves, with custom code, it's easy to forget how many simple, subtle mistakes one can make. The only way to avoid them is to gain experience, and to constantly question whether what you're doing actually makes sense, and agrees with what you've already done.

Update: Today I found an even BIGGER problem with my odor prediction. When I was creating my "average spike population" I was including the test trial in the population. And I was once again getting pre-odor prediction rates near 100%.
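The remedy for that kind of leakage is to leave the trial being classified out of its own class average. A minimal sketch with toy numbers (illustrative names and data, not the actual analysis code):

```python
import numpy as np

# Sketch: when classifying trial i by nearest class centroid, the centroid
# for trial i's own class must be computed with trial i left out, or the
# classifier is partly matching the trial against itself.
def loo_centroid(trials: np.ndarray, i: int) -> np.ndarray:
    """Mean of all rows of `trials` except row i."""
    mask = np.ones(len(trials), dtype=bool)
    mask[i] = False
    return trials[mask].mean(axis=0)

trials = np.array([[0.0, 0.0], [2.0, 2.0], [4.0, 4.0]])
print(loo_centroid(trials, 0))     # [3. 3.]
print(trials.mean(axis=0))         # [2. 2.]  (leave-in centroid differs)
```

The leave-in centroid is pulled toward every test trial it contains, which is exactly how pre-odor "predictions" can look near-perfect.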
Excluding the test-trial from the average population made everything MUCH more sensible.

As I mentioned last post, I have been recording from the olfactory bulb of awake, head-fixed mice. In general, the responses I'm seeing are in accord with those reported by Shusterman and Rinberg: about half of cell-odor pairs respond to a given odor. In trying to quantify these responsive cell-odor pairs, I stumbled upon another finding: that the odor code for the first breath is different from the rest.

One cell's response

(Brief methods: To look at whether a given breath is responsive, I segmented the recordings into breaths, and fit each breath to a standard breath length (if a given breath was longer than the average breath length, I deleted all spikes after the end of the standard breath; if a given breath was shorter, I assumed the rest of the time included no spikes). To quantify whether breaths were "responsive," I compared a breath's tonic firing rate to the control, pre-odor breaths (using ANOVA with p<0.05, and Tukey's post-hoc testing); and I tested whether the "phase" or timing of the breath differed from the pre-odor firing (using a Kolmogorov-Smirnov test; here I used p<0.02 as the threshold for significance, as using p<0.05 yielded many false positives when comparing different control breaths).)

When I looked at the data, it was obvious that some cells had strikingly different codes for the first breath versus later breaths. One example is shown below (this is the same neuron-odor pair from the previous post, albeit different trials). The top panel shows the PSTH of the cell's response to amyl acetate, with 40ms bins. You can see that before odor presentation, the neuron fired irrespective of phase. During the first sniff of the odor, there was a strong, transient burst of activity in the middle of the breathing cycle. However, in the subsequent sniffs, the cell was inhibited. This cell is excited during the first breath before becoming inhibited.
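The breath-standardization in the brief methods above (truncating spikes past a standard breath length, then binning what remains) might look like the following sketch, with illustrative numbers:

```python
import numpy as np

# Sketch of the segmentation in the brief methods: spikes after the end of
# a standard-length breath are deleted, and the remainder is binned.
# The 0.35 s standard length and the spike times are made-up numbers.
def bin_breath(spike_times, standard=0.35, n_bins=8):
    """Bin spikes (seconds from inhalation onset) into a standard breath."""
    spikes = np.asarray(spike_times)
    spikes = spikes[spikes < standard]   # long breaths: drop late spikes
    counts, _ = np.histogram(spikes, bins=n_bins, range=(0.0, standard))
    return counts                        # short breaths simply add no spikes

counts = bin_breath([0.01, 0.05, 0.20, 0.34, 0.40])
print(list(counts))  # [1, 1, 0, 0, 1, 0, 0, 1]
```

Note that a breath shorter than the standard needs no special handling here: there are no spikes beyond its end, so the late bins are zero, matching the assumption in the methods.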
(top) PSTH of the response with 40ms bins. The odor is applied at t=0s. Blue dashed lines represent inspiration. (middle) PSTH of the response with a single bin for each breath. The cell is inhibited during breaths 2-4. (bottom) Cumulative distribution of spikes during the ctl breath (black), first breath (blue), and second breath (red).

While seeing the difference by eye is nice, I wanted to test these without bias and quantitatively. To detect tonic changes, I averaged the firing rate for each breath, as shown in the middle panel. In this example, over the whole breath, the first sniff's firing rate does not differ from the control breaths'. However, the three subsequent breaths are all inhibited. To look at the phasic changes, I plotted (for ten trials) the cumulative spike times for control breaths and breaths during the odor (bottom panel, above). Before the odor, the spikes occur without phase bias (black line), while during the first breath you can see that most spikes come between 150-200ms of the breathing cycle. However, on the second breath, the phasic nature of the response has begun to dissipate.

Population changes in the odor code

The transformation between the first and second breath can take many forms. The example above shows a neuron that switches from a strong, phasic, excitatory response to an inhibitory response. Many other neurons are inhibited during the first breath but not afterward. Below is a more subtle example, where the neuron does not respond to ethyl butyrate during the first sniff. However, on the subsequent sniffs, the timing of the response shifts to earlier in the breathing cycle.

This cell appears to not respond during the first breath, but has a phasic response during later breaths.

Rather than exhaustively quantify how individual cells change their code between the first and second breath, I took a different approach and looked at the population code.
I have three experiments where I was able to record from at least ten cells at the same time. For these three experiments, I created a population vector of the responses to odor, and calculated the "spike distance" between the representation of each breath. Another way to think of spike distance is how dissimilar two representations are: small spike distances imply similar population representations. All distances were normalized to the average distance between control breaths.

The odor code for the first breath is as different from control as it is from the 2nd breath. A. For each breath during an odor, I created a "population" vector, where for each breath and cell, I broke the response into eight bins. To reduce the dimensionality, I performed PCA. To calculate the distance, I used the first five PCA scores for each breath. B. Normalized "spike distance" between odor codes for different breaths. All distances were normalized to the average distance between control breaths (C) for an experiment-odor pair. All post-odor breaths are distant from the control breaths (white). The first breath is also different from the 2nd-4th breaths (blue). However, the 2nd-4th breaths are relatively similar to each other (red).

The population response during all of the odor breaths is distant from the control breaths (white bars). The distance here presumably encodes the presence of the odor. However, if you look at the distance between the first breath and subsequent breaths (blue bars), you can see they are also quite far apart. In fact, the first breath is almost as distant from the other breaths as it is from the control breaths. In contrast, the distance between breaths 2-4 is much lower, and almost comparable to the distance between control breaths.

When I showed this to my boss he was not impressed, and said they had already shown this in a previous paper. And indeed, buried in three panels of Fig. 4, they did show something similar (below).
There are some significant differences, though. First, those experiments were in anesthetized animals, rather than awake animals. Second, I've shown that individual cells use strikingly different codes between breaths. Third, they did not create their population vector to consider a cell's firing as a whole. This could change the interpretation of the results. In any case, asking around the lab, no one seemed to remember this was even in the paper.

The velocity of the population representation is highest during the first odor and post-odor breaths. A. The population vector contains the firing of each cell in a given time-bin. B. Cross-correlation and distance for the population during a given odor. C. The velocity of the population vector (how much the distance changes) is highest at the beginning and end of odor presentation. From Bathellier, et al, 2008.

Given this result, one needs to be careful with how one characterizes "responsive" cells in the olfactory bulb. First, when determining whether a cell-odor pair is responsive, one needs to always look at more than just the first breath. For example, the second cell shown above was responsive during the second breath, but not the first. Second, when characterizing responses, it is difficult to say whether a cell was excited or inhibited, as cells can be both excited and inhibited in a given breath, as well as be excited for one breath but not others. While it may be unsatisfying, it is probably best to just call them "responsive" cells.

These results also gave me an idea for an experiment to test whether the difference in coding is perceptually important. It is now possible to stimulate the olfactory epithelium via Channelrhodopsin while mice sniff (using an OMP-ChR2 line), which makes it possible to mask an odor response with olfactory white noise. To test how important different sniffs are to perception, you would start by establishing the detection threshold for an odor.
Then you could measure the detection threshold while masking either the first or second sniff with the olfactory noise. There are a few possible results. First, the threshold might not change at all, as both the first and second sniff contain sufficient information to detect an odor. Second, the sensitivity could be equally decreased when either sniff is blocked. This would also imply there is equal information in each sniff. The third possibility is that masking the first sniff would decrease sensitivity far more than masking the second sniff (which is what I expect).

It has been shown that mice and humans can detect odors in a single sniff. And in daily life, no odor is as strong as its first whiff. The difference in odor coding between the first and second sniff might be one step towards explaining why.

While this is pretty basic analysis, I had to perform this en route to doing more sophisticated comparisons while trying to measure a form of plasticity in the odor code. This is also the first complete data I've shown from this lab. I would appreciate any feedback on this, as it's always useful to get a perspective outside the insular confines of a lab. Were the figures legible? The analysis convincing? Or is this entirely un-novel?

In my last post, I briefly reviewed a paper from the Rinberg lab where they recorded from the olfactory bulb of awake, head-fixed mice. When they were analyzing their neurons' responses, they performed a time-warping manipulation on the data that increased the precision of the responses. Today I'm going to present some counter-evidence that shows why their time warping is a bad idea.

Time Warping of Neuronal Responses

The first figure of their paper clearly explains how they time-warped their data. They recorded extracellularly from mitral cells in the olfactory bulb while presenting head-fixed mice with odorants.
In their recording, as in previous studies, if they did not perform any temporal alignment, they saw very weak responses in response to odorant application (black traces, below). However, it is well known that sniffing can influence olfactory bulb activity, so they realigned all of their mitral cell activity to the first inhalation following odor onset (blue traces, below). When they did this, they found that the mitral cell responses were quite strong, and around 59% of odor-cell pairs were responsive. Aligning responses to inhalation reveals odor responses. a. Diagram of inhalation alignment, and temporal warping. Odor presentation in yellow. c. Spike rasters to odor under three alignment paradigms: to odor (black); to inhalation onset (blue); and time-warped (red). d. Peri-stimulus time responses(?) of the responses in c. From Shusterman, et al, 2011. Not satisfied with the precision of their responses, they performed one more manipulation. The breathing cycle, while fairly regular, does vary; and they reasoned that duration of the breathing cycle could influence neuronal activity. To normalize this, they fit curves to both inhalation and exhalation, and then stretched time (and moved spikes) until the breaths fit a standard breathing cycle (red traces, above). When they did this, they found that both the precision and magnitude of responses were increased. This did not sit well with me for three reasons. First, if you take the perspective of a neuron in the olfactory bulb, it means that the neuron has to somehow keep track of where it is in the breathing cycle, not in terms of time, but in terms of phase. To do this, 50ms following inhalation, a neuron has to know when the next inhalation is going to come. [S:They're psychic! :S][Update: a commenter noted that the OB could receive an efference copy from brainstem respiratory centers. I am not aware of any evidence that it does, however. Which of course does not mean it does not exist.] 
Second, I think that the timing of mitral cell responses is in large part dictated by the temporal dynamics of the olfactory epithelium. The olfactory epithelium, in turn, has its kinetics determined by the taus of the G-protein signaling cascade, and the concentration of odorants in the epithelium. The kinetics depend on inhalation onset and intensity, not phase. The third reason I have a problem with time-warping is that I have counter-evidence.

Mitral Cell Response Timing is Independent of Breath Length

While I wait for my mice, I have been performing head-fixed recordings from the olfactory bulb of awake mice. In general, I've been getting population responses in line with what the Rinberg lab has suggested: ~50% of the odor-cell pairs in the OB differ from baseline. Like the Rinberg lab, the Carleton lab aligns responses to inhalation onset. Yet, unlike the Rinberg lab, we have not performed any time warping. After seeing the Shusterman paper, I took a closer look at my data, to see whether time-warping makes sense.

In general, respiration is regular enough that time-warping would have little effect on the responses. However, I found a few cases where time warping is a bad idea. Below is a raster plot of the firing of one neuron in response to amyl acetate at 20x dilution. I have zoomed in on the first second following odor onset. Inhalations are denoted by blue lines, while spikes are shown in black. We trigger odor delivery by waiting for an exhale, which is why the inhalation times are non-random. Following the first sniff, you can see that this neuron fires vigorously, with some delay. (I should say that this response is easily one of the highest firing rates in my data set.)

Raster plot of a neuron's response to Amyl Acetate. Amyl Acetate began application at t=6s. Blue lines are inhalations, black lines are spikes. Ten trials shown.

We can also align this data by moving time such that the first post-odor breaths all occur at the same time.
If you do this, you can more clearly see the strong response to the odor. This response is fairly long, and has a high firing rate (>100 Hz at its peak).

Same raster plot as above, except aligned to the first inhalation following odor. Here the response is much clearer.

And now I can finally address the issue of whether time-warping is a good idea. In the example above, there are two trials with short breaths, trials 6 & 7, and two trials with long breaths, trials 8 & 10. Despite the different lengths of these breaths, both groups of trials have high-amplitude firing rates between 6.2 and 6.3 s. If you were to time-warp these trials, you would be moving the spikes from trials 6 & 7 later in time, and the spikes from trials 8 & 10 earlier. Both manipulations would cause a decrease in precision.

This is just one extraordinary example, but it shows that time-warping can have deleterious effects on precision. In my view, if you are recording from the olfactory bulb, you should align all your responses to breath onset, and truncate your breaths to the same standard breath. I hope you are convinced that time-warping your data is a bad idea for mitral cells in the olfactory bulb. If I've missed anything, please let me know in the comments.
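The two manipulations at issue can be sketched in a few lines. The function names and numbers here are illustrative, not taken from Shusterman et al., but they show why warping two breaths of different lengths spreads apart spikes that had identical post-inhalation latencies:

```python
def align_to_inhalation(spike_times, inhalation_time):
    """Shift spike times so the first post-odor inhalation is at t = 0."""
    return [t - inhalation_time for t in spike_times]

def warp_to_standard_breath(spike_times, breath_start, breath_end, standard_dur):
    """Linearly stretch spikes within one breath so every breath lasts
    standard_dur (the time-warping manipulation discussed above)."""
    scale = standard_dur / (breath_end - breath_start)
    return [breath_start + (t - breath_start) * scale for t in spike_times]

# Two trials that both burst ~100 ms after inhalation onset, but one breath
# is short (0.25 s) and one is long (0.40 s), as in trials 6 & 7 vs. 8 & 10.
short_trial = align_to_inhalation([6.10, 6.12], inhalation_time=6.0)
long_trial = align_to_inhalation([6.10, 6.12], inhalation_time=6.0)
w_short = warp_to_standard_breath(short_trial, 0.0, 0.25, standard_dur=0.30)
w_long = warp_to_standard_breath(long_trial, 0.0, 0.40, standard_dur=0.30)
# Warping pushes the short-breath spikes later and the long-breath spikes
# earlier, so identical latencies no longer line up across trials.
```

Alignment alone preserves the shared ~100 ms latency; warping moves it in opposite directions on the two trials, which is exactly the precision loss described above.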
Given \[\log 2 = 0.3010\] and \[\log 3 = 0.4771\], find the value of \[\log 25\].

Hint: Here we will use logarithmic properties to find the value of \[\log 25\]. First we will write the 25 inside the log function as a fraction. Then we will simplify it using the logarithmic properties, substitute the given logarithmic values, and solve the equation to get the required value.

Complete step-by-step answer:
First, we will write \[\log 25\] in a modified form by expressing the number 25 differently. Therefore, we get
\[\log 25 = \log \dfrac{{100}}{4}\]
Now we will use the property of the logarithmic function \[\log a - \log b = \log \dfrac{a}{b}\]. Therefore, by using this property, we get
\[ \Rightarrow \log 25 = \log 100 - \log 4\]
We know that the number 100 is the square of 10 and the number 4 is the square of 2. Writing this in the above equation, we get
\[ \Rightarrow \log 25 = \log {10^2} - \log {2^2}\]
Now by using the property \[\log {a^b} = b\log a\], we get
\[ \Rightarrow \log 25 = 2\log 10 - 2\log 2\]
The value of \[\log 2\] is given in the question, and we know that \[\log 10 = 1\]. Therefore, we get
\[ \Rightarrow \log 25 = 2\left( 1 \right) - 2\left( {0.3010} \right)\]
Now we solve the above equation to get the value of \[\log 25\]. Therefore, we get
\[ \Rightarrow \log 25 = 2 - 0.6020\]
\[ \Rightarrow \log 25 = 1.3980\]
Hence, the value of \[\log 25\] is equal to \[1.398\].

Note: Here in this question, we have to rewrite the number in the main equation according to the given values of log. The value inside the log function should never be zero or negative; it should always be greater than zero. Always remember that the value of \[\log 10\] is equal to 1. We should simplify the equation carefully and apply the properties of the log function accurately. Some of the basic properties of the log functions are listed below.
1. \[\log a + \log b = \log ab\]
2. \[\log {a^b} = b\log a\]
3. \[\log a - \log b = \log \dfrac{a}{b}\]
4. \[{\log _a}b = \dfrac{{\log b}}{{\log a}}\]
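As a quick numerical sanity check of the result above, using the given value of \[\log 2\]:

```python
import math

log2 = 0.3010                 # given in the question
log25 = 2 * 1 - 2 * log2      # log 25 = log(100/4) = 2*log 10 - 2*log 2
# log25 is 1.398 (up to floating-point rounding)

# Compare with the exact base-10 logarithm, which is 1.39794...
exact = math.log10(25)
```

The small difference between the two values comes only from the rounded value of \[\log 2\] given in the question.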
CFA Exam Tips: Suggestions for Attacking the Test - Finance Train

By doing several practice exams in the months leading up to the exam, candidates get a feel for how to approach the test. In the event that some candidates would like additional guidance, here are a few suggestions:

• Do the Test in Sequential Order – By accepting the test's given order as your decision rule, you will not waste any time during the exam deciding which section to do next. This does not imply that you cannot skip questions and come back to them (more below on this).
• Double Check Your Answer Fill-ins – Take the extra one second and double check the answers that you fill in. You do not want to get half way through a session and realize that you have filled in 20+ wrong circles. Match the question number you are answering to the number that you are filling in on the bubble sheet.
• Come Back to the Hard Stuff – If you are looking at a long, multi-step problem or a problem that you cannot remember how to solve, make a note and come back to it. Do not waste time getting hung up. Make a note, move on and come back once you have answered all the questions that you find easier.
• Plan for Multiple Run Throughs – If you follow the above point and pass over the harder questions to come back later, then you will find yourself running through the exam multiple times. You do not need to answer all questions in perfect order; this is the beauty of a paper based exam – you can skip and come back.
• Skim Two Questions before Reading the Item Set – This is more for the CFA Level II and III candidates that have item sets. By glancing at two questions before reading the item set's story, you have an idea of what to look for. The question sequence may follow the story sequence.
• Make Notes When Needed – Use the space in the margins of the item set's story to make a note or underline key sentences.
You do not want to be forced to re-read an entire story, just to relocate an important fact.
• Eliminate Wrong Answers – If you know for certain that one of the answer choices is wrong, note it. This is particularly useful if you decide that you need to return to a question. Once you return to the question, you will not need to waste time re-analyzing a wrong answer choice. The key here is to be 100% certain that the answer choice is wrong.
• Keep Track of Time and Unanswered Questions – Monitor your time in relation to how many unanswered questions there are. If there are two minutes left and you have two unanswered questions, just take a guess and fill them in.
• Answer Every Question – There is no penalty for guessing and your goal is to get as many questions correct as is needed to pass. Do not leave any questions blank! Give yourself a chance on every question by answering them all.
• Do Not Panic! – Chances are you will hit a few stressful questions on the exam. Do not let them induce panic. Consider this: with 240 questions, a candidate needs to answer 168 correctly to obtain a score of 70%. This translates to answering 72 questions incorrectly and still being almost guaranteed of a pass (there are no guarantees, but 70% is widely thought of as the magic hurdle rate). Stay calm and stay focused.

Checkout our mock exams and practice tests for CFA exam.
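The pass-margin arithmetic in the last tip is easy to verify. As the article itself notes, the 70% hurdle is a widely assumed figure, not an official cutoff:

```python
total_questions = 240
hurdle = 0.70                              # widely assumed, not official
needed = round(total_questions * hurdle)   # correct answers needed: 168
allowed_wrong = total_questions - needed   # questions that can be missed: 72
```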
Contributions in Algebra and Algebraic Geometry

Softcover ISBN: 978-1-4704-4735-9, Product Code: CONM/738. List Price: $130.00; MAA Member Price: $117.00; AMS Member Price: $104.00
eBook ISBN: 978-1-4704-5534-7, Product Code: CONM/738.E. List Price: $125.00; MAA Member Price: $112.50; AMS Member Price: $100.00
Softcover + eBook, Product Code: CONM/738.B. List Price: $255.00 $192.50; MAA Member Price: $229.50 $173.25; AMS Member Price: $204.00 $154.00

Contemporary Mathematics, Volume: 738; 2019; 147 pp; MSC: Primary 13; 14; 15; 16

This volume contains the proceedings of the International Conference on Algebra, Discrete Mathematics and Applications, held from December 9–11, 2017, at Dr. Babasaheb Ambedkar Marathwada University, Aurangabad (Maharashtra), India. Contemporary topics of research in algebra and its applications to algebraic geometry, Lie groups, algebraic combinatorics, and representation theory are covered. The articles are devoted to Leavitt path algebras, roots of elements in Lie groups, Hilbert's Nullstellensatz, mixed multiplicities of ideals, singular matrices, rings of integers, injective hulls of modules, representations of linear, symmetric groups and Lie algebras, the algebra of generic matrices and almost injective modules.

Graduate students and research mathematicians interested in algebra and its applications to algebraic geometry.

Articles:
• Pere Ara — Leavitt path algebras over a poset of fields
• S. G. Dani — Roots of elements in Lie groups and the exponential maps
• Sudhir R. Ghorpade — A note on Nullstellensatz over finite fields
• Kriti Goel, R. V. Gurjar and J. K. Verma — Minkowski inequality and equality for multiplicity of ideals of finite length in Noetherian local rings
• S. K. Jain and A. Leroy — Decomposition of singular elements of an algebra into product of idempotents, a survey
• Anuj Jakhar, Sudesh K. Khanduja and Neeraj Sangwan — Some results on integrally closed domains
• Jae Keol Park and S. Tariq Rizvi — On Examples of Baer and Rickart Module Hulls
• Digjoy Paul, Amritanshu Prasad and Arghya Sadhukhan — Tableau correspondences and representation theory
• K. N. Raghavan, B. Ravinder and Sankaran Viswanath — A relationship between Gelfand-Tsetlin bases and Chari-Loktev bases for irreducible finite dimensional representations of special linear Lie algebras
• Lance Small and Efim Zelmanov — Algebra of generic matrices is not coherent
• Surjeet Singh — Partial endomorphisms of almost self injective modules
The local structure of homogeneous continua (curves) is studied. Components of open subsets of each homogeneous curve which is not a solenoid have the disjoint arcs property. If the curve is aposyndetic, then the components are nonplanar. A new characterization of solenoids is formulated: a continuum is a solenoid if and only if it is homogeneous, contains no terminal nontrivial subcontinua and small subcontinua are not ∞-ods.
Test Prep Multiple Choice

10.1 Postulates of Special Relativity

What was the purpose of the Michelson–Morley experiment?
a. To determine the exact speed of light
b. To analyze the electromagnetic spectrum
c. To establish that Earth is the true frame of reference
d. To learn how the ether affected the propagation of light

What is the speed of light in a vacuum to three significant figures?
a. 1.86 × 10^5 m/s
b. 3.00 × 10^8 m/s
c. 6.71 × 10^8 m/s
d. 1.50 × 10^11 m/s

How far does light travel in 1.00 min?
a. 1.80 × 10^7 km
b. 1.80 × 10^13 km
c. 5.00 × 10^6 m
d. 5.00 × 10^8 m

Describe what is meant by the sentence, “Simultaneity is not absolute.”
a. Events may appear simultaneous in all frames of reference.
b. Events may not appear simultaneous in all frames of reference.
c. The speed of light is not the same in all frames of reference.
d. The laws of physics may be different in different inertial frames of reference.

In 2003, Earth and Mars were aligned so that Earth was between Mars and the sun. Earth and Mars were 5.6 × 10^7 km from each other, which was the closest they had been in 50,000 years. People looking up saw Mars as a very bright red light on the horizon. If Mars was 2.06 × 10^8 km from the sun, how long did the reflected light people saw take to travel from the sun to Earth?
a. 14 min and 33 s
b. 12 min and 15 s
c. 11 min and 27 s
d. 3 min and 7 s

10.2 Consequences of Special Relativity

What does this expression represent: 1/√(1 − u^2/c^2)?
a. time dilation
b. relativistic factor
c. relativistic energy
d. length contraction

What is the rest energy, E_0, of an object with a mass of 1.00 g?
a. 3.00 × 10^5 J
b. 3.00 × 10^11 J
c. 9.00 × 10^13 J
d. 9.00 × 10^16 J

The fuel rods in a nuclear reactor must be replaced from time to time because so much of the radioactive material has reacted that they can no longer produce energy. How would the mass of the spent fuel rods compare to their mass when they were new? Explain your answer.
a.
The mass of the spent fuel rods would decrease.
b. The mass of the spent fuel rods would increase.
c. The mass of the spent fuel rods would remain the same.
d. The mass of the spent fuel rods would become close to zero.

Short Answer

10.1 Postulates of Special Relativity

What is the postulate having to do with the speed of light on which the theory of special relativity is based?
a. The speed of light remains the same in all inertial frames of reference.
b. The speed of light depends on the speed of the source emitting the light.
c. The speed of light changes with change in medium through which it travels.
d. The speed of light does not change with change in medium through which it travels.

What is the postulate having to do with reference frames on which the theory of special relativity is based?
a. The frame of reference chosen is arbitrary as long as it is inertial.
b. The frame of reference is chosen to have constant nonzero acceleration.
c. The frame of reference is chosen in such a way that the object under observation is at rest.
d. The frame of reference is chosen in such a way that the object under observation is moving with a constant speed.

If you look out the window of a moving car at houses going past, you sense that you are moving. What have you chosen as your frame of reference?
a. the car
b. the sun
c. a house

Why did Michelson and Morley orient light beams at right angles to each other?
a. To observe the particle nature of light
b. To observe the effect of the passing ether on the speed of light
c. To obtain a diffraction pattern by combination of light
d. To obtain a constant path difference for interference of light

10.2 Consequences of Special Relativity

What is the relationship between the binding energy and the mass defect of an atomic nucleus?
a. The binding energy is the energy equivalent of the mass defect, as given by E_0 = mc.
b. The binding energy is the energy equivalent of the mass defect, as given by E_0 = mc^2.
c. The binding energy is the energy equivalent of the mass defect, as given by \(E_0 = mc\).
d. The binding energy is the energy equivalent of the mass defect, as given by \(E_0 = mc^2\).

True or false—It is possible to just use the relationships F = ma and E = Fd to show that both sides of the equation E_0 = mc^2 have the same units.
a. True
b. False

Explain why the special theory of relativity caused the law of conservation of energy to be modified.
a. The law of conservation of energy is not valid in relativistic mechanics.
b. The law of conservation of energy has to be modified because of time dilation.
c. The law of conservation of energy has to be modified because of length contraction.
d. The law of conservation of energy has to be modified because of mass-energy equivalence.

The sun loses about 4 × 10^9 kg of mass every second. Explain in terms of special relativity why this is happening.
a. The sun loses mass because of its high temperature.
b. The sun loses mass because it is continuously releasing energy.
c. The sun loses mass because the diameter of the sun is contracted.
d. The sun loses mass because the speed of the sun is very high and close to the speed of light.

Extended Response

10.1 Postulates of Special Relativity

Explain how Einstein’s conclusion that nothing can travel faster than the speed of light contradicts an older concept about the speed of an object propelled from another, already moving, object.
a. The older concept is that speeds are subtractive. For example, if a person throws a ball while running, the speed of the ball relative to the ground is the speed at which the person was running minus the speed of the throw. A relativistic example is when light is emitted from car headlights, it moves faster than the speed of light emitted from a stationary source. The car's speed does not affect the speed of light.
b. The older concept is that speeds are additive.
For example, if a person throws a ball while running, the speed of the ball relative to the ground is the speed at which the person was running plus the speed of the throw. A relativistic example is when light is emitted from car headlights, it moves no faster than the speed of light emitted from a stationary source. The car's speed does not affect the speed of light.
c. The older concept is that speeds are multiplicative. For example, if a person throws a ball while running, the speed of the ball relative to the ground is the speed at which the person was running multiplied by the speed of the throw. A relativistic example is when light is emitted from car headlights, it moves no faster than the speed of light emitted from a stationary source. The car's speed does not affect the speed of light.
d. The older concept is that speeds are frame independent. For example, if a person throws a ball while running, the speed of the ball relative to the ground has nothing to do with the speed at which the person was running. A relativistic example is when light is emitted from car headlights, it moves no faster than the speed of light emitted from a stationary source. The car's speed does not affect the speed of light.

A rowboat is drifting downstream. One person swims 20 m toward the shore and back, and another, leaving at the same time, swims upstream 20 m and back to the boat. The swimmer who swam toward the shore gets back first. Explain how this outcome is similar to the outcome expected in the Michelson–Morley experiment.
a. The rowboat represents Earth, the swimmers are beams of light, and the water is acting as the ether. Light going against the current of the ether would get back later because, by then, Earth would have moved on.
b. The rowboat represents the beam of light, the swimmers are the ether, and the water is acting as Earth. Light going against the current of the ether would get back later because, by then, Earth would have moved on.
c. The rowboat represents the ether, the swimmers are rays of light, and the water is acting as Earth. Light going against the current of the ether would get back later because, by then, Earth would have moved on.
d. The rowboat represents Earth, the swimmers are the ether, and the water is acting as the rays of light. Light going against the current of the ether would get back later because, by then, Earth would have moved on.

10.2 Consequences of Special Relativity

A helium-4 nucleus is made up of two neutrons and two protons. The binding energy of helium-4 is 4.53 × 10^-12 J. What is the difference in the mass of this helium nucleus and the sum of the masses of two neutrons and two protons? Which weighs more, the nucleus or its constituents?
a. 1.51 × 10^-20 kg; the constituents weigh more
b. 5.03 × 10^-29 kg; the constituents weigh more
c. 1.51 × 10^-29 kg; the nucleus weighs more
d. 5.03 × 10^-29 kg; the nucleus weighs more

Use the equation for length contraction to explain the relationship between the length of an object perceived by a stationary observer who sees the object as moving, and the proper length of the object as measured in the frame of reference where it is at rest.
a. As the speed v of an object moving with respect to a stationary observer approaches c, the length perceived by the observer approaches zero. For other speeds, the length perceived is always less than the proper length.
b. As the speed v of an object moving with respect to a stationary observer approaches c, the length perceived by the observer approaches zero. For other speeds, the length perceived is always greater than the proper length.
c. As the speed v of an object moving with respect to a stationary observer approaches c, the length perceived by the observer approaches infinity. For other speeds, the length perceived is always less than the proper length.
d. As the speed v of an object moving with respect to a stationary observer approaches c, the length perceived by the observer approaches infinity. For other speeds, the length perceived is always greater than the proper length.
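Several of the numerical answers above follow from the single constant c; here is a quick check, with all values taken from the questions themselves:

```python
c = 3.00e8                      # speed of light, m/s

# Distance light travels in 1.00 min:
d_min = c * 60.0                # 1.80e10 m = 1.80e7 km

# Reflected light seen from Mars: sun -> Mars -> Earth
sun_mars = 2.06e11              # m (2.06e8 km)
mars_earth = 5.6e10             # m (5.6e7 km)
t = (sun_mars + mars_earth) / c
minutes, seconds = divmod(t, 60)   # about 14 min 33 s

# Rest energy of 1.00 g:
E0 = 1.00e-3 * c ** 2           # 9.00e13 J

# Mass defect of helium-4 from its binding energy:
dm = 4.53e-12 / c ** 2          # about 5.03e-29 kg; constituents weigh more
```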
Analysis and numerical methods for multiscale problems in magnetization dynamics | KTH

Time: Fri 2021-12-10 15.00
Location: Sal F3 and , Lindstedtsvägen 26
Video link: https://kth-se.zoom.us/webinar/register/WN_Dxv5SD6bQcmnKHXisYPANA
Language: English
Subject area: Applied and Computational Mathematics, Numerical Analysis
Doctoral student: Lena Leitenmaier, Numerisk analys, NA
Opponent: Professor Carlos Garcia-Cervera
Supervisor: Professor Olof Runborg, Numerisk analys, NA

This thesis investigates a multiscale version of the Landau-Lifshitz equation and how to solve it using the framework of Heterogeneous Multiscale Methods (HMM). The Landau-Lifshitz equation is the governing equation in micromagnetics, modeling magnetization dynamics. The considered problem involves two different scales which interact with each other: a fine scale, on which small material variations can be described, and a coarse scale for the overall magnet. Since the fast variations are much smaller than the coarse scale, the computational cost of resolving these scales in a direct numerical simulation is very high. The idea behind HMM therefore is to use a coarse macro model, involving some missing quantity, in combination with an exact micro model that provides the information necessary to complete the macro model using an averaging process, the so-called upscaling. This approach results in a computational cost that is independent of the fine scale, ε. The included papers focus on different aspects of the problem, together providing both error estimates and implementation details.

Paper I investigates homogenization of the given Landau-Lifshitz equation with a rapidly oscillating material coefficient in a periodic setting. Equations for the homogenized solution and the corresponding correctors are derived and estimates for the error introduced by homogenization are given.
Both the difference between the actual and homogenized solutions as well as corrected approximations are considered. We show convergence rates in ε up to final times T ∈ O(ε^σ), where 0 < σ ≤ 2, in H^q Sobolev norms. Here the choice of q is only restricted by the regularity of the

In Paper II, three different ways to set up HMM are introduced: the flux, field and torque model. Each model involves a different missing quantity in the HMM macro model. In a periodic setting, the errors introduced when approximating the missing quantities are analyzed. In all three models the upscaling errors are bounded similarly and can be reduced to O(ε) when choosing the involved parameters optimally.

A finite difference based implementation of the field model is studied in Paper III. Several important aspects, such as the choice of time integrator, size of the micro domain, boundary conditions for the micro problem and the influence of various parameters introduced in the upscaling process, are discussed. We moreover introduce the idea to use artificial damping in the micro problem to obtain a more efficient implementation.

Finally, a more physical setup is considered in Paper IV. A finite element macro model that is combined with a finite difference micro model is proposed. This approach is based on a variation of the flux model introduced in Paper II. A problem setting with Neumann boundary conditions and involving several terms in the so-called effective field is considered. Numerical examples show the viability of the approach.

Additionally, several geometric time integrators for the Landau-Lifshitz equation are reviewed and compared in a technical report. Their properties are investigated using numerical examples.
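The macro-micro coupling described in the abstract can be illustrated schematically. Everything below is an invented toy (plain averaging of a fast sinusoidal oscillation), meant only to show the generic HMM pattern of solving a small micro problem, upscaling by averaging, and feeding the result back to the macro model; it is not the thesis implementation:

```python
import math

def micro_model(m_macro, eps, n=256):
    """Toy micro problem: resolve the fine-scale field around one macro
    state as the macro value plus an O(eps) oscillation on scale ~eps."""
    freq = round(1.0 / eps)          # oscillations across the micro domain
    return [m_macro + eps * math.sin(2 * math.pi * k * freq / n)
            for k in range(n)]

def upscale(fine_field):
    """Averaging step: extract the missing macro quantity."""
    return sum(fine_field) / len(fine_field)

def hmm_macro_step(m_macro, eps):
    """One macro step: solve the micro problem, upscale, feed back.
    The cost depends on n, not on resolving eps over the whole domain."""
    return upscale(micro_model(m_macro, eps))

m_new = hmm_macro_step(1.0, eps=0.01)
# The upscaled quantity agrees with the macro state up to the averaging
# error, so the macro model never sees the fast oscillation directly.
```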
Time Travel Theories and the Fallacy of No Change to the Present

Andrew Knight, J.D. (aknight@alum.mit.edu)

When we assume that zero change to the past implies zero change to the present, we impose the information structure of the present universe onto its past structure – that is, we assume that everything will turn out the same, except for those events (and their chaotic interactions) that were changed in the past. Unfortunately for aspiring time travelers, this notion is false. Traveling into the past logically requires changing the present, no matter how careful one is to avoid a temporal paradox. Hopes to travel into the past without changing the present, such as by avoiding any physical interaction within the past, are unfounded. Because all serious proposals for time travel into the past inherently assume Eq. 1, which has been shown false on a priori logical grounds, their treatment as scientific proposals is pseudoscientific.
Challenges in Depth-First Search | CodingDrills

Challenges in Depth-First Search (DFS)

When it comes to graph algorithms, Depth-First Search (DFS) is a commonly used technique for traversing or searching through a graph. It explores as far as possible along each branch before backtracking. Although DFS is relatively straightforward to understand and implement, there are certain challenges that programmers frequently encounter. In this article, we will explore some of these challenges and discuss strategies to address them.

1. Infinite Loops

One of the first challenges that programmers face when implementing DFS is dealing with infinite loops. Since DFS can keep traversing, potentially infinite, branches of a graph, there is a risk of getting stuck in an endless loop. This can happen if the graph contains cycles. To overcome this issue, we need to keep track of visited nodes. Before exploring a node, we mark it as visited, and if we encounter a visited node again during traversal, we skip it to avoid infinite loops.

Here's an example of how to prevent infinite loops using a recursive DFS implementation in Python:

def dfs(graph, node, visited):
    if node not in visited:
        # mark the node as visited before exploring its neighbors
        visited.add(node)
        for neighbor in graph[node]:
            dfs(graph, neighbor, visited)

2. Multiple Paths and Branches

DFS allows us to explore multiple paths and branches, which can make it challenging to keep track of the overall traversal sequence. When dealing with graphs with multiple branches, it's important to consider the order in which nodes are visited. Depending on the requirements of your problem, you may need to prioritize certain paths over others.

To handle this challenge, we can use a priority queue or modify the traversal process to incorporate certain conditions. For example, we can sort the neighbors of each node based on specific criteria before exploring them. This approach ensures that we visit the nodes in the desired order.

3.
Connected Components

In some scenarios, a graph can have multiple disconnected components. During DFS, we may start from one component and traverse it completely but fail to explore other components. To overcome this challenge, we need to ensure that we perform DFS starting from all unvisited nodes. One way to address this is by using a loop to iterate through all nodes, invoking DFS from each unvisited node until all components of the graph have been visited. This guarantees that we cover all the connected components present in the graph.

Here's an example of how to handle connected components in DFS using a non-recursive approach in Java:

void dfs(Graph graph, int startNode, boolean[] visited) {
    Stack<Integer> stack = new Stack<>();
    stack.push(startNode);
    while (!stack.isEmpty()) {
        int node = stack.pop();
        if (!visited[node]) {
            visited[node] = true;
            System.out.print(node + " ");
            for (int neighbor : graph.getNeighbors(node)) {
                if (!visited[neighbor]) {
                    stack.push(neighbor);
                }
            }
        }
    }
}

4. Memory Consumption

DFS typically utilizes a recursion stack to keep track of the visited nodes and the path traversed. In certain cases, particularly in graphs with a large number of nodes or heavily nested branches, this can lead to excessive memory consumption. As a result, we may encounter stack overflow errors. One approach to mitigate this challenge is to implement an iterative version of DFS using an explicit stack data structure. This allows us to control the memory consumption more efficiently, as we can maintain a stack of nodes to visit without relying on the system call stack.

Depth-First Search (DFS) is a powerful graph algorithm that is widely used in various applications. Despite its simplicity, implementing DFS comes with its own set of challenges. By understanding and addressing these challenges, we can enhance the reliability and efficiency of our DFS implementations. In this article, we explored some of the common challenges in DFS and discussed strategies to overcome them.
We discussed how to prevent infinite loops, handle multiple paths and branches, address connected components, and mitigate memory consumption issues. I hope this article provided you with valuable insights into the challenges of Depth-First Search. Remember, by understanding these challenges and implementing appropriate solutions, you can harness the full potential of DFS in your programming journey. Happy coding!
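For completeness, the explicit-stack idea described in the memory consumption section can also be sketched in Python. The small adjacency-list graph below is a made-up example for illustration:

```python
def dfs_iterative(graph, start):
    """Iterative DFS using an explicit stack instead of the call stack."""
    visited = set()
    stack = [start]
    order = []
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.add(node)
            order.append(node)
            # push unvisited neighbors; they will be explored later
            for neighbor in graph[node]:
                if neighbor not in visited:
                    stack.append(neighbor)
    return order

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
print(dfs_iterative(graph, 'A'))  # ['A', 'C', 'B', 'D']
```

Because the stack here is an ordinary list, its growth is bounded by the graph size rather than by the interpreter's recursion limit.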
Value enabled by Rated | Rated Documentation

This page covers the methodology that powers the value enabled metric that Rated exposes on its landing page. It is one of the key metrics that Rated tracks and seeks to optimize for. The idea behind this metric is to track the value that Rated's API and related Rated services help create through the various integrations and services that use it as an input. Given that there are varying degrees of value that Rated adds in different integration contexts, we have come up with a subjective weighting framework to roughly reflect how much value creation we believe Rated has enabled.

Given the above framework then, the calculation of total value enabled for Rated becomes:

Total Value Enabled = Σ (v × ce)

where v is the respective total value the downstream integration powers (e.g. TVL, asset value underwritten etc) and ce is the class of enablement weight as dictated by the table above.

The table below outlines the integrations that are powering the value enabled metric for Rated, along with their subjectively ascribed class of enablement:

- Lido: Rated has helped with Lido's last 2 onboarding rounds of operators on the Ethereum set (Waves 3 and 4)
- Nexus Mutual: Rated is the source of truth and pricing partner for Nexus Mutual's slashing and downtime insurance product

The classes of enablement are:

- Product cannot function without Rated, but is owned jointly with partners
- Rated integration adds significant value, but the product can also function without it, albeit with a lot of extra effort required
- Rated integration adds value as core input, but is not essential
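As a quick illustration of the weighted sum, the sketch below computes total value enabled in Python; the integration names, values, and class-of-enablement weights are made-up placeholders, not Rated's actual figures:

```python
# hypothetical (value, class-of-enablement weight) pairs, one per integration
integrations = [
    {"name": "integration A", "value": 100.0, "ce": 1.0},
    {"name": "integration B", "value": 50.0, "ce": 0.5},
    {"name": "integration C", "value": 20.0, "ce": 0.25},
]

# total value enabled = sum of v * ce over all integrations
total_value_enabled = sum(i["value"] * i["ce"] for i in integrations)
print(total_value_enabled)  # 130.0
```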
Calculating the Average Roll-off of an Anti-Aliasing Filter

The anti-aliasing filter is an analog low-pass filter. There are two possible ways to calculate an average, estimated roll-off parameter for a low-pass filter. Both methods need two points from the magnitude frequency response curve:

• Either the -3 dB cutoff frequency point and the stopband minimal frequency point.
• Or the passband maximal frequency point and the stopband minimal frequency point.

The frequency-coordinates and magnitude-coordinates of the two points in both cases make it possible to calculate the slope of the magnitude frequency response curve in the vicinity of the Nyquist frequency.

Example calculation for the NI-9234, based on the passband and the stopband parameters:

1. According to the datasheet the passband ends at 0.45 x Fs, where Fs denotes the sampling frequency. Here the magnitude is characterized by the flatness parameter, i.e. -0.04 dB...+0.04 dB.
2. The stopband starts at 0.55 x Fs, and the rejection here is -100 dB.
3. The frequency change between the two points is lg(0.55/0.45) = 0.0872 Decade, which is 0.290 Octave.
4. The magnitude change can be estimated in this case to be -100 dB.
5. Finally we get the average roll-off as the quotient of the magnitude change and the frequency change: Roll-off = -1147 dB/Decade = -345 dB/Octave.
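The five steps above can be reproduced in a few lines of Python, using the NI-9234 datasheet figures quoted in the example:

```python
import math

# passband and stopband edges as fractions of the sampling frequency Fs
f_pass = 0.45  # passband ends here, magnitude ~0 dB
f_stop = 0.55  # stopband starts here, magnitude -100 dB
magnitude_change_db = -100.0

# frequency change between the two points
decades = math.log10(f_stop / f_pass)  # ~0.0872 decades
octaves = math.log2(f_stop / f_pass)   # ~0.290 octaves

# average roll-off = magnitude change / frequency change
rolloff_per_decade = magnitude_change_db / decades  # ~-1147 dB/decade
rolloff_per_octave = magnitude_change_db / octaves  # ~-345 dB/octave
print(round(rolloff_per_decade), round(rolloff_per_octave))  # -1147 -345
```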
A Subtle Bias that Could Impact Your Decision Trees and Random Forests | by Gyorgy Kovacs | Dec, 2023

Decision trees and random forests are widely adopted classification and regression techniques in machine learning. Decision trees are favored for their interpretability, while random forests stand out as highly competitive and general-purpose state-of-the-art techniques. Commonly used CART implementations, such as those in the Python package sklearn and the R packages tree and caret, assume that all features are continuous. Despite this silent assumption of continuity, both techniques are routinely applied to datasets with diverse feature types. In a recent paper, we investigated the practical implications of violating the assumption of continuity and found that it leads to a bias. Importantly, these assumptions are almost always violated in practice. In this article, we present and discuss our findings, illustrate and explain the background, and propose some simple techniques to mitigate the bias.

Let's jump into it with an example using the CPU performance dataset from the UCI repository. We'll import it through the common-datasets package to simplify the preprocessing and bypass the need for feature encoding and missing data imputation.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RepeatedKFold, cross_validate
from scipy.stats import wilcoxon
from common_datasets.regression import load_cpu_performance

dataset = load_cpu_performance()
X = dataset['data']
y = dataset['target']

# a cross-validation wrapper to simplify the code
def cv_rf(X, y, regressor=RandomForestRegressor):
    return cross_validate(
        estimator=regressor(max_depth=11),
        X=X,
        y=y,
        cv=RepeatedKFold(n_splits=5, n_repeats=400, random_state=5),
        scoring='r2'
    )['test_score']

r2_original = cv_rf(X, y)
r2_mirrored = cv_rf(-X, y)

In the experiment, we evaluate the performance of the random forest regressor on both the original data and its mirrored version (each feature multiplied by -1).
The hyperparameter for the regressor (max_depth=11) was chosen in a dedicated model selection step, maximizing the r2 score across a reasonable range of depths. The cross-validation employed for evaluation is significantly more comprehensive than what is typically used in machine learning, encompassing a total of 2000 folds.

print(f'original r2: {np.mean(r2_original):.4f}')
print(f'mirrored r2: {np.mean(r2_mirrored):.4f}')
print(f'p-value: {wilcoxon(r2_original, r2_mirrored, zero_method="zsplit").pvalue:.4e}')

# original r2: 0.8611
# mirrored r2: 0.8595
# p-value: 6.2667e-04

In terms of r2 scores, we observe a deterioration of 0.2 percentage points when the attributes are mirrored. Furthermore, the difference is statistically significant at conventional levels (p << 0.01).

The results are somewhat surprising and counter-intuitive. Machine learning techniques are typically invariant to certain types of transformations. For example, k Nearest Neighbors is invariant to any orthogonal transformation (like rotation), and linear-ish techniques are typically invariant to the scaling of attributes. Since the space partitioning in decision trees is axis-aligned, it cannot be expected to be invariant to rotations. However, it is invariant to scaling: applying any positive multiplier to any feature will lead to the exact same tree. Consequently, there must be something going on with the mirroring of the axes. An intriguing question arises: what if mirroring the axes leads to better results? Should we consider another degree of freedom (multiplying by -1) in model selection beyond determining the optimal depth? Well, in the rest of the post we figure out what is going on here!

Now, let's briefly review some important characteristics of building and making inferences with binary Classification And Regression Trees (CART), which are used by most implementations.
A notable difference compared to other tree induction techniques like ID3 and C4.5 is that CART trees do not treat categorical features in any special way. CART trees assume that all features are continuous. Given a training set (classification or regression), decision trees are induced by recursively partitioning the training set alongside the feature space, using conditions like feature < threshold or, alternatively, feature <= threshold. The choice of conditioning is usually an intrinsic property of the implementations. For example, the Python package sklearn uses conditions of the form feature <= threshold, while the R package tree uses feature < threshold. Note that these conditions are aligned with the presumption of all features being continuous. Nevertheless, the presumption of continuous features is not a limitation: integer features, category features through some encoding, or binary features can still be fed into these trees.

Let's examine an example tree in a hypothetical loan approval scenario (a binary classification problem), based on three attributes:

• graduated (binary): 0 if the applicant did not graduate, 1 if the applicant graduated;
• income (float): the yearly gross income of the applicant;
• dependents (int): the number of dependents;

and the target variable is binary: whether the applicant defaults (1) or pays back (0).

A decision tree built for a hypothetical loan approval scenario

The structure of the tree, as well as the conditions in the nodes (which threshold on which feature), are inferred from the training data. For more details about tree induction, refer to decision tree learning on Wikipedia. Given a tree like this, inference for a new record is conducted by starting from the root node, recursively applying the conditions, and routing the record to the branch corresponding to the output of the condition. When a leaf node is encountered, the label (or eventually the distribution) recorded in the leaf node is returned as the prediction.
A finite set of training records cannot imply a unique partitioning of the feature space. For example, the tree in the figure above could be induced from data where there is no record with graduation = 0 and income in the range ]60k, 80k[. The tree induction method identifies that a split should be made between the income values 60k and 80k. In the absence of further information, the midpoint of the interval (70k) is used as the threshold. Generally, it could be 65k or 85k as well. Using the midpoints of the unlabeled intervals is a common practice and a reasonable choice: in line with the assumption of continuous features, 50% of the unlabeled interval is assigned to the left and 50% to the right branches.

Due to the use of midpoints as thresholds, the tree induction is completely independent of the choice of the conditioning operator: using both <= and < leads to the same tree structure, with the same thresholds, except for the conditioning operator. However, inference does depend on the conditioning operator. In the example, if a record representing an applicant with a 70k income is to be inferred, then in the depicted setup, it will be routed to the left branch. However, using the operator <, it will be routed to the right branch. With truly continuous features, there is a negligible chance for a record with exactly 70k income to be inferred. However, in reality, the income might be quoted in units of 1k, 5k, or eventually 10k, which makes it probable that the choice of the conditioning operator has a notable impact on the predictions.

Why do we talk about the conditioning when the problem we observed is about the mirroring of features? The two are basically the same. A condition "feature < threshold" is equivalent to the condition "-feature <= -threshold" in the sense that they lead to the same, but mirrored, partitioning of the real axis.
Namely, in both cases, if the feature value equals the threshold, that value is in the same partition where the feature values greater than the threshold are. For example, compare the two trees below: the one we used for illustration earlier, except all conditioning operators are changed to <, and another tree where the operator is kept, but the tree is mirrored. One can readily see that for any record they lead to the same predictions.

The previous tree with the conditioning operator <

The tree built on the mirrored data (still using the conditioning operator ≤)

Since tree induction is independent of the choice of conditioning, building a tree on mirrored data and then predicting mirrored vectors is equivalent to using the non-default conditioning operator (<) for inference on non-mirrored records. When the trees of the forest were fitted to the mirrored data, even though sklearn uses the '<=' operator for conditioning, it worked as if it used the '<' operator. Consequently, the performance deterioration we discovered with mirroring is due to thresholds coinciding with feature values, leading to different predictions during the evaluation of the test sets. For the sake of completeness, we note that the randomization in certain steps of tree induction might lead to slightly different trees when fitted to the mirrored data. However, these differences smooth out in random forests, especially in 2k folds of evaluation.

The observed performance deterioration is a consequence of the systematic effect of thresholds coinciding with feature values. Primarily, two circumstances increase the likelihood of the phenomenon:

• When a feature domain contains highly probable equidistant values: This sets the stage for a threshold (being the mid-point of two observations) to coincide with a feature value with high probability.
• Relatively deep trees are built: Generally, as a tree gets deeper, the training data becomes sparser at the nodes.
When certain observations are absent at greater depths, thresholds might fall on those values.

Interestingly, features taking a handful of equidistant values are very common in numerous domains. For example:

• The age feature in medical datasets.
• Rounded decimals (values observed to, say, the 2nd digit form a lattice).
• Monetary figures quoted in units of millions or billions in financial datasets.

Additionally, almost 97% of features in the toy regression and classification datasets in sklearn.datasets are of this kind. Therefore, it is not an over-exaggeration to say that features taking equidistant values with high probability are present everywhere. Consequently, as a rule of thumb, the deeper trees or forests one builds, the more likely it becomes that thresholds interfere with feature values.

We have seen that the two conditioning operators (the non-default one imitated by the mirroring of data) can lead to different prediction results with statistical significance. The two predictions cannot be unbiased at the same time. Therefore, we consider the use of either form of conditioning as introducing a bias when thresholds coincide with feature values.

Alternatively, it is tempting to consider one form of conditioning to be luckily aligned with the data, improving the performance. Thus, model selection could be used to select the most suitable form of conditioning (or whether the data should be mirrored). However, in a particular model selection scenario, using some k-fold cross-validation scheme, we can only test which operator is typically favorable if, say, 20% of the data is removed (5-fold) from training and then used for evaluation. When a model is trained on all data, other thresholds might interfere with feature values, and we have no information on which conditioning would improve the performance.

A natural way to eliminate the bias is to integrate out the effect of the choice of conditioning operators.
This involves carrying out predictions with both operators and averaging the results. In practice, with random forests, exploiting the equivalence of data mirroring and changing the conditioning operator, this can be approximated for basically no cost. Instead of using a forest of N_e estimators, one can build two forests of half the size, fit one to the original data, the other to the mirrored data, and take the average of the results. Note that this approach is applicable with any random forest implementation, and has only marginal additional cost (like multiplying the data by -1 and averaging the results). For example, we implement this strategy in Python below, aiming to integrate out the bias from the sklearn random forest.

from sklearn.base import RegressorMixin

class UnbiasedRandomForestRegressor(RegressorMixin):
    def __init__(self, **kwargs):
        # determining the number of estimators used in the
        # two subforests (with the same overall number of trees)
        self.n_estimators = kwargs.get('n_estimators', 100)
        n_leq = int(self.n_estimators / 2)
        n_l = self.n_estimators - n_leq

        # instantiating the subforests
        self.rf_leq = RandomForestRegressor(**(kwargs | {'n_estimators': n_leq}))
        self.rf_l = RandomForestRegressor(**(kwargs | {'n_estimators': n_l}))

    def fit(self, X, y, sample_weight=None):
        # fitting both subforests
        self.rf_leq.fit(X, y, sample_weight)
        self.rf_l.fit(-X, y, sample_weight)
        return self

    def predict(self, X):
        # taking the average of the predictions
        return np.mean([self.rf_leq.predict(X), self.rf_l.predict(-X)], axis=0)

    def get_params(self, deep=False):
        # returning the parameters
        return self.rf_leq.get_params(deep) | {'n_estimators': self.n_estimators}

Next, we can execute the same experiments as before, using the exact same folds:

r2_unbiased = cv_rf(X, y, UnbiasedRandomForestRegressor)

Let's compare the results!
print(f'original r2: {np.mean(r2_original):.4f}')
print(f'mirrored r2: {np.mean(r2_mirrored):.4f}')
print(f'unbiased r2: {np.mean(r2_unbiased):.4f}')

# original r2: 0.8611
# mirrored r2: 0.8595
# unbiased r2: 0.8608

According to expectations, the r2 score of the unbiased forest falls between the scores achieved by the original forest with and without mirroring the data. It might seem that eliminating the bias is detrimental to the performance; however, we emphasize again that once the forest is fit with all data, the relations might be reversed, and the original model might lead to worse predictions than the mirrored model. Eliminating the bias by integrating out the dependence on the conditioning operator eliminates the risk of deteriorated performance due to relying on the default conditioning operator.

The presence of a bias related to the interaction of the choice of conditioning and features taking equidistant values has been established and demonstrated. Given the common occurrence of features of this kind, the bias is likely to be present in sufficiently deep decision trees and random forests. The potentially detrimental effect can be eliminated by averaging the predictions carried out by the two conditioning operators. Interestingly, in the case of random forests, this can be done at basically no cost. In the example we used, the improvement reached the level of 0.1–0.2 percentage points of r2 scores. Finally, we emphasize that the results generalize to classification problems and single decision trees as well (see preprint).
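As a footnote, the threshold-coincidence effect discussed in this post is easy to reproduce on a minimal example. The two-point dataset below is invented purely for illustration, assuming sklearn's DecisionTreeRegressor and its midpoint-threshold behavior described above:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# two training points; the learned split threshold is their midpoint, 1.5
X = np.array([[1.0], [2.0]])
y = np.array([0.0, 1.0])

tree = DecisionTreeRegressor(max_depth=1).fit(X, y)
mirrored = DecisionTreeRegressor(max_depth=1).fit(-X, y)

# a record sitting exactly on the threshold is routed differently:
# sklearn's 'value <= threshold' sends 1.5 to the left leaf (prediction 0.0),
# while the tree fitted on mirrored data behaves like 'value < threshold'
# and routes the same record to the other leaf (prediction 1.0)
print(tree.predict([[1.5]])[0])       # 0.0
print(mirrored.predict([[-1.5]])[0])  # 1.0
```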
Botw Recipe Calculator - BS Calculator

Enter your ingredients into the calculator to determine the effects of your recipes in The Legend of Zelda: Breath of the Wild.

Recipe Calculation Formula

The following formula is used to calculate the effects of your recipes.

Recipe Effect = Combine(Ingredient 1, Ingredient 2, Ingredient 3)

• Recipe Effect is the outcome of combining ingredients.
• Ingredients are the items used in the recipe.

To calculate the recipe effect, combine the ingredients and observe the result.

What is Recipe Calculation?

Recipe calculation refers to the process of determining the effects of various ingredients when combined in The Legend of Zelda: Breath of the Wild. This involves understanding the properties of each ingredient and how they interact with one another to create beneficial effects.

How to Calculate Recipe Effects?

The following steps outline how to calculate the effects of your recipes using the given formula.

1. First, gather the ingredients you wish to combine.
2. Next, enter the ingredients into the calculator.
3. Use the formula from above to determine the recipe effect.
4. Finally, check the result and adjust your ingredients as necessary for optimal effects.

Example Problem:

Use the following ingredients as an example problem to test your knowledge.

Ingredient 1 = Hyrule Herb
Ingredient 2 = Rushroom
Ingredient 3 = Endura Carrot

1. What is a recipe effect? A recipe effect is the outcome of combining specific ingredients, which can provide various benefits such as health restoration or temporary buffs.

2. How is the recipe effect different from individual ingredient effects? The recipe effect is the combined result of all ingredients, which may differ from the effects of each ingredient when used alone.

3. How often should I use the recipe calculator? It's helpful to use the recipe calculator whenever you want to experiment with new combinations or optimize your cooking strategy in the game.

4.
Can this calculator be used for different recipes? Yes, you can adjust the ingredients to match any recipe you want to calculate the effects for. 5. Is the calculator accurate? The calculator provides an estimate of your recipe effects based on the inputs provided. For exact results, it’s best to refer to in-game information or guides.
Survival probability and field theory in systems with absorbing states

Physical Review E - Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics

An important quantity in the analysis of systems with absorbing states is the survival probability P(t), the probability that an initial localized seed of particles has not completely disappeared after time t. At the transition into the absorbing phase, this probability scales for large t like P(t) ~ t^(-δ). It is not at all obvious how to compute P(t) in continuous field theories, where P(t) is strictly unity for all finite t. We propose here an interpretation for P(t) in field theory and devise a practical method to determine it analytically. The method is applied to field theories representing absorbing-state systems in several distinct universality classes. Scaling relations are systematically derived and the known exact value of the exponent δ is obtained for the voter model universality class. © 1997 The American Physical Society.
Percentages, Part 2 - David The Maths Tutor

Percentages, Part 2

So how do you convert percentages to fractions and decimals and vice versa? This post will show examples of each.

1. Convert a percentage to a fraction: This one is easy as, if you remember, a percentage is already a fraction where the numerator is displayed and the denominator is 100. So you just create the fraction and simplify it (see my posts on simplifying fractions). For example, 45% = 45/100 = 9/20.

2. Convert a percentage to a decimal: This one is just a matter of moving the decimal point two places to the left. Keep in mind that the decimal point will not usually show at the end of an integer percentage, but you can assume it to be at the end of the number:

37% = 37.% = 0.37
18.5% = 0.185
112% = 1.12
0.15% = 0.0015

Any 0's at the end of the decimal can be left off: 40% = 0.40 = 0.4

3. Convert a decimal to a percentage: This is just the opposite of the above: you just move the decimal point two places to the right, then add the % symbol:

0.25 = 25% (if an integer results, you can leave the decimal point off)
0.2786 = 27.86%
0.002 = 0.2%
2.345 = 234.5%

4. Convert a fraction to a percentage: Here you multiply by 100/1, simplify, then multiply numerators together and denominators together. It is advisable to simplify before multiplying. For example, 3/4 = (3 × 100)/(4 × 1) % = 300/4 % = 75%. Sometimes though, not as much cancels and you will need to do some division in the end (long or short – see my post on long division). For example, 5/8 = 500/8 % = 62.5%.

In my next post, I will show how to do some of the more common problems using percentages.
Teaching the Times Tables

I've been tutoring maths for a number of years now. I've tutored boys and girls. I've tutored individuals and small groups. I've tutored children of all ages from very different social backgrounds. But they have all had one thing in common: none of them knew their times tables, and this was really hindering their progress in maths. Of course I told them that they needed to know their tables off by heart, but their parents and teachers had already told them this. If it was that easy they would have learnt them already. So this year I have made it my mission to get all the children I tutor to learn all of their times tables.

To start with I created a desire to learn them. I made a colourful chart to show progress, and offered rewards of stickers for each of the tables that they learnt. But not just any old stickers – exciting, shiny ones that made their eyes light up when they saw them. The boys especially liked these football ones from Superstickers. Now I had children who were desperate to learn their times tables. What next?

We took the tables one at a time and started by chanting them. When we had chanted them forwards a few times, we did them backwards, then odd numbers only and even numbers only to get used to the idea of knowing them out of order. After that it was a case of practise, practise, practise. The trick was finding enough different ways to practise the same thing so that the children didn't get bored with it. I made some sets of cards with the questions and answers so that we could play pelmanism, and these proved very popular. I encouraged the children to read aloud the question as they turned each card over, and to work out what answer they needed to match before turning over the next card. We also used the same cards to play snap, and a race against the clock game to match up all of the question cards with their answers – trying to be faster each time.
Although the children loved all of these games, I was very aware that I couldn't rely on the same sets of cards forever without the children thinking "Oh no – not those again!" and losing motivation. I looked around for some new ideas and found some lovely products on Sue Kerrigan's let me learn website.

The turn table cards were recommended to me by the trainer on a dyslexia course I attended. They are designed for multi-sensory learning and are really good fun to play with. On one side of the card they have a question, eg 2×3, and a picture of an array to show children what 2×3 looks like and to give them a visual clue. On the other side is the answer. The children say the question and answer aloud (hearing their own voice) and then turn over the card to see if they are correct. There is a video of how to use them here.

I usually use them with one child at a time, focusing on one set of tables at a time, using them as shown in the video, and then doing races against the clock to beat their own personal time. However I have also used them with a group of children each working on a different set of tables.

One group of girls I worked with recently, who were all working on the same set of tables, made up another game to play with these cards which they found great fun: all of the cards were put answer-side-up in the middle of the table. I called out a question and they had to grab the card they thought showed the correct answer. They turned the card over to see if they were right, and if they were, they repeated the question and answer and kept the card. If they were wrong they replaced the card. The winner was the girl with the most cards when they had all been grabbed.

All of the children I have used these cards with have really enjoyed it, and I'm sure there are many more games that can be invented using them.

I found the maths wrap while I was browsing the site, and just thought I would give it a go.
It’s used for learning tables “in order”, but is great for kinaesthetic learners. Across the top is a strip with numbers 1 to 12. At the bottom is space to put a strip of one of the tables, each of which contains all the answers but jumbled. You have to chant the tables aloud, hunting for the correct answer along the bottom strip and then wrapping the string around the correct number each time. When you have finished you can turn it over to look at the pattern marked on the back. If the children have got all the answers correct, the pattern made by the string will match the pattern printed on the back of the card. When I bought it, I thought it might be one just for the girls, but actually the boys have enjoyed using it just as much. One of my Year 5 boys said “Every child should have one of these. They’re really cool!” I even had texts from two mums, because their sons had been talking so much about how much fun it was that they wanted to know where I got them from so that they could get them as stocking fillers. As we progressed through the tables we looked at how few they had left. By using counters to demonstrate that for example 2×3 was the same as 3×2, we were able to colour code each new set of tables to show which ones they already knew and which ones were still to be learnt. They learned the easy ones (2x, 5x and 10x) first, which made the chart look less bare, and earned them some shiny stickers pretty quickly. Then they did 4x (easy because it was double 2s). 3x came next (tricky but the colour coding showed that they already knew 2, 4, 5 and 10 x3, so there where only half of them still to learn). Then 6x was easy because it was double 3s. By the time we came to the tricky ones like 7x, the progress chart was looking quite full, and the colour coding showed that they already knew 2, 3, 4, 5, 6 and 10x 7, so all that was left was 7×7, 8×7 and 9×7. Suddenly the sevens didn’t seem so scary and the motivation continued. 
Of course it took a long time, although considering the fact that I only see these children once a week it took less time than I expected. In September two of my boys didn't know any of their times tables, not even 2x or 10x. They now know all of them. Not only do they know them off by heart, but they are able to apply them in all areas of maths, for example working with equivalent fractions. They immediately recognise numbers that are in their times tables, which means their skills in division have improved. Their mental arithmetic skills have improved because they can multiply 6 by 7 straight away, instead of having to count up 7 lots of 6 on their fingers, so they have more time to think about what the questions are asking them to do with the information. They have both moved up a maths group at school and their confidence is higher. One of them said to me recently that he used to hate maths, but that he really loves it now. And that's why I really love my job!

For maths and English tutoring in the north Birmingham, Sandwell and Walsall areas, visit www.sjbteaching.com. For links to other interesting education related articles, come and Like my Facebook page.

Related posts: Teaching Number Bonds; A Multisensory Approach to Reading
how to write the quadratic and logarithmic model in GAMS

I am modelling a consumer model, but I don't know how to write this type of equation. Kindly help me out with the equation writing. Your valuable suggestions, comments and help are welcomed. I am attaching the .gms file (code) and JPG file (equations) for reference. Kindly help me out.

You have functions U1/2/3(x) that have two different algebra parts for different argument ranges. You are already in the NLP domain (because of log), so you can just use the GAMS function ifthen (https://www.gams.com/latest/docs/UG_Parameters.html#INDEX_functions_22_ifThen) to model this. For example, U2(x) = ifthen(x>0, beta*log(omega*x+1), 0). Please note that GAMS evaluates all three parts of the ifthen function independent of the condition in the first argument. That means, if you write it like I just did, you will get function evaluation errors with x < -1/omega because then GAMS tries to apply the log function to a negative number. You can avoid this, and not change the overall outcome of the expression, by e.g. writing U1 as ifthen(x>0, beta*log(omega*abs(x)+1), 0). Hope this helps,

Thank you so much for the reply. I can now evaluate U2 and U3, but I am getting an error when writing U1. Would you kindly review it? I am attaching the .gms file for review; also, there are two parts of U1 with 3 bounds. Kindly help me out. Customer profit maximization.gdx (2.11 KB)

You sent a GDX file. Nothing I can do with this. -Michael

My mistake, unfortunately I uploaded the .gdx file. I have now attached the .gms file and my problem formulation (JPG file) for reference. Kindly review it. Also, where do I have to write this gaama (sqr(omega)/2*alpha) part? - Line 200

GAMS requires operators between all parts of the expression. In papers, the '*' sign for multiplication is often left out.
You need that in GAMS (and many other languages):

parameter U1(t);
U1(t) = ifthen(Q1(t) <= omega/alpha, gaama*(omega*Q1(t) - alpha/2*sqr(Q1(t))), omega/alpha);

In Equation 2, there are four terms. Where do I write [gaama*(omega**2)/(2*alpha)]? Where do I have to write this term in my code?

I don't understand the question. -Michael

Where do I write the encircled part of the equation in the code? I have attached the JPG file.

In the "else" or false_expression part of the function: ifthen(expression, true_expression, false_expression). -Michael
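For readers less familiar with ifthen, the piecewise logic and the abs() guard can be sketched in plain Python (an illustration only, not GAMS code; beta and omega below are made-up values):

```python
import math

# Hypothetical parameter values, chosen only for illustration.
beta, omega = 2.0, 0.5

def u2_unsafe(x):
    # Direct transcription of ifthen(x > 0, beta*log(omega*x + 1), 0).
    # GAMS evaluates BOTH branches, so for x <= -1/omega the log
    # argument would be non-positive and raise a domain error there.
    return beta * math.log(omega * x + 1) if x > 0 else 0.0

def u2_guarded(x):
    # Guarded form: abs() keeps the log argument positive for every x,
    # while the returned value is unchanged, because the condition
    # still selects 0 for x <= 0.
    return beta * math.log(omega * abs(x) + 1) if x > 0 else 0.0

print(u2_guarded(2.0))   # same value as u2_unsafe(2.0)
print(u2_guarded(-5.0))  # 0.0, and abs() would protect the log branch
```

Python's conditional expression short-circuits, so only GAMS exhibits the "all branches evaluated" behavior the reply warns about; the guard is harmless either way.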
What are the different types of turbulence models used in CFD? | SolidWorks Assignment Help

What are the different types of turbulence models used in CFD?

A conventional CFD describes the interaction between the fluid and particles using a weak gravitational potential that varies as a function of mass and frequency. The first type of turbulence model is capable of describing turbulent flows but is not fully characterized in terms of its effects on the size of particles, mass and velocity coefficients, and Reynolds number, but in terms of turbulence dynamics. But are there specific models that have been able to study turbulence dynamics below the turbulent molecular scale (below the conical scale)? The first type of turbulence model is capable of describing turbulent hems without the turbulence effects on turbulence velocity or turbulence linear order of a fluid flowing across a convex body. A second type of turbulence model is capable of describing turbulence dynamics below the conical scale but below the interquispersed cone-type turbulence (concers are two-dimensional cones) with the hems flow surface as the base layer. This class includes viscoelastic structures, sedimentation, turbulence, and flow as modes at the cell boundary.
The turbulent effects in the middle of the conical (hydrodynamic) region of the conical and the interquispersed cone are believed to appear as the turbulence velocity, damping coefficient and drag coefficient. A widely used method of identifying turbulence is by measuring the total number of particles in the bottom layer of the conical cone. This can be measured in a fluid simulation of the vertical lift of a fluid droplet as its current moving in a concave or convex body is measured. If the model uses the kinetic energy contribution from turbulence and is that of a traditional CFD, or does not use it, what would it give to turbulence dynamics? If the simulation was in the liquid before the fluid was flowing out of the conical/interquispersed cone, then that would describe turbulence behavior.

What are the different types of turbulence models used in CFD?

Non-resonant turbulence models are those where only the spectrometer was able to take the spectra so that they are effectively non-resonant, or near-resonance, but rather there is no full description of the interactions between the two. For example, we see here that we have a relatively broad energy resolution in our calculation. In our examples, the data used to generate our model spectra is as wide as the full spectrum data, but only for typical large or medium-scale objects.
Thus the total energy of the generated spectra cannot be considered: in the linear model, the broad lines (which are important for the HEGs at the high-temperature limit) are taken as a purely spectral band rather than as a spectra. In other words, we can only include regions with widths on the order of a few wavelengths. The new spectra are usually generated by non-resonant or spectropolar or multiplexing models. Our non-resonant TIP3D models tend to include the large-scale outflows in our observations using large-scale chemical overdensities. The models which have the largest overlap arise from the spectropolar and the multiplex models which have the largest overlap. In order to produce our models, we are interested in all of our data, whereas the spectropolar models are the chosen models which do not include any of the effects of absorption. We assume that we can generate the models from an $\alpha$ background sky with spectral resolution better than ever. However, from the data evolution here, we strongly disagree whether our data could be accurately simulated in a way that mimics the observed geometry. As expected, a number of features show a decreasing pattern in the low-temperature limit. To summarize, the models produced by the spectometric models are strongly different of the predictions of model spectra in the infrared, and a clear and profound difference between their predicted values and the observed structures. Even the large-scale atmosphere models show nearly zero intensities at low *x* or relatively late temperatures which are primarily present in hydrogen cooling. Using our input data we find that temperatures above 10$^9$K are temperatures around 10$^4$K, a much larger value than the C$_\mathrm{i}$/H$_\mathrm{o}$ ratio of 1.2, and temperatures between 15$^\circ$C and 40$^\circ$C.
These observations imply that there are many mechanisms by which we may produce measurements in particular environments at low atmospheric densities which agree to the models (Figure \[A1\]). (Figure \[E1\]). The significant differences between our predicted and observed data can be summarized by analyzing the flux variations of the observed structures. Starting from the flux measurements (Figure …)

What are the different types of turbulence models used in CFD?

The DSTS-2(NCEI) models represent more recent research studies such as the ones proposed in CFD 7.2.A and will be described herein. The DSTS-2(NCEI) models are used in the CFD for modeling the interaction with the atmosphere. 6 CFD Model 9.18 The second class of CFD is discussed in more detail in CFD 7.2.A, where appropriate, the authors explained the underlying features (quenching parameter, plasma viscosity, diffusion dissipation etc.). This class is also discussed in CFD 7.2.B. In 8.11, the authors presented a physical model, which model the response of a solid to a pressure field via a set of DSTS-2(NCEI) models. In this model they considered the effects of pressures in the medium on interaction between wavefronts and a set of diffusion coefficients, which were calculated by the standard least squares method. A discussion of diffusion models is presented in 7.3.A. It is shown that in this case diffusion is effective, i.e., the time scale corresponding to its phase transition is d who can be divided into small detrends. On that time scale d is defined as an excess of time which can be increased by increasing the plasma viscosity (see the third paragraph of 8.10). They concluded that for this change in the viscosity tensor, which can be calculated by the least squares method, the effect of the difference between the wavefront wavenumbers are neglected. 7 CFD 7.2.B: Design of a model for nonlinear interface pressure effects. 8.3) Proposed Method.
How to design a model for a nonlinear interface pressure effect that leads to successful mixing, the authors discuss in this article. The authors went over the effects of time dependent parameters, except for the duration of the time constant of the model. The most popular model system is described in CFD 7.3.B. 8 CFD 8.7.1: It is important to establish sufficient conditions for a satisfactory mixing behavior, then the model description from which the model can be derived is also used. If an assumption that the pressure drop is essentially linear and is maintained at a slightly slowly varying value of the pressure, does not hold, then the result is a drop that it is difficult to investigate. The model developed from the previous sections shows that the transition from a homogeneous to an inhomogeneous pressure is not at least a few orders of magnitude higher than the transition from a miscible to a homogeneously elastic transition. 8 CFD 8.7.2: Reflection at the interface between the medium and the atmosphere. In this paper there is discussed the relationship between the mixing flow front, the profile of the pressure or current, and the value
Department of Mathematics, University of California San Diego
Math 278C - Mathematics of Information, Data, and Signals Seminar

Weilin Li (New York University)

Super-resolution, subspace methods, and Fourier matrices

This talk is concerned with the inverse problem of recovering a discrete measure on the torus given a finite number of its noisy Fourier coefficients. We focus on the diffraction-limited regime where at least two atoms are closer together than the Rayleigh length. We show that the fundamental limits of this problem and the stability of subspace (algebraic) methods, such as ESPRIT and MUSIC, are closely connected to the minimum singular value of non-harmonic Fourier matrices. We provide novel bounds for the latter in the case where the atoms are located in clumps. We also provide an analogous theory for a statistical model, where the measure is time-dependent and Fourier measurements are collected at various times. Joint work with Wenjing Liao, Albert Fannjiang, Zengying Zhu, and Weiguo Gao.

November 4, 2021, 11:30 AM

Zoom link: https://msu.zoom.us/j/96421373881 (the passcode is the first prime number $>$ 100)
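As a toy numerical illustration of the central object in the abstract, the minimum singular value of a non-harmonic Fourier matrix, the snippet below (a sketch for illustration, not material from the talk) shows how clumped atom locations degrade conditioning; the frequencies and matrix size are arbitrary:

```python
import numpy as np

def fourier_matrix(freqs, m):
    """Non-harmonic Fourier matrix with entries e^{2*pi*i*k*x_j}, k = 0..m-1."""
    k = np.arange(m).reshape(-1, 1)       # Fourier coefficient indices
    x = np.asarray(freqs).reshape(1, -1)  # atom locations on [0, 1)
    return np.exp(2j * np.pi * k * x)

m = 64                                    # number of Fourier coefficients
def sigma_min(X):
    # Singular values are returned in descending order; take the last.
    return np.linalg.svd(X, compute_uv=False)[-1]

well_separated = sigma_min(fourier_matrix([0.1, 0.4, 0.7], m))
clumped = sigma_min(fourier_matrix([0.1, 0.104, 0.7], m))  # gap below 1/m

print(well_separated, clumped)
# The clumped configuration is far worse conditioned, which is the
# regime (atoms closer than the Rayleigh length ~1/m) the talk studies.
```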
Heat Shield Design Guides

November 17, 2010

Editors Note: A few weeks ago Ed Bonnema had a conversation regarding cryomodule design with a potential customer. The conversation roamed over a range of topics, from vacuum vessel design to balancing good vacuum design practice vs. reducing cost. One topic of much interest was the design and cost trade-offs between copper and aluminum shields. The customer seemed satisfied with my comments but I felt my answers addressed general concepts without getting into the specifics deep enough. So, Mike Seely and I discussed authoring a newsletter article on the subject. This is the result. Chris and others, I hope this gives you more specific guidelines for decision-making.

I. Heat Shields at 80K

We present some design guides for heat shields. Heat shields are often cylindrical in shape and are cooled either by coils or by attachment to a cold surface at one end. Calculating the temperature profile along the shield then reduces to a problem of heat conduction in one dimension. We start by considering a heat shield of thickness t with a regular array of cooling coils attached at a distance interval d. This could be a cylindrical heat shield with cooling lines run along its length or with cooling lines run around its perimeter. We assume that these cooling coils are attached continuously along their length and fix the shield temperature at 80K along the point of attachment.

If the surface is well insulated and in a good vacuum, then the heat flux from a 300K surface is typically 1.0 W/m². With a uniform heat flux P on the surface, the temperature will reach a maximum value halfway between the cooling coils of ΔT_peak = Pd²/(8κt), where κ is the thermal conductivity of the shield material; the peak temperature thus scales as d²/t. The results for a number of common shield materials are shown in Figure 2.
The thermal conductivity for RRR=300 copper has been used here; however, in this temperature range the thermal conductivity of copper is not highly sensitive to composition and temper. For example, if we have a copper heat shield 0.001 m (1 mm) thick and coils spaced every 1 m, then d²/t = 1000 m and the peak temperature rise between the cooling coils is only about 0.2 K. The same heat shield would have a temperature rise of about 1.3 K if it were fabricated from 6061-T6 aluminum and about 13 K if it were fabricated from stainless steel.

If the surfaces were uninsulated, then the radiated heat flux from a 300 K surface would be approximately 90 W/m², assuming a moderately low value of ε ~ 0.2 for the emissivity. This has the effect of pushing the curves to higher ΔT. In both Figures 2 and 3 the value of κ at the average temperature is used to calculate ΔT. The curves are accurate to a few percent at the higher values of ΔT.

Frequently a heat shield design problem will specify a maximum value of ΔT and it is up to the designer to balance issues of cost and ease of fabrication. For example, suppose that we have an insulated 80 K cylindrical heat shield and the temperature rise is to be no more than 10 K. From Figure 2 we find that a 304 stainless steel heat shield would require d²/t = 650 m, which could be achieved with a 1 mm thick shield with a coil spacing of 80 cm. To obtain the same ΔT with 6061 Al we would require d²/t = 7000 m, which for a 1 mm thick shield would require a coil spacing of 2.6 m. Using 1100 Al for this heat shield would require d²/t = 21000 m, which for a 1 mm thick shield would require a coil spacing of 4.6 m. Finally, using copper would require d²/t = 41000 m, which would allow a coil spacing of 6.4 m for a 1 mm thick shield.

The peak temperature ΔT_peak scales as d² because increasing d has two effects: it increases the distance over which heat must be conducted, and it increases the area per coil and thus the radiated power per coil.
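These example numbers can be reproduced from the standard one-dimensional conduction result for a uniformly heated strip of thickness t fixed at the coil temperature along edges a distance d apart, ΔT_peak = Pd²/(8κt), with the shield average sitting 2/3 of ΔT_peak above the coils (the mean of a parabolic profile). The conductivity values below are rough handbook figures near 80 K assumed for illustration, not data from this article:

```python
# One-dimensional estimate for a thin shield with uniform absorbed flux P,
# coil spacing d and thickness t:  dT_peak = P * d**2 / (8 * kappa * t).
def dT_peak(P, d, t, kappa):
    return P * d**2 / (8.0 * kappa * t)

# Rough thermal conductivities near 80 K in W/(m K) (assumed values).
kappa_80K = {"copper (RRR~300)": 550.0, "6061-T6 Al": 95.0, "304 stainless": 8.0}

P, d, t = 1.0, 1.0, 0.001   # 1 W/m^2 (insulated MLI), 1 m spacing, 1 mm shield
for name, k in kappa_80K.items():
    print(f"{name:18s} dT_peak = {dT_peak(P, d, t, k):5.2f} K")
# Copper lands near 0.2 K, 6061 Al near 1.3 K and stainless in the
# 13-16 K range, in line with the article's examples.

# Average shield temperature (mean of the parabolic profile) and the
# resulting flux to a 4 K surface, for the 6061 Al example that follows:
T_avg = 80.0 + (2.0 / 3.0) * 24.0   # 24 K peak rise, as read from Figure 2
G = 9.6e-4                          # W/(m^2 K), effective MLI conductance
q = G * (T_avg - 4.0)
print(T_avg, q)                     # ~96 K and ~0.09 W/m^2
```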
Designers often specify a maximum peak temperature that is to be achieved over a heat shield. If an inner 4 K surface is being shielded, then it is the average temperature of the 80 K shield that is of interest. For this simple geometry the temperature varies quadratically with distance to the coil. The average temperature is:

T_avg = T_coil + (2/3)ΔT_peak

One might wonder if the average value of T⁴ rather than the average value of T would be the quantity of interest. Thick MLI blankets are normally characterized by an effective conductivity because conduction through the blanket, rather than radiation, becomes the major source of heat flux to the cold surface. In this case it is the average temperature that will be required.

For example, assume that we have a 6061 Al heat shield 0.5 mm thick with 80 K cooling coils spaced every 3 m. With d²/t = 18000 m we would predict ΔT = 24 K from Figure 2. The average temperature of the shield is 80 K + (2/3)(24 K) = 96 K. The heat flux from a 77 K shield to a 4 K shield is typically assumed to be 0.07 W/m², corresponding to a thermal conductance of 9.6 × 10⁻⁴ W/m²·K. In this case we would predict a heat flux of:

q = 9.6 × 10⁻⁴ W/m²·K × (96 K − 4 K) ≈ 0.09 W/m²

In this case, what might appear to be a significant temperature excursion on the 80 K heat shield results in an increase in heat flux to the 4 K surface that is probably within the uncertainty in our assumed thermal conductance.

The curves above can also be applied to the case of a cylindrical heat shield with one end fixed at T ~ 80 K by substituting the length of the cylinder for d/2 in Figure 1. The base of the cylinder is being neglected; however, this could be compensated for with reasonable accuracy by adding one half the radius to the length. For example, suppose we have a heat shield which is 1 m long, 1 m in diameter and 2 mm thick made of 1100 Al. This yields (2L)²/t = 2000 m and ΔT ~ 0.9 K if the cylinder is insulated.
If we wish to take the base of the cylinder into account, then the length would be taken as 1.25 m, and ΔT is then the temperature drop from the center of the base. This approach will tend to overestimate ΔT.

II. Heat Shields at 4K

Figure 4. Heat Shield Characteristics at 4K

The same considerations apply to heat shields at 4K. However, the thermal conductivities and heat load are quite different. Figure 4 illustrates the same case as Figure 2 but at 4K. The thermal conductivity of copper used here corresponds to C101 with an RRR of 150. This corresponds to a relatively soft form of C101 with little cold working. At these temperatures the thermal conductivity of copper varies considerably depending on the purity and temper.

Real heat shields rarely have perfect cylindrical symmetry and it is not always possible to place cooling coils at perfectly regular intervals. Simplifying a real heat shield to a heat conduction problem in one dimension is an approximation. At the same time, our knowledge of the thermal conductivities is often only approximate. Their values are generally affected by cold working, which is difficult to quantify. Our knowledge of the effective thermal conductance of MLI is also imperfect. The approximations made in simplifying the heat shield geometry are not unreasonable in view of other approximations made in heat shield design. Our approach would generally be to make the design contingency sufficiently robust to insure that the heat shield meets specification in spite of the approximations made.

III. Conclusions

1. Stainless steel has been included in the design charts, although it is not frequently used as a heat shield. It is necessary to use a thicker shell and/or more closely spaced cooling coils to match the thermal performance of an aluminum or copper heat shield using stainless steel. However, the ease of fabrication may favor stainless steel in some applications.

2.
At 80K the thermal characteristics of copper and aluminum do not vary greatly. Similar thermal performance can be obtained by varying the thickness of the shield. The differences are greater at 4K.

3. The thickness of the shield and the spacing of cooling coils can be varied to obtain similar thermal performance from different materials. Considerations such as weight, cost and ease of fabrication may then favor one material over another.

4. The peak temperature between the cooling coils scales as d²/t. Doubling the coil spacing increases the peak ΔT by a factor of four, while cutting the thickness in half increases the peak ΔT by a factor of two.
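The d²/t scaling in the conclusions can be confirmed directly from the one-dimensional conduction estimate ΔT_peak = Pd²/(8κt) (the conductivity below is an arbitrary illustrative value; the ratios are independent of it):

```python
# Scaling check: doubling the coil spacing d quadruples the peak rise,
# while halving the thickness t doubles it.
def dT_peak(P, d, t, kappa):
    return P * d**2 / (8.0 * kappa * t)

base = dT_peak(1.0, 1.0, 0.001, 100.0)
ratio_double_d = dT_peak(1.0, 2.0, 0.001, 100.0) / base
ratio_half_t = dT_peak(1.0, 1.0, 0.0005, 100.0) / base
print(ratio_double_d, ratio_half_t)   # 4.0 2.0
```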
What ACT Math Formulas Should You Know for the Test?

ACT Math covers a broad range of topics. Some questions test concepts you may have learned in elementary school, such as simple percent questions and adding or subtracting fractions. Other ACT questions test concepts you may not learn in high school at all, such as matrices and vectors.

There is no overall formula sheet provided for the ACT. Some individual questions may provide a formula, but the majority won't. The good news is that if you're thinking about taking the ACT, you probably already have many of these formulas memorized, or can recall them with a quick refresher or two. With the right prep, you might also have other tools and strategies (such as breaking shapes into smaller shapes) that allow you to solve the problem without using a formula. You'll still need to memorize some formulas, but not as many as you might fear.

How to Study ACT Math Formulas

Prioritize your studies by memorizing formulas in this order:
• Must Know
• Nice to Know (But Can be Solved with Strategy)
• Could Know (But Tested Only Rarely)

As you go through the Must Know formulas below, mark each one as something you:
• Definitely Understand
• Sort of Understand
• Don't Understand

For the "Definitely Understand" formulas, you're done; there's no need to spend any time memorizing formulas you already know. Instead, focus your efforts where you'll gain the most. For the "Sort of Understand" and "Don't Understand" categories, use your favorite memorization strategy: flashcards, apps, writing in a notebook, etc. If you have more than 10 or so formulas to memorize, focus on about 10 at a time. After mastering the "Must Know" formulas, repeat the same approach with the "Nice to Know" and "Could Know" formulas.

Try to solve some ACT practice questions (or even better, ACT practice tests) and review your results. This can make it clearer which formula-based questions you'd be able to solve if only you remembered the formula.
Memorization: Consistency is Key

Regardless of your approach, spend some time each day studying your ACT formulas. You'll see more improvement spending 5-10 minutes a day rather than spending 2 hours once a week cramming.

Geometry Formulas

Most of the formulas you need to memorize for ACT Math are geometry formulas. If you find that you're missing numerous geometry questions on your homework and practice tests, consider whether your lack of knowledge of the formulas is affecting your score.

Must Know Geometry Formulas

• Area of a Rectangle: A = lw
• Area of a Triangle: A = (1/2)bh
• Area of a Circle: A = πr²
• Circumference of a Circle: C = πd or C = 2πr
• Diameter and Radius of a circle: d = 2r
• Volume of a Rectangular Prism: V = Bh, where B is the area of the base
• Degrees in a:
  □ Right Angle: 90°
  □ Straight Line: 180°
  □ Triangle: 180°
  □ Circle: 360°
• Parallel Lines and Angles – When a line intersects two parallel lines:
  □ Two kinds of angles are formed: big angles and small angles.
  □ Each big angle is equal to the other big angles.
  □ Each small angle is equal to the other small angles.
  □ Any big angle plus any small angle is 180°.
• Pythagorean Theorem: a² + b² = c², where c is the hypotenuse of a right triangle.
• SOHCAHTOA: sin = opposite/hypotenuse, cos = adjacent/hypotenuse, tan = opposite/adjacent

Nice to Know (But Can be Solved with Strategy) Geometry Formulas

• Area of a Square: A = s² (based on Area of a Rectangle formula)
• Area of a Parallelogram: A = bh (break into 2 triangles or a rectangle and 2 triangles)
• Area of a Trapezoid: A = (1/2)(b₁ + b₂)h
• Volume of a Cube: V = s³ (based on volume of a rectangular prism)
• Volume of a Rectangular Solid: V = lwh (based on volume of a rectangular prism)
• Volume of a Cylinder: V = πr²h (based on volume of a rectangular prism)
• Special Right Triangles: 45°-45°-90° sides are in the ratio x : x : x√2; 30°-60°-90° sides are in the ratio x : x√3 : 2x (or use the Pythagorean theorem)
• Sum of angles in an n-sided polygon: (n – 2)180° (break polygon into triangles)
• Angle measure of each angle in a regular n-sided polygon: (n – 2)180°/n (break polygon into triangles)
• Surface area of a rectangular solid: S = 2(lw + lh + wh) (add areas of all faces)
• Surface area of a cube: S = 6s² (add areas of all faces)
• Surface area of a right circular cylinder: S = 2πr² + 2πrh (add areas of all faces)

Nice to Know (But Rarely Tested) Geometry Formulas

• Reciprocal Trigonometric Functions: csc = 1/sin, sec = 1/cos, cot = 1/tan
• Law of sines: a/sin A = b/sin B = c/sin C
• Law of cosines: c² = a² + b² – 2ab·cos C (sometimes provided in a question)
• Surface area of a sphere: S = 4πr² (sometimes provided in a question)
• Volume of a sphere: V = (4/3)πr³

Coordinate Geometry Formulas

Some ACT Coordinate Geometry questions are really just geometry questions in disguise, but other questions will require formulas specific to this area of study.
Must Know Coordinate Geometry Formulas
• Slope: m = (y2 – y1)/(x2 – x1)
• Slope-intercept form of a line: y = mx + b, where m is the slope and b is the y-intercept

Nice to Know (But Can be Solved with Strategy) Coordinate Geometry Formulas
• Distance: d = √((x2 – x1)^2 + (y2 – y1)^2)
• Midpoint: ((x1 + x2)/2, (y1 + y2)/2) (the average of the x-coordinates is the midpoint's x; same with the y-coordinates)
• Standard form: Ax + By = C (can always rearrange into slope-intercept form)

Nice to Know (But Rarely Tested) Coordinate Geometry Formulas
• Circle centered at (0,0): x^2 + y^2 = r^2, where r is the radius
• Circle centered at (h, k): (x – h)^2 + (y – k)^2 = r^2, where r is the radius (sometimes given in a question)

Algebra Formulas

There are a couple of formulas that are useful when specifically working with quadratics in the form ax^2 + bx + c = 0:
• Quadratic formula: x = (–b ± √(b^2 – 4ac))/(2a)
• Discriminant: D = b^2 – 4ac (the expression under the radical in the quadratic formula)
  □ If D > 0, there will be two distinct, real solutions.
  □ If D = 0, there will be one distinct real solution.
  □ If D < 0, there will be no real solutions. Instead, there will be two complex solutions.
• The sum of the roots: –b/a
• The product of the roots: c/a
• The midpoint of the roots/the x-coordinate of the vertex: –b/(2a)

Problem Solving Formulas

Most questions testing the following formulas can be solved with careful reading and strategy. However, if you're the sort that prefers to memorize, these can be helpful to know.
• Arithmetic sequence: nth term = Original Term + (n – 1)d, where d is the constant difference between terms
• Direct Variation: y = kx, where k is a constant
• Inverse Variation: xy = k, so x1·y1 = x2·y2, where k is a constant
• Geometric sequence: nth term = Original Term × r^(n – 1), where r is the constant ratio between terms
• Group Formula: Total = Group 1 + Group 2 – Both + Neither

Statistics Formulas

Most ACT Math statistics questions will test basic formulas that you learned in middle school.
However, there may be a few questions testing some more advanced statistics concepts.
• Average (Arithmetic Mean) = Total / Number of Things, so Total = Average × Number of Things
• Probability: Number of desired outcomes / Total number of possible outcomes
• Permutations (the number of ways to arrange or order a group of things): nPr = n!/(n – r)!, where n is the number of elements available and r is the number of elements chosen
• Combinations (the number of ways to make different groups out of a group of things): nCr = n!/(r!(n – r)!), where n is the number of elements available and r is the number of elements chosen
• Expected value: Multiply the probability of each occurrence by the value of that occurrence, then add the products.

Knowing the ACT math formulas can be very helpful when taking your exam. However, the bigger challenge is learning effective strategies to answer questions correctly and quickly. The Princeton Review offers books, courses, and tutoring to help you learn the approaches needed to beat ACT Math and earn the score of your dreams.
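For readers who like to verify while memorizing, the average and counting formulas above can be checked with a short Python script (the numbers below are made-up practice values, not from a real test):

```python
import math

# Average: Total = Average x Number of Things
scores = [70, 80, 90]
average = sum(scores) / len(scores)
assert average * len(scores) == sum(scores)

# Permutations: nPr = n! / (n - r)!
def nPr(n, r):
    return math.factorial(n) // math.factorial(n - r)

# Combinations: nCr = n! / (r! * (n - r)!)
def nCr(n, r):
    return math.factorial(n) // (math.factorial(r) * math.factorial(n - r))

print(nPr(5, 2))  # 20 ways to order 2 of 5 things
print(nCr(5, 2))  # 10 ways to choose 2 of 5 things
```

Note that `nCr` always equals `nPr` divided by `r!`, which is why there are fewer combinations than permutations.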
How do you find vertical, horizontal and oblique asymptotes for (x^2 - 5x + 6)/(x - 4)? | HIX Tutor

How do you find vertical, horizontal and oblique asymptotes for (x^2 - 5x + 6)/(x - 4)?

Answer 1

The vertical asymptote is x = 4.
The oblique asymptote is y = x - 1.
There is no horizontal asymptote.

As you cannot divide by 0, x ≠ 4, so the vertical asymptote is x = 4.

The degree of the numerator is greater than the degree of the denominator, so there is an oblique asymptote. Let f(x) = (x^2 - 5x + 6)/(x - 4). Polynomial division gives f(x) = (x - 1) + 2/(x - 4), so the oblique asymptote is y = x - 1.

Answer 2

To find the vertical asymptote(s) of a rational function, set the denominator equal to zero and solve for x. In this case, the vertical asymptote occurs when x - 4 = 0, yielding x = 4.

To find the horizontal asymptote, compare the degrees of the numerator and denominator. If the degree of the numerator is less than the degree of the denominator, the horizontal asymptote is y = 0. If the degrees are equal, the horizontal asymptote is the ratio of the leading coefficients. Here, however, the degree of the numerator (2) is greater than the degree of the denominator (1), so there is no horizontal asymptote.

To find oblique asymptotes, divide the numerator by the denominator using long division or synthetic division. The quotient represents the equation of the oblique asymptote; here the quotient is x - 1 (with remainder 2), so the oblique asymptote is y = x - 1.

Answer from HIX Tutor

When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero, since its sign is different depending on which way it is approaching zero from.
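As a numeric sanity check (not part of either answer above), a few lines of Python confirm the oblique and vertical asymptotes:

```python
def f(x):
    return (x**2 - 5*x + 6) / (x - 4)

# Long division gives f(x) = (x - 1) + 2/(x - 4),
# so f(x) - (x - 1) = 2/(x - 4) -> 0 as |x| -> infinity.
for x in [100.0, 1000.0, 10000.0]:
    print(x, f(x) - (x - 1))  # the gap shrinks toward 0

# Near the vertical asymptote x = 4 the function is unbounded:
print(f(4.0001))  # large positive
print(f(3.9999))  # large negative
```

The shrinking gap `f(x) - (x - 1)` is exactly the remainder term 2/(x - 4), which is why the line y = x - 1 is the oblique asymptote.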
Elementary Multiplication Worksheets Free

Mathematics, particularly multiplication, forms the cornerstone of many academic disciplines and real-world applications. Yet for numerous students, mastering multiplication can present an obstacle. To address this hurdle, teachers and parents have embraced a powerful tool: free elementary multiplication worksheets.

Introduction to Free Elementary Multiplication Worksheets

Find free printable multiplication worksheets for preschool to 6th grade, with timed tests, spiral layouts, bullseye targets and more. Learn multiplication facts with Dad's Eight Simple Rules, visual aids and fun puzzles. Find hundreds of printable worksheets for teaching and practicing basic multiplication facts from 0 to 12 with arrays, repeated addition, fact families and more; multi-digit, decimal, money, fraction and lattice multiplication are also included.

The Importance of Multiplication Practice

Understanding multiplication is pivotal, laying a solid foundation for more advanced mathematical concepts. Free elementary multiplication worksheets offer structured and targeted practice, fostering a deeper understanding of this fundamental math operation.
Development of Free Elementary Multiplication Worksheets

Multiplication worksheets are available as printable PDFs for children in pre-K, kindergarten, and grades 1 through 7. These worksheets cover most multiplication subtopics and were also conceived in line with Common Core State Standards. There are also tons of fun and engaging multiplication games, printables and worksheets for grades 2-6, practicing multiplication facts with hopscotch, bingo, puzzles, flashcards and more.

From conventional pen-and-paper exercises to digital interactive formats, multiplication worksheets have evolved to accommodate diverse learning styles and preferences.

Types of Elementary Multiplication Worksheets

Basic Multiplication Sheets: basic exercises focusing on multiplication tables, helping students develop a solid arithmetic base.

Word Problem Worksheets: real-life situations integrated into problems, boosting critical thinking and application skills.

Timed Multiplication Drills: tests designed to boost speed and accuracy, aiding quick mental math.
Advantages of Using Free Elementary Multiplication Worksheets

Free printable multiplication worksheets provide numerous exercises to sharpen your child's multiplication skills: practice row and column multiplication of simple and large-digit numbers, with multiplication tables and charts included. Free printable worksheets help children learn and practice multiplication skills from 2nd to 5th grade; choose from times tables, multiplying by 10s and 100s, decimals, word problems and more.

Boosted Mathematical Skills: consistent practice sharpens multiplication proficiency, enhancing overall math skills.

Boosted Problem-Solving Abilities: word problems in worksheets develop logical thinking and strategy application.

Self-Paced Learning Benefits: worksheets accommodate individual learning speeds, cultivating a comfortable and flexible learning environment.

How to Develop Engaging Multiplication Worksheets

Incorporating Visuals and Colors: vibrant visuals and colors capture attention, making worksheets visually appealing and engaging.

Including Real-Life Situations: connecting multiplication to daily situations adds relevance and practicality to exercises.

Tailoring Worksheets to Different Skill Levels: customizing worksheets based on varying proficiency levels ensures inclusive learning.

Interactive and Online Multiplication Resources

Digital Multiplication Tools and Games: technology-based resources offer interactive learning experiences, making multiplication engaging and enjoyable.

Interactive Websites and Apps: online platforms supply varied and accessible multiplication practice, supplementing standard worksheets.
Customizing Worksheets for Different Learning Styles

Visual Learners: visual aids and diagrams aid comprehension for learners inclined toward visual learning.

Auditory Learners: verbal multiplication problems or mnemonics cater to learners who grasp concepts through auditory methods.

Kinesthetic Learners: hands-on activities and manipulatives support kinesthetic learners in understanding multiplication.

Tips for Effective Implementation in Learning

Consistency in Practice: regular practice strengthens multiplication skills, promoting retention and fluency.

Balancing Repetition and Variety: a mix of repeated exercises and diverse problem formats keeps interest and understanding high.

Providing Useful Feedback: feedback helps in recognizing areas for improvement, encouraging ongoing progress.

Challenges in Multiplication Practice and Solutions

Motivation and Engagement Difficulties: dull drills can result in disinterest; innovative approaches can reignite motivation.

Overcoming Fear of Math: negative perceptions around math can impede progress; creating a positive learning atmosphere is essential.

Impact of Multiplication Worksheets on Academic Performance

Studies and Research Findings: research shows a positive correlation between consistent worksheet use and improved math performance.

Free elementary multiplication worksheets are versatile tools, fostering mathematical proficiency in students while accommodating diverse learning styles. From fundamental drills to interactive online resources, these worksheets not only improve multiplication skills but also promote critical reasoning and problem-solving abilities.
Find hundreds of free multiplication worksheets for elementary school students of all ages and levels. Practice multiplying by twos, threes, fours, fives and more with interactive, printable and timed worksheets.

Frequently Asked Questions (FAQs)

Are free elementary multiplication worksheets suitable for all age groups?
Yes, worksheets can be tailored to different ages and skill levels, making them adaptable for various learners.

How often should students practice with multiplication worksheets?
Consistent practice is key. Regular sessions, ideally a few times a week, can yield significant improvement.

Can worksheets alone improve math skills?
Worksheets are a valuable tool but should be supplemented with varied learning methods for thorough skill development.

Are there online platforms offering free multiplication worksheets?
Yes, many educational websites offer free access to a wide range of multiplication worksheets.

How can parents support their children's multiplication practice at home?
Encouraging consistent practice, providing help, and creating a positive learning environment are beneficial steps.
The volume of a sphere is 1.76 cm³. What is the volume of 25 such spheres? (Class 11 Physics, JEE Main)

Hint: In this solution, we will be focusing on the concept of significant figures. The answer should carry as many significant figures as the least precise measured quantity used in the calculation; here, 25 is an exact count of spheres, so the three significant figures of 1.76 control the result.

Formula used: Volume of a sphere: $V = \dfrac{4}{3}\pi {R^3}$, where $R$ is the radius of the sphere (not actually needed here, since the volume is given directly).

Complete step by step answer:
We've been given that the volume of one sphere is 1.76 cm³. The volume of 25 such spheres is the product of the number of spheres and the volume of one sphere, so we can write the new volume as
${V_{new}} = 25 \times 1.76 = 44\,c{m^3}$
The measured quantity 1.76 has three significant figures: 1, 7, and 6. Since 25 is an exact count, our answer must also have three significant figures. Hence the combined volume will be represented as
${V_{new}} = \,44.0\,c{m^3}$
Hence the volume of 25 spheres will be ${V_{new}} = \,44.0\,c{m^3}$, so option (B) is the correct choice.

Note: The fact that all the options have the same magnitude can provide a hint that the question wants us to focus on the concept of significant figures. Trailing zeros after the decimal point are significant, which is why 44.0 has three significant figures while 44 has only two.
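As a quick illustration (not part of the original solution), Python's format specifiers reproduce both the arithmetic and the three-significant-figure presentation:

```python
volume_one = 1.76          # cm^3, three significant figures
count = 25                 # exact count of spheres
total = count * volume_one

# 1.76 has three significant figures and 25 is exact,
# so report the result to three significant figures:
print(f"{total:.1f} cm^3")  # 44.0 cm^3
```

Here `.1f` (one digit after the decimal point) is chosen by hand to show three significant figures; the `.3g` specifier would strip the trailing zero and print `44`, hiding the precision.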
How do you evaluate the determinant of the given matrix by reducing the matrix to row echelon form?

Hint: Here, we have to find the determinant of the given matrix. We will convert the given matrix into row echelon form by using elementary row operations, and then use that form to find the determinant. The determinant is a scalar value associated with every square matrix; for a triangular matrix it is the product of the diagonal entries, and each row interchange multiplies the determinant by \(-1\).

Complete Step by Step Solution:
We are given the matrix \[\left( {\begin{array}{*{20}{c}}0&3&1\\1&1&2\\3&2&4\end{array}} \right)\].

Now, we will reduce the given matrix to row echelon form by using elementary row operations. First, we will interchange the first row and the second row, so we get
\[\left( {\begin{array}{*{20}{c}}1&1&2\\0&3&1\\3&2&4\end{array}} \right)\]
Now, we will transform the first element of the third row to \[0\] by using the operation \[{R_3} \to {R_3} - 3{R_1}\]. So, we get
\[\left( {\begin{array}{*{20}{c}}1&1&2\\0&3&1\\0&{ - 1}&{ - 2}\end{array}} \right)\]
Now, we will transform the second element of the third row to \[0\] by using the operation \[{R_3} \to {R_3} + \dfrac{{{R_2}}}{3}\]. So, we get
\[\left( {\begin{array}{*{20}{c}}1&1&2\\0&3&1\\0&0&{ - \dfrac{5}{3}}\end{array}} \right)\]
This matrix is in row echelon (upper triangular) form, so its determinant is the product of the diagonal entries:
\[\left| {\begin{array}{*{20}{c}}1&1&2\\0&3&1\\0&0&{ - \dfrac{5}{3}}\end{array}} \right| = 1 \times 3 \times \left( { - \dfrac{5}{3}} \right) = - 5\]
Since a row has been interchanged, the final determinant has to be multiplied by \(\left( { - 1} \right)\). Therefore, we get
\[\det = \left( { - 1} \right) \times \left( { - 5} \right) = 5\]
Therefore the value of the determinant of the given matrix is \[5\].

Note: A matrix is in row echelon form when the first non-zero entry of each row lies to the right of the first non-zero entry of the row above, and the elements below the main diagonal are zero; such a matrix is also an upper triangular matrix. Row operations of the form \[{R_i} \to {R_i} + k{R_j}\] do not change the determinant, but whenever a row or a column is interchanged the determinant has to be multiplied by a negative sign.
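As a cross-check (not part of the original answer), the same row-echelon procedure can be implemented with exact rational arithmetic in Python:

```python
from fractions import Fraction

def det_by_row_echelon(rows):
    """Determinant via reduction to row echelon form with exact fractions."""
    a = [[Fraction(x) for x in row] for row in rows]
    n = len(a)
    sign = 1
    for col in range(n):
        # find a pivot row and swap it into place (each swap flips the sign)
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign
        # eliminate entries below the pivot (does not change the determinant)
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    # determinant = sign * product of the diagonal of the echelon form
    d = Fraction(sign)
    for i in range(n):
        d *= a[i][i]
    return d

print(det_by_row_echelon([[0, 3, 1], [1, 1, 2], [3, 2, 4]]))  # 5
```

Using `Fraction` instead of floats keeps the intermediate entry −5/3 exact, so the final answer is the integer 5 rather than an approximation.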
Fourier Domain Regularization 5D and More Global multi dimensional regularization has become a widely used tool in seismic data processing. Many advantages of regularization in the Fourier domain come with some serious problems. In this paper we consider the intrinsic properties of the Fourier transform to identify problems and limitations of the method. A practical and efficient iterative multi dimensional regularization technique is proposed to overcome the strategic pitfalls of the Fourier transform. The results of application on different 2D and 3D data sets are discussed. Finally, we show how the same multi dimensional Fourier regularization technique can be used as a random and coherent noise suppressor. There are plenty of reasons to perform regularization, such as to increase spatial sampling, create a regular grid, reduce noise, and improve prestack imaging and AVO analysis. Converting seismic data into the frequency wave number or multi dimensional wave number domain is a natural tool for data regularization. However, its efficiency comes with some serious problems usually characterized as "spectral leakage". A clear understanding of what causes such phenomena will help us to develop an improved tool to perform interpolation, multi dimensional regularization, noise suppression; as well as any other operation in the frequency or frequency wave number domain in a meaningful way. To perform regularization of prestack seismic data, we create a new desired regular grid filled with the available input data, positioned at the nearest grid node, and with zero data at all other locations. Gridded data are first converted into the frequency domain and then into the multi dimensional wave number domain. All zero traces can be considered to be a result of multiplication of the full wavefield by the sampling operator. Therefore, this final spectrum is the result of the convolution of the full data spectrum with the Fourier transform of the sampling operator. 
Figure 1 shows spectrum distortion of a signal consisting of two sine waves caused by both a gap and upsampling. Two main problems arise: spectrum “repetition”, caused by the upsampling operator, and amplitude distortion, caused by the gap. We have to keep in mind that there is always some amplitude distortion caused by zero padding in time and space. This spectrum distortion is also known as “spectral leakage”, which means that each original spectrum component affects the others, and components with stronger amplitudes have more impact, especially on the nearest components. These problems should be dealt with during (and simultaneously with) the regularization process.

To overcome these problems, we adopt the following procedure. At each iteration, only those spectrum components bigger than a specified threshold and within the current wave number limits are selected and accumulated in the output spectrum. After the inverse wave number Fourier transform, the resulting grid of traces is first reduced to only the original trace positions. Those “original position” traces are then subtracted from the input. Thus, by subtracting the strongest components, we reduce the strongest distortion of the weaker components. The result of the subtraction is then forward transformed, ready for the next iteration. The threshold is reduced at each step of the procedure. To prevent data aliasing, especially if substantial upsampling is the objective, the search area of the spectrum is truncated at the current values of the maximum wave numbers in all directions. Thus, the iterations first concentrate on high amplitude data near small wave numbers, while the maximum wave numbers processed at each iteration are being changed. Figure 2 shows the output spectrum changes during the iterations. The process is completed when the threshold is equal to zero.
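To make the iteration concrete, here is a deliberately simplified 1D NumPy sketch of the same idea — decreasing-threshold selection of Fourier components, with the original samples re-imposed each pass. It is a toy stand-in (closer to a textbook POCS loop than to the production subtract-and-accumulate bookkeeping described above), and the sine-wave test data are made up:

```python
import numpy as np

def fourier_regularize(data, known, n_iter=100):
    """Toy 1D iterative Fourier-threshold regularization.

    data  : signal on a regular grid, zeros at missing positions
    known : boolean mask of positions that hold real input samples
    """
    x = data.copy()
    for it in range(n_iter):
        spec = np.fft.fft(x)
        # keep only components above a threshold that decreases
        # from the current spectrum maximum down to zero
        thresh = np.abs(spec).max() * (1.0 - (it + 1) / n_iter)
        spec[np.abs(spec) < thresh] = 0.0
        x = np.fft.ifft(spec).real
        # re-impose the original samples at their positions
        # (a simple stand-in for the subtract-and-iterate bookkeeping)
        x[known] = data[known]
    return x

# demo: recover a sine from ~70% randomly kept samples
rng = np.random.default_rng(0)
n = 128
t = np.arange(n)
truth = np.sin(2 * np.pi * 4 * t / n)
known = rng.random(n) < 0.7
gapped = np.where(known, truth, 0.0)
filled = fourier_regularize(gapped, known)
print(np.max(np.abs(filled - truth)))  # small interpolation error
```

Because the true sine components dominate the leakage caused by the gaps, the early high-threshold passes pick them out first, and the missing samples are filled in over the remaining iterations.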
Figure 1. Signal and its spectrum; sampling operator and its spectrum; signal and its spectrum after operator application.

Examples: 2D Regularization

The result of the application of the described method on 2D data is shown in Figure 3. On the left is the input data with three zero traces inserted after each original; in the central panel, we see the result after interpolation; and finally, on the right we show the difference between input and output traces at the original positions. Note that the method was able to interpolate dipping events (ground roll). Figure 4 shows that the amplitude behaviour is preserved after two times upsampling, which is important if AVO analysis is to be performed after regularization.

Figure 2. FK spectrum changes during iterations.
Figure 3. 2D Regularization (group interval reduced four times).
Figure 4. 2D Regularization (group interval reduced two times).

Examples: 5D Regularization

Global interpolation of the 3D data is commonly referred to as 5D, because each trace in the 3D survey is defined by time and four spatial coordinates: shot X-Y coordinates and receiver X-Y coordinates or any derivatives of them such as CMP coordinates, offset and azimuth or shot coordinates, inline offset, and crossline offset, and so on. An example of 3D survey geometry before and after 5D Regularization, based on the shot and receiver coordinates, is shown in Figure 5.

Figure 5. 3D geometry before and after 5D Regularization.

5D application on 3D data requires creating a 4-dimensional spatial grid, based on four independent coordinates and a bin size for each coordinate. Choice of these coordinates and output geometry should be based on the input geometry and regularization objectives, since the global nature of 5D Regularization always leads to creating a large amount of redundant data.
After a four-dimensional grid is specified, the described method is extended to the 5D case by simply applying forward and inverse multidimensional wave number Fourier transforms at each iteration. The results of 5D Regularization are shown on two 3D real datasets. Figures 6 and 7 show 5D Regularization of a very sparse survey.

Figure 6. Geometry and prestack data before and after 5D Regularization.
Figure 7. Time slice before and after 5D Regularization.

Figure 8 shows the result of 5D Regularization of a survey with diffractions and dips. The prestack time migration results before regularization are on the left and the PSTM results after three times upsampling of the prestack data are on the right. Clearly, the diffracted energy was correctly interpolated; dips and structures have been preserved and, in general, the data have improved.

Figure 8. PSTM before and after 5D Regularization.

3D noise attenuation

While random noise can be effectively attenuated during the regularization process (Figures 9 and 10), we developed a separate tool to address “ground roll” type 3D linear noise based on the same technique. In the frequency-wave number domain this linear noise has the form of a pie in the 2D shot domain, and in the 3D shot domain (or the cross-spread domain) the noise takes the form of a cone. Even when the noise looks separated, spectral leakage due to uneven spacing, missing traces and zero-padding always causes noise and signal mixing in the FK domain. An iterative process is employed to isolate the signal from the linear noise and prevent “leaking”.

Figure 9. 3D Random Noise Attenuation.
Figure 10. 2D Random Noise Attenuation.

A new regular rectangular grid is created, filled with zero data and available input data. This is then converted to the frequency and two-dimensional wave number domain. At each iteration, only those spectrum components bigger than a specified threshold are selected.
To isolate the signal, while leaving the noise untouched, the threshold search area of the spectrum is limited to a cone above the maximum linear noise velocity parameter and truncated at the current values of the maximum wave numbers. Figure 11 shows the area used during this signal search.

Figure 11. Linear Noise Extraction Operator.

After each iteration, the threshold is reduced from the defined value to eventually reach zero at the last iteration. The maximum wave numbers, in both the X and Y directions, are also changed in a similar fashion, according to the defined start and end values. Thus, high amplitude, high velocity signal will first be selected during the iterative process. After the iteration process is completed, the residual spectrum will contain lower frequency linear noise and also some random noise corresponding to high wave numbers. The final residual spectrum therefore contains the linear shot noise we are trying to remove. Hence we simply subtract this from the original data or, optionally, we can use adaptive subtraction. An important part of preserving the signal is that only noise was regularized during this iterative process. Figure 12 shows the 3D linear noise subtraction results with the difference plot on the right, proving that no high velocity signal has been removed.

Figure 12. 3D Linear Noise Attenuation: Input, output, and difference.

Reliability of regularization and future development

Although the results can look impressive, there is always a question — “Can we trust it?” or “How to pick parameters?” — to make regularization reliable and suitable for our purposes. It is becoming common practice, subsequent to the regularization procedure, to use the difference between input traces and new traces at the same position to estimate quality. This difference cannot, however, be used to predict quality “a priori”.
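The cone-shaped signal search area described above can be illustrated with a small NumPy sketch. The grid spacings and noise velocity below are made-up illustrative values, not from the paper; the real tool works in 3D and iterates as described:

```python
import numpy as np

def velocity_cone_mask(nt, nx, dt, dx, v_max_noise):
    """Boolean f-k mask: True inside the signal cone where |f/k| >= v_max_noise."""
    f = np.fft.fftfreq(nt, d=dt)   # temporal frequencies (Hz)
    k = np.fft.fftfreq(nx, d=dx)   # spatial wavenumbers (1/m)
    F, K = np.meshgrid(f, k, indexing="ij")
    # the apparent velocity of a plane event is f/k; keep everything
    # faster than the fastest linear noise (k == 0 is always signal)
    return np.abs(F) >= v_max_noise * np.abs(K)

mask = velocity_cone_mask(nt=512, nx=64, dt=0.002, dx=10.0, v_max_noise=1500.0)
print(mask.shape)        # (512, 64)
print(mask[:, 0].all())  # the k = 0 column is entirely inside the cone
```

Thresholding would then only be applied inside `mask`, so high-velocity signal is extracted while the slow linear-noise region of the f-k plane is left for the residual.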
Our current active research is focussed on the creation of a tool to estimate regularization parameters, based on the input data velocities, frequency content and input and output geometry. Figure 13. Input data in FK domain limited to minimum velocity. The red cone in Figure 13 represents input data in the frequency- Kx-Ky domain, limited to the defined minimum velocity we wish to interpolate. Therefore, for any frequency (e.g. dominant or maximum) it will be a circle with the centre at K=0 and known radius Kmax, defined by the formula: Kmax = frequency / V min In the 5D case this area will have the form of a 4-dimensional sphere. We assign values 1 to the “red” signal area and zero outside the circle. Knowing input and output geometry, we create a 4- dimensional sampling operator, consisting of “ones” at the original trace positions and “zeros” everywhere else. Now this sampling operator can be convolved with the circle (or rather 4D sphere) to estimate spectrum interference for the defined frequency and minimum velocity. In adjusting these parameters one can best decide on how to interpolate each particular dataset. The same approach can be used during 3D seismic survey planning with subsequent regularization in mind. In spite of the common misconception, that regularization in the frequency-wave number domain is unsuitable for upsampling, we have shown that, with proper handling of the frequencywave number spectrum, Fourier domain regularization is a powerful tool to interpolate sparsely populated surveys, to upsample seismic data with diffractions and dips as well as suppress random noise. Finally, based on multi-dimensional Fourier domain regularization, an effective 3D linear noise attenuation technique has been developed. I would like to thank Mike Galbraith, GEDCO for initiating and supporting this work. Special thanks to Alan Richards, Edge Technologies Inc., Calgary for his invaluable input. About the Author(s) Valentina Khatchatrian graduated from St. 
Petersburg State University, Russia with an M.Sc. in Geophysics. She worked for the Research Institute of Marine Geophysics, Murmansk, Russia as a research scientist, where she was involved in software development for 3D-3C VSP data processing, modeling, inversion and depth imaging. After moving to Calgary, she worked for several geophysical companies and joined GEDCO in 2006 as a Geophysical Software Developer. Her research includes 5D regularization; 3D-3C VSP; noise and surface multiples attenuation; refraction statics. She is a member of CSEG, SEG and EAGE. Suggested Reading Abma, R. and Kabir, N., 2006, 3D interpolation of irregular data with a POCS algorithm; Geophysics, 71, no. 6, E91-E97. Naghizadeh, M. and Sacchi, M., 2010, On sampling functions and Fourier reconstruction methods; Geophysics, 75, no. 6, WB137-WB151. Schonewille, M., Klaedtke, A., Vigner, A., Brittan, J., and Martin, T., 2009, Seismic data regularization with the anti-alias anti-leakage Fourier transform; First Break, 27, no. 9, 85–92. Xu, S., Zhang, Y., Pham, D., and Lambare, G., 2005, Antileakage Fourier transform for seismic data regularization; Geophysics, 70, no. 4, V87–V95.
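As a rough illustration of the planning idea described in the article (convolving the sampling operator with the sub-Vmin signal disk to spot spectral interference), here is a hypothetical 2D sketch. The function name, grid units and parameter values are all assumptions for illustration, not part of the published method:

```python
import math

def interference_map(trace_positions, freq, v_min):
    """Stamp a disk of radius Kmax = freq / v_min at every occupied position
    of the sampling operator and count overlaps; any cell with a count > 1
    flags interfering spectral contributions.  Grid units are illustrative."""
    kmax = freq / v_min
    r = int(kmax)
    # disk indicator: 1 inside the signal circle, 0 outside
    disk = [(i, j) for i in range(-r, r + 1) for j in range(-r, r + 1)
            if math.hypot(i, j) <= kmax]
    counts = {}
    for (x, y) in trace_positions:
        for (i, j) in disk:
            cell = (x + i, y + j)
            counts[cell] = counts.get(cell, 0) + 1
    return counts

# widely spaced traces: disks stay separate; closely spaced: they interfere
sparse = interference_map([(0, 0), (5, 0)], freq=2.0, v_min=1.0)
dense = interference_map([(0, 0), (2, 0)], freq=2.0, v_min=1.0)
```

In the 5D case the disk becomes the 4-dimensional sphere the article describes, but the counting logic is the same.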
{"url":"https://csegrecorder.com/articles/view/fourier-domain-regularization-5d-and-more","timestamp":"2024-11-10T02:42:40Z","content_type":"text/html","content_length":"40565","record_id":"<urn:uuid:2d40bfbe-65bb-42a6-bb74-e85ecd0eec17>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00064.warc.gz"}
Conventions and formulas — Strawberry Fields “The nice thing about standards is that you have so many to choose from.” - Tanenbaum [1] In this section, we provide the definitions of various quantum operations used by Strawberry Fields, as well as introduce the specific conventions chosen. We’ll also provide some more technical details relating to the various operations. In Strawberry Fields we use the convention \(\hbar=2\) by default, but other conventions can also be chosen by setting the global variable sf.hbar at the beginning of a session. In this document we keep \(\hbar\) explicit. More information about the definitions included on this page is available in [2] and [3]. The Kraus representation of the loss channel is found in [4] Eq. 1.4, which is related to the parametrization used here by taking \(1-\gamma = T\). The explicit expression for the harmonic oscillator wave functions can be found in [5] Eq. A.4.3 of Appendix A. We also provide some details of the quantum photonics terms that are commonly used across Strawberry Fields, when programming and using photonic quantum computers. Andrew S. Tanenbaum and David J. Wetherall. Computer Networks, 5th Ed. Prentice Hall, 2011. S.M. Barnett and P.M. Radmore. Methods in Theoretical Quantum Optics. Oxford Series in Optical and Imaging Sciences. Clarendon Press, 2002. ISBN 9780198563617. URL: https://books.google.ca/books? Pieter Kok and Brendon W. Lovett. Introduction to Optical Quantum Information Processing. Cambridge University Press, 2010. ISBN 9781139486439. URL: https://books.google.ca/books?id=G2zKNooOeKcC. Victor V. Albert, Kyungjoo Noh, Kasper Duivenvoorden, R. T. Brierley, Philip Reinhold, Christophe Vuillot, Linshu Li, Chao Shen, S. M. Girvin, Barbara M. Terhal, and Liang Jiang. Performance and structure of bosonic codes. Aug 2017. arXiv:1708.05010. J. J. Sakurai. Modern Quantum Mechanics. Addison-Wesley Publishing Company, 1994.
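The \(\hbar\) convention can be checked numerically. Assuming the common quadrature definitions \(x = \sqrt{\hbar/2}\,(a + a^\dagger)\) and \(p = -i\sqrt{\hbar/2}\,(a - a^\dagger)\), the commutator \([x, p] = i\hbar\) should evaluate to \(2i\) under the default \(\hbar = 2\). A small pure-Python sketch with truncated Fock-space matrices (illustrative only, not Strawberry Fields code):

```python
import math

N = 6  # Fock-space truncation

# annihilation operator: a|n> = sqrt(n)|n-1>
a = [[math.sqrt(n) if m == n - 1 else 0.0 for n in range(N)] for m in range(N)]
adag = [[a[n][m] for n in range(N)] for m in range(N)]  # Hermitian conjugate

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def combine(c1, A, c2, B):
    return [[c1 * A[i][j] + c2 * B[i][j] for j in range(N)] for i in range(N)]

hbar = 2  # Strawberry Fields' default convention
s = math.sqrt(hbar / 2)
x = combine(s, a, s, adag)             # x = sqrt(hbar/2) (a + a^dag)
p = combine(-1j * s, a, 1j * s, adag)  # p = -i sqrt(hbar/2) (a - a^dag)

comm = combine(1, matmul(x, p), -1, matmul(p, x))
# away from the truncation edge, the diagonal of [x, p] equals i*hbar
```

Setting `hbar = 1` instead reproduces the other common convention, in which the diagonal would come out as \(i\).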
{"url":"https://strawberryfields.ai/photonics/conventions/index.html","timestamp":"2024-11-08T21:40:08Z","content_type":"application/xhtml+xml","content_length":"31042","record_id":"<urn:uuid:2d602e08-6656-42b9-a9cf-b3ba54f33b8e>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00155.warc.gz"}
Unfolding the Tesseract Detail from an image by Mark McClure Journal of Recreational Mathematics, Vol. 17(1), 1984-85 In 1966, Martin Gardner asked, “How many different order-8 polycubes can be produced by unfolding a hollow hypercube into 3-space?” [1], stating also that he did not know the answer. There are 261 distinct unfoldings and in this article I will show how I arrived at that number. The method given for enumerating unfolded tesseracts can be extended to any number of dimensions. I'll first demonstrate the method on the cube, and then on the tesseract. The Cube A tree is a connected graph with n nodes and n - 1 edges. Figure 1 shows the six six-node trees [2]. We may arrange the six nodes of a tree into three pairs and specify that the members of a pair may not be adjacent. Let us call such a tree a “paired tree”. Thus, a paired tree is a tree together with a perfect matching in its complement. Figure 1. The six six-node trees. There are eleven distinct pairings of the six-node trees. There are also eleven unfolded cubes. There is a one-to-one mapping from the set of unfolded cubes to the set of paired six-node trees. The mapping is shown in Figure 2. Figure 2. Mapping between unfolded cubes and paired six-node trees. There is a simple procedure for finding the unique paired tree that an unfolded cube maps to. 1. Pair the squares of an unfolded cube if the squares become opposite faces on folding. A cube has three pairs of opposite faces. 2. Replace the squares with points. 3. Connect two points if the squares they replaced were adjacent. An example is shown in Figure 3. Figure 3. Procedure for mapping an unfolded cube to a paired tree. This procedure will always produce a tree because the six squares of an unfolded cube will always be connected along five lines. If there were fewer than five connections, the six squares would not all be joined into a unit.
If there were more than five connections, the unfolded cube could not lie on a plane. For similar reasons, a six-node tree must have five edges. If a six-node graph has fewer than five edges, the six nodes will not all be joined into a unit. If a six-node graph has more than five edges, there will be a cycle in it. This procedure will never pair adjacent nodes in a tree because opposite faces of a cube are never adjacent. No unfolding of a cube can make opposite squares adjacent. Finally, it is clear that this procedure will always produce a unique paired tree. An unfolded cube maps to only one tree and uniquely describes the pairing of the nodes of the tree. Let us consider the inverse of this procedure. The inverse procedure finds the unique unfolded cube that a paired six-node tree maps to. Consider a cube. Number all the vertices and cut the cube apart into six squares. You should have something like that shown in Figure 4. Figure 4. Six squares with numbered vertices. Since we cannot cut a tesseract apart, let us look for a way to characterize this numbering. The simplest method is graphic, as shown in Figure 5. We may arrange these numbered squares into pairs of opposites as shown in Figure 6. Figure 5. Cutting and numbering a projected cube. Figure 6. Three pairs of opposing squares. We now give the procedure for finding the unique unfolded cube that any paired six-node tree maps to. 1. Replace each node of the tree with one of the above numbered squares, the only restriction being that paired nodes must be replaced by paired squares. 2. Connect two squares if the nodes they replaced were adjacent, the only restriction being that squares must be connected so that their numbers match. Note that some squares may be upside-down, and it is permitted to turn them over. 3. Now, remove the numbers. An example is shown in Figure 7. Figure 7. Procedure for mapping a paired tree to an unfolded cube.
It will always be possible to connect the squares so that their numbers match. Inspection of the above numbered squares will show that any two squares can be connected, so long as they are not both members of the same pair. Thus, there is a one-to-one mapping from the set of unfolded cubes to the set of paired six-node trees. The procedures given here can be generalized to any number of dimensions. Now let us consider the unfolding of the tesseract into 3-space. The Tesseract A tesseract projected onto two dimensions is shown in Figure 8. Figure 8. A tesseract projected onto two dimensions. A hollow tesseract is made up of eight solid cubes, just as a hollow cube is made up of six solid squares. The eight cubes may be put into four pairs of opposite cubes, just as the six squares of a cube may be put into three pairs of opposite squares. There is a one-to-one mapping from the set of unfolded tesseracts to the set of paired eight-node trees. An example is shown in Figure 9. Figure 9. Example of mapping between unfolded tesseracts and paired eight-node trees. There are twenty-three eight-node trees, as shown in Figure 10 [2]. Figure 10. The twenty-three eight-node trees. The 261 pairings of the eight-node trees are shown in Figures 11.1 through 11.24. Figure 11.1. Pairings of the 1st eight-node tree. Figure 11.2. Pairings of the 2nd eight-node tree. Figure 11.3. Pairings of the 3rd eight-node tree. Figure 11.4. Pairings of the 4th eight-node tree. Figure 11.5. Pairings of the 5th eight-node tree. Figure 11.6. Pairings of the 6th eight-node tree. Figure 11.7. Pairings of the 7th eight-node tree. Figure 11.8. Pairings of the 8th eight-node tree. Figure 11.9. Pairings of the 9th eight-node tree. Figure 11.10. Pairings of the 10th eight-node tree. Figure 11.11. Pairings of the 11th eight-node tree. Figure 11.12. Pairings of the 12th eight-node tree. Figure 11.13. Pairings of the 13th eight-node tree. Figure 11.14. Pairings of the 14th eight-node tree. Figure 11.15. 
Pairings of the 15th eight-node tree. Figure 11.16. Pairings of the 16th eight-node tree. Figure 11.17. Pairings of the 17th eight-node tree. Figure 11.18. Pairings of the 18th eight-node tree. Figure 11.19. Pairings of the 19th eight-node tree. Figure 11.20. Pairings of the 20th eight-node tree. Figure 11.21. Pairings of the 21st eight-node tree. Figure 11.22. Pairings of the 22nd eight-node tree. Figure 11.23. Pairings of the 23rd eight-node tree. Figure 11.24. Number of pairings of all eight-node trees. As far as I know, the only way to find the number of distinct pairings a tree can have is to exhaustively examine the possibilities. That is what I have done here. Note that there are some pairings of a tree which look distinct, but are actually identical. We may have two or more different representations of the same paired tree. Consider the example shown in Figure 12. Figure 12. Two representations of the same paired tree. There are 261 ways of pairing the eight-node trees. Thus, there are 261 unfolded tesseracts. There are 106 ten-node trees [2]. I have not determined how many ways they can be paired. An exhaustive examination of the possibilities will probably require a significant amount of computer time. Figure 13 shows the two four-node trees and Figure 14 shows the only way of pairing them. Thus, there is only one way of unfolding a square (Figure 15). Figure 13. The two four-node trees. Figure 14. The only pairing of the four-node trees. Figure 15. The only unfolding of a square. There is only one two-node tree, which cannot be paired (Figure 16). Figure 16. The only two-node tree. This gives us an infinite sequence: 1, 11, 261, .... As far as I know, this is a new sequence. Thanks to Martin Gardner of Scientific American for posing the problem, Norman Johnson of Wheaton College for encouragement, D.G. Corneil and E. Mendelsohn of the University of Toronto for assistance, and the University of Toronto for computer time. 1. M.
Gardner, Mathematical Games, Scientific American, 214:5, pp. 138-143, November 1966. 2. F. Harary and G. Prins, The Number of Homeomorphically Irreducible Trees and Other Species, Acta Mathematica, 101, pp. 141-162, 1959. Further Reading: A Brief Tutorial by Davide Cervone
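The smaller counts quoted above (one paired four-node tree, eleven paired six-node trees) are easy to verify by brute force. A sketch in Python, using Prüfer sequences to enumerate labeled trees and canonical labelings to discard isomorphic duplicates (n = 8 would work the same way in principle, but the 8! permutations per tree make it slow):

```python
from itertools import permutations, product
import heapq

def prufer_to_tree(seq, n):
    """Decode a Prüfer sequence into the edge list of a labeled tree."""
    degree = [1] * n
    for v in seq:
        degree[v] += 1
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for v in seq:
        leaf = heapq.heappop(leaves)
        edges.append((min(leaf, v), max(leaf, v)))
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    u, w = heapq.heappop(leaves), heapq.heappop(leaves)
    edges.append((min(u, w), max(u, w)))
    return tuple(sorted(edges))

def matchings(nodes, tree_edges):
    """Perfect matchings of `nodes` using only pairs NOT adjacent in the tree."""
    if not nodes:
        yield ()
        return
    a, rest = nodes[0], nodes[1:]
    for i, b in enumerate(rest):
        if (min(a, b), max(a, b)) not in tree_edges:
            for m in matchings(rest[:i] + rest[i + 1:], tree_edges):
                yield ((a, b),) + m

def relabel(pairs, p):
    return tuple(sorted((min(p[a], p[b]), max(p[a], p[b])) for a, b in pairs))

def count_paired_trees(n):
    """Distinct trees-with-pairings up to isomorphism (= distinct unfoldings)."""
    perms = list(permutations(range(n)))
    seen = set()
    for seq in product(range(n), repeat=n - 2):
        tree = prufer_to_tree(seq, n)
        tree_set = set(tree)
        # canonical labeling of the bare tree, keeping every permutation
        # that realizes it (these form an automorphism coset)
        best, reps = None, []
        for p in perms:
            e = relabel(tree, p)
            if best is None or e < best:
                best, reps = e, [p]
            elif e == best:
                reps.append(p)
        for m in matchings(list(range(n)), tree_set):
            seen.add((best, min(relabel(m, p) for p in reps)))
    return len(seen)
```

`count_paired_trees(4)` returns 1 and `count_paired_trees(6)` returns 11, matching the square and cube cases in the article.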
{"url":"https://unfolding.apperceptual.com/home","timestamp":"2024-11-08T05:23:58Z","content_type":"text/html","content_length":"160534","record_id":"<urn:uuid:09f62df5-69c6-4653-a472-fcd0007725c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00785.warc.gz"}
Calculating Daily Energy Expenditure using a Pedometer You can use a pedometer, which is a device for counting the number of steps you take, to estimate the amount of activity that you perform. From the number of steps taken, the walking distance and the number of calories burned, or energy expenditure, can also be calculated. Measuring how many calories you have burned each day is helpful if you are counting calories, trying to maintain or lose weight by matching this energy expenditure to the amount of food eaten. Using a pedometer is a simple way to estimate your daily energy expenditure, and this method of directly measuring your activity may be more accurate than some of the formulas available to calculate energy expenditure, though there are still some calculations involved, as well as a few assumptions. The advantages of using a pedometer to measure energy expenditure are that it can objectively assess activity levels, it is relatively simple to operate and low in cost, and it is small enough not to restrict physical activity. Accurately Measuring Steps and Distance The primary data measured by a pedometer is the number of steps taken. Some pedometers are more accurate than others at doing this. If you are using the step count for calculating your energy expenditure, you first want to make sure it is measuring that correctly. It is easy to check whether the pedometer is accurately measuring the right number of steps when you are walking or running. Simply walk for some time and manually count the number of steps you take, and compare this to the number displayed on the pedometer. Energy expenditure may also require a measure of the distance walked or run. To convert the number of steps to a distance, without the use of a GPS, the pedometer must know the average length of your stride. You can calibrate your stride length by walking a set distance and counting the number of steps taken.
Then it will be a simple process of dividing the distance by the number of steps. Just be aware that your stride length can vary; it will be longer when you are running than when you are walking. Also, your stride is likely to be shorter when going uphill as opposed to walking on a level surface. Accurately Measuring Energy Expenditure Step counting itself is not enough for estimating energy expenditure (Kumahara et al. 2009). The step count measure says nothing of the intensity and duration of the exercise, and does not account for body size and composition. Also, age and gender have an effect on energy expenditure. The problem with using a pedometer to estimate total energy expenditure is that there is no measure of the volume of work done, that is, there is no measure of the intensity and duration of the activities. Unless you indicate whether you are walking or running, the pedometer cannot discriminate between the two. There is also the problem of different exercise modes. A pedometer can be used to crudely measure your daily activity by counting steps. However, this is going to miss some activity, as you do not always take 'steps' when doing some activities; for example, if the pedometer is on your wrist it may not record cycling exercise very well. In conclusion, a pedometer itself is not sufficient to measure daily energy expenditure. Fortunately, there are not many such simple pedometers; most are combined with other sensors, such as GPS units and accelerometers, that also record distance, speed and intensity. With this additional information, plus factoring in age, gender and body weight, the energy expenditure can be more accurately calculated. Things to Consider Pedometers do not ... • always accurately count all steps • perform well during some activities (e.g. swimming) • measure exercise intensity or duration • measure stride length and distance covered • Kumahara, H., Tanaka, H. & Schutz, Y.
Are pedometers adequate instruments for assessing energy expenditure? Eur J Clin Nutr 63, 1425–1432 (2009). • Nielson R, Vehrs PR, Fellingham GW, Hager R, Prusak KA. Step counts and energy expenditure as estimated by pedometry during treadmill walking at different stride frequencies. J Phys Act Health. 2011 Sep;8(7):1004-13.
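The stride calibration and distance arithmetic described above are simple enough to script. In the sketch below the 0.53 kcal/(kg·km) walking cost is an illustrative rule-of-thumb assumption, not a figure from this article, and the function names are mine:

```python
def stride_length_m(calibration_distance_m, steps_counted):
    """Average stride length from a calibration walk over a known distance."""
    return calibration_distance_m / steps_counted

def distance_km(steps, stride_m):
    """Convert a daily step count to distance using the calibrated stride."""
    return steps * stride_m / 1000.0

def walking_kcal(dist_km, body_mass_kg, kcal_per_kg_km=0.53):
    """Very rough net energy cost of walking; the default factor is an
    assumed rule of thumb, and real devices also factor in age, gender
    and intensity."""
    return body_mass_kg * dist_km * kcal_per_kg_km

# e.g. 125 steps over a measured 100 m, then a 10,000-step day at 70 kg
stride = stride_length_m(100.0, 125)   # 0.8 m per step
day_km = distance_km(10000, stride)    # 8.0 km
day_kcal = walking_kcal(day_km, 70)
```

Remember that a single stride length is itself an approximation, since stride varies with speed and gradient as noted above.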
{"url":"https://ipv6.topendsports.com/weight-loss/energy-expenditure-pedometer.htm","timestamp":"2024-11-09T09:59:19Z","content_type":"text/html","content_length":"17021","record_id":"<urn:uuid:8f3d672b-66e2-4cb3-8ca9-10f57e60ee2d>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00248.warc.gz"}
International Conference "Selected Topics in Mathematical Physics" Dedicated to 75-th Anniversary of I. V. Volovich (September 27–30, 2021, online, Moscow) The conference is dedicated to the 75th anniversary of Igor Vasilievich Volovich, who has made significant contributions to many fields of mathematical physics. The conference will be organized online in Zoom; Meeting ID: 614 254 2078.
{"url":"https://m.mathnet.ru/php/conference.phtml?confid=1971&option_lang=eng","timestamp":"2024-11-11T17:07:12Z","content_type":"text/html","content_length":"39461","record_id":"<urn:uuid:3aa4df32-38c6-4211-b6d3-754cd4d01f41>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00586.warc.gz"}
In a helium gas discharge tube $40 \times 10^{18}$ He$^+$ move (Class 12 Physics, JEE Main) Hint: Current flows when there is flow of charges. The amount of charge flowing will give you the current. Also, the current is the total charge flowing with respect to time, so be careful while counting charges. Complete solution: Here, in this question the time is given to us. It is 1 second, so we need to see how much charge flows in 1 second. Here the 8 ampere current in the helium gas tube is due to the positive helium ions moving towards the right and also because of the n electrons moving towards the left. We can mathematically write it as: $8\,\text{A} = \dfrac{40 \times 10^{18} \times q_{He^+}}{1\,\text{s}} + \dfrac{n \times q_{electron}}{1\,\text{s}}$ (equation 1) Here $q_{He^+}$ is the magnitude of charge of a positive helium ion. Hence, $q_{He^+} = 1.6 \times 10^{-19}\,\text{C}$. Also, $q_{electron}$ is the magnitude of charge of an electron. So, $q_{electron} = 1.6 \times 10^{-19}\,\text{C}$. Thus, now we substitute these values in equation 1: $8 = 40 \times 10^{18} \times 1.6 \times 10^{-19} + n \times 1.6 \times 10^{-19}$ Solving this equation for n will give us the value of n: $8 = \left(40 \times 10^{18} + n\right) \times 1.6 \times 10^{-19}$ $40 \times 10^{18} + n = \dfrac{8}{1.6 \times 10^{-19}} = 50 \times 10^{18}$ Hence, $n = \left(50 - 40\right) \times 10^{18} = 10 \times 10^{18}$. Hence, option A is correct. Note: (1) The current through a conductor is equal to the charge flowing per second through its cross section. (2) For conductors like the gas tube, the positive and negative charges were both flowing, so the net current is the sum of the current due to the flow of positive charge per second and the flow of negative charge per second. (3) For conductors like a metal wire, the positive charge is not mobile, so we take into account the flow of negative charge only.
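The same arithmetic in a few lines of Python (the variable names are illustrative):

```python
e = 1.6e-19      # elementary charge in coulombs
I_total = 8.0    # total current in amperes
n_ions = 40e18   # He+ ions crossing the section per second

I_ions = n_ions * e                    # current carried by the positive ions
n_electrons = (I_total - I_ions) / e   # electrons per second needed
```

This gives 6.4 A from the ions and hence 10 × 10¹⁸ electrons per second, matching the worked solution.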
{"url":"https://www.vedantu.com/jee-main/in-a-helium-gas-discharge-tube-40-times-1018he-physics-question-answer","timestamp":"2024-11-04T02:28:53Z","content_type":"text/html","content_length":"147086","record_id":"<urn:uuid:52f8aafe-7dfa-49e9-b6d9-ecc6b76ecf95>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00145.warc.gz"}
A072843 - OEIS Named to commemorate the founder of the Australian Mathematics Competition, Peter O'Halloran, shortly before his untimely death in 1994. A. Edwards - "The Cellars At The Hotel Mathematics" - Keynote article in "Mathematics - Imagine The Possibilities" (Conference handbook for the MAV conference - 1997) pp. 18-19 The total surface areas of the smallest possible cuboids (1.1.1), (2.1.1), (2.2.1), (3.1.1) and (4.1.1) are, respectively, 6, 10, 16, 14 and 18 square units, assuming their side lengths are whole numbers. Thus the first two O'Halloran Numbers are 8 and 12, as they do not appear on this list of areas. Andy Edwards (AndynGen(AT)aol.com), Jul 24 2002
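The comment's claim is easy to check by enumerating the achievable surface areas 2(ab + bc + ca). A short sketch (the function name and the search bounds are mine, not from the OEIS entry):

```python
def ohalloran_numbers(limit):
    """Even numbers in [8, limit] that are not the total surface area
    2(ab + bc + ca) of any box with whole-number sides a <= b <= c."""
    areas = set()
    a = 1
    while 6 * a * a <= limit:          # cube a x a x a has the least area for a given a
        b = a
        while 2 * (2 * a * b + b * b) <= limit:   # least area for given a, b is at c = b
            c = b
            s = 2 * (a * b + b * c + c * a)
            while s <= limit:
                areas.add(s)
                c += 1
                s = 2 * (a * b + b * c + c * a)
            b += 1
        a += 1
    return [n for n in range(8, limit + 1, 2) if n not in areas]
```

Running it confirms that 8 and 12 are the first two O'Halloran numbers, followed by 20, 36 and 44.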
{"url":"https://oeis.org/A072843","timestamp":"2024-11-03T12:22:48Z","content_type":"text/html","content_length":"12886","record_id":"<urn:uuid:5b54503f-b9f5-4a39-8ef6-5bddeff49975>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00456.warc.gz"}
General Article Experimental and numerical investigation on water absorption and strength of lightweight concrete containing LECA and cold bitumen powder 30 September 2024. pp. 307-327
Hassanzadeh, Evaluation of compressive strength and rapid chloride permeability test of concretes containing metakaolin using Bayesian inference and GEP methods. Modares Civil Engineering journal. 21(1) (2021), pp. 203-217. A.S. Hosseini, P. Hajikarimi, M. Gandomi, F.M. Nejad, and A.H. Gandomi, Genetic programming to formulate viscoelastic behavior of modified asphalt binder. Construction and Building Materials. 286 (2021), 122954. H.A. Algaifi, A.S. Alqarni, R. Alyousef, S.A. Bakar, M.H.W. Ibrahim, S. Shahidan, M. Ibrahim, and B.A. Salami, Mathematical prediction of the compressive strength of bacterial concrete using gene expression programming. Ain Shams Engineering Journal. 12(4) (2021), pp. 3629-3639. H. Alabduljabbar, M. Khan, H.H. Awan, S.M. Eldin, R. Alyousef, and A.M. Mohamed, Predicting ultra-high-performance concrete compressive strength using gene expression programming method. Case Studies in Construction Materials. 18 (2023), e02074. P. Thamma and S. Barai, Prediction of compressive strength of cement using gene expression programming. in Applications of Soft Computing: From Theory to Praxis. 2009, Springer, pp. 203-212. A. Nazari and F.P. Torgal, Modeling the compressive strength of geopolymeric binders by gene expression programming-GEP. Expert Systems with Applications. 40(14) (2013), pp. 5427-5438. S. Mahdinia, H. Eskandari-Naddaf, and R. Shadnia, Effect of cement strength class on the prediction of compressive strength of cement mortar using GEP method. Construction and Building Materials. 198 (2019), pp. 27-41. H.A. Shah, S.K.U. Rehman, M.F. Javed, and Y. Iftikhar, Prediction of compressive and splitting tensile strength of concrete with fly ash by using gene expression programming. Structural Concrete. 23 (4) (2022), pp. 2435-2449. M.F. Javed, M.N. Amin, M.I. Shah, K. Khan, B. Iftikhar, F. Farooq, F. Aslam, R. Alyousef, and H. 
Alabduljabbar, Applications of gene expression programming and regression techniques for estimating compressive strength of bagasse ash based concrete. Crystals. 10(9) (2020), 737. F. Özcan, Gene expression programming based formulations for splitting tensile strength of concrete. Construction and Building Materials. 26(1) (2012), pp. 404-410. Y. Murad, A. Tarawneh, F. Arar, A. Al-Zu'bi, A. Al-Ghwairi, A. Al-Jaafreh, and M. Tarawneh, Flexural strength prediction for concrete beams reinforced with FRP bars using gene expression programming. Structures. 33 (2021), pp. 3163-3172. A. Gholampour, A.H. Gandomi, and T. Ozbakkaloglu, New formulations for mechanical properties of recycled aggregate concrete using gene expression programming. Construction and Building Materials. 130 (2017), pp. 122-145. H.A. Shah, Q. Yuan, U. Akmal, S.A. Shah, A. Salmi, Y.A. Awad, L.A. Shah, Y. Iftikhar, M.H. Javed, and M.I. Khan, Application of machine learning techniques for predicting compressive, splitting tensile, and flexural strengths of concrete with metakaolin. Materials, 15(15) (2022), 5435. H. Sabetifar and M. Nematzadeh, An evolutionary approach for formulation of ultimate shear strength of steel fiber-reinforced concrete beams using gene expression programming. Structures, 34 (2021), pp. 4965-4976. A.H. Gandomi, A.H. Alavi, T. Ting, and X.-S. Yang, Intelligent modeling and prediction of elastic modulus of concrete strength via gene expression programming. in Advances in Swarm Intelligence: 4th International Conference, ICSI 2013, Harbin, China, June 12-15, 2013, Proceedings, Part I 4, Springer, (2013), pp. 564-571. A.H. Gandomi, S.K. Babanajad, A.H. Alavi, and Y. Farnam, Novel approach to strength modeling of concrete under triaxial compression. Journal of Materials in Civil Engineering. 24(9) (2012), pp. A.H. Gandomi, G.J. Yun, and A.H. Alavi, An evolutionary approach for modeling of shear strength of RC deep beams. Materials and structures. 46 (2013), pp. 2109-2119. S.M. 
Mousavi, P. Aminian, A.H. Gandomi, A.H. Alavi, and H. Bolandi, A new predictive model for compressive strength of HPC using gene expression programming. Advances in Engineering Software. 45(1) (2012), pp. 105-114. Q. Tian, Y. Lu, J. Zhou, S. Song, L. Yang, T. Cheng, and J. Huang, Supplementary cementitious materials-based concrete porosity estimation using modeling approaches: A comparative study of GEP and MEP. Reviews on Advanced Materials Science. 63(1) (2024), 20230189. I. Husein, R. Sivaraman, S.H. Mohmmad, F.A.H. Al-Khafaji, S.I. Kadhim, and Y. Rezakhani, Predictive equations for estimation of the slump of concrete using GEP and MARS methods. Journal of Soft Computing in Civil Engineering. 8(2) (2024), pp. 1-18 D. Wang, M.N. Amin, K. Khan, S. Nazar, Y. Gamil, and T. Najeh, Comparing the efficacy of GEP and MEP algorithms in predicting concrete strength incorporating waste eggshell and waste glass powder. Developments in the Built Environment. 17 (2024), 100361. B. Huang, A. Bahrami, M.F. Javed, I. Azim, and M.A. Iqbal, Evolutionary Algorithms for Strength Prediction of Geopolymer Concrete. Buildings. 14(5) (2024), 1347. Z. Shahab, W. Anwar, M. Alyami, A.W. Hammad, H. Alabduljabbar, R. Nawaz, and M.F. Javed, Experimental investigation and predictive modeling of compressive strength and electrical resistivity of graphene nanoplatelets modified concrete. Materials Today Communications. 38 (2024), 107639. H.A. Poornamazian and M. Izadinia, Prediction of compressive strength of brick columns confined with FRP, FRCM, and SRG system using GEP and ANN methods. Journal of Engineering Research. 12(1) (2024), pp. 42-55. U. Asif, M.F. Javed, M. Abuhussain, M. Ali, W.A. Khan, and A. Mohamed, Predicting the mechanical properties of plastic concrete: An optimization method by using genetic programming and ensemble learners. Case Studies in Construction Materials. 20 (2024), e03135. A.S. Albostami, R.K.S. Al-Hamd, S. Alzabeebee, A. Minto, and S. 
Keawsawasvong, Application of soft computing in predicting the compressive strength of self-compacted concrete containing recyclable aggregate. Asian Journal of Civil Engineering. 25(1) (2024), pp. 183-196. A. C150, ASTM C150 standard specification for Portland cement. ed: ASTM International West Conshohocken, Pennsylvania, 2016. B. Standard, Testing hardened concrete. Compressive Strength of Test Specimens, BS EN, pp. 12390-3, 2009. C. Astm, Standard test method for density, absorption, and voids in hardened concrete. C642-13, 2013. O. Sylvester and B. Lukuman, Investigation of the properties of self-compacting concrete with palm kernel shell ash as mineral additive. Journal of Civil Engineering and Construction Technology. 9(2) (2018), pp. 11-18. A.A. Raheem and M.A. Kareem, Chemical composition and physical characteristics of rice husk ash blended cement. International Journal of Engineering Research in Africa. 32 (2017), pp. 25-35. A.D. TQ, A.R. Masoodi, and A.H. Gandomi, Unveiling the potential of an evolutionary approach for accurate compressive strength prediction of engineered cementitious composites. Case Studies in Construction Materials. 19 (2023), e02172. A. Singh, R. Niveda, A. Anand, A. Yadav, D. Kumar, and G. Verma, Mechanical properties of light weight concrete using lightweight expanded clay aggregate. Int. J. Res. Appl. Sci. Eng. Technol. 10 M. Ramya and S. Keerthipriyan, Experimental Study On Light Weight Concrete Using Leca (Light Weight Expanded Clay Aggregate). International Research Journal of Engineering and Technology, 7(5) K. Mamatha and M. Mothilal, Experimental Study Light Weight Concrete Using LECA, Silica Fumes, and Limestone as Aggregates. International Journal For Research in Applied Science and Engineering Technology. (2022). M. Othman, A. Sarayreh, R. Abdullah, N. Sarbini, M. Yassin, and H. Ahmad, Experimental study on lightweight concrete using lightweight expanded clay aggregate (LECA) and expanded perlite aggregate (EPA). J. 
Eng. Sci. Technol. 15(2) (2020), pp. 1186-1201. M.K. Yew, M.C. Yew, and J.H. Beh, Effects of Recycled Crushed Light Expanded Clay Aggregate on High Strength Lightweight Concrete. Materials International. 2 (2020), 0311-7. S.B. Chetan, An Experimental Investigation of Light Weight Concrete by Partial Replacement of Coarse Aggregate As LECA. International Journal of Science Technology & Engineering. 5(1) (2018). J. Karthik, H. Surendra, V. Prathibha, and G.A. Kumar, Experimental study on lightweight concrete using Leca, silica fume, and limestone as aggregates. Materials Today: Proceedings. 66 (2022), pp. A.J. Hamad, Size and shape effect of specimen on the compressive strength of HPLWFC reinforced with glass fibres. Journal of King Saud University-Engineering Sciences. 29(4) (2017), pp. 373-380. • Publisher :Sustainable Building Research Center (ERC) Innovative Durable Building and Infrastructure Research Center • Publisher(Ko) :건설구조물 내구성혁신 연구센터 • Journal Title :International Journal of Sustainable Building Technology and Urban Development • Volume : 15 • No :3 • Pages :307-327 • Received Date : 2024-04-16 • Accepted Date : 2024-09-22 • DOI :https://doi.org/10.22712/susb.20240023
Award of Menachem Witztum 50 Jubilee Tourney, C 2.9.2002

The award by Menachem Witztum is dedicated to the memory of his parents Yafa and Shmuel Witztum. (See an announcement and an example.)

List of participants

The director received 105 entries by 44 composers from 21 countries. 38 problems by 9 composers (20 of them by a single composer) were found non-thematic and/or unsound, and were returned to their authors. 9 of these came back as corrected versions. On 2 September 2002, 76 sound problems were submitted (anonymously) to the judge, Menachem Witztum, for evaluation. A number was assigned to each correct entry as follows: Uri Avner (Israel) 1, 5, 5a, 10, 18; Anatolij Vasilenko (Ukraine) 2; Árpád Molnár (Hungary) 3, 4, 7, 8, 9, 75, 76; Yoel Aloni (Israel) 11; Jean Haymann (Israel) 12, 13, 14; Jozef Lozek (Slovakia) 15, 16; Michael Shapiro (Israel) 17, 29-32, 37, 53, 54, 58; Fadil Abdurahmanović (Bosnia & Herzegovina) 19; Ricardo de Mattos Vieira (Brazil) 20; Johan de Boer (Netherlands) 21; Viktor Syzonenko (Ukraine) 22-24; Mircea M. Manolescu (Romania) 25-28; Michal Dragoun (Czech Republic) 33-35; Michal Dragoun & Dieter Müller (Czech Republic & Germany) 36; Colin Sydenham (England) 38, 74; Leonid Lyubashevsky (Israel) 39; Leonid Lyubashevsky & Leonid Makaronez (Israel) 40; Vito Rallo & Roberto Cassano 41; L. Togookhuu (Mongolia) 42-44; Jozsef Pasztor (Hungary) 45-47; Miomir Nedeljković (Yugoslavia) 48, 49; Mario Parrinello (Italy) 50-52; Raffi Ruppin (Israel) 55; Alexander Semenenko (Germany) 56; Valerij Semenenko (Ukraine) 57; Jorge Humberto Brun (Argentina) 59; Henryk Grudzinski (Poland) 60; Strahinja Mihajlović (Yugoslavia) 61; Borislav Gadjanski (Yugoslavia) 62; Vito Rallo & Roberto Cassano (Italy) 63, 64; Kamlik Karapetyan (Armenia) 65; Manfred Rittirsch (Germany) 66; Ion Murarasu (Romania) 67; Christer Jonsson (Sweden) 68; Franz Pachl (Germany) 69, 70; Franz Pachl & Dieter Müller (Germany) & Helmut Zajic (Austria) 71; V. Gorbunov, V. Shevchenko & V.
Melnikov (Ukraine) 72; Amith Sadeh (Israel) 73.

A lot of deliberation went into finding a theme for this tourney, preferably an original one that had never been used before. The theme finally selected is paradoxical: in a H#2, two pieces, black and white, must evacuate a line for a line-piece to arrive at the line for delivering the mate; also, the black King must actively arrive at the line. At first sight, these prerequisites may look irrational and impossible to obtain. The composer's challenge, then, would be to prove otherwise. I must admit that for a short while I was concerned lest the constraints posed by the theme would deter potential composers, so that I would be left alone with the prizes. To my great surprise and joy, more than 100 problems by 44 composers arrived (76 of which thematic), most of them of a very high level. The difficulty, so it seemed, had only motivated many composers to try and cope with the challenge.

There were a number of possibilities to overcome the technical problems inherent in the theme. To prevent the thematic black piece from interfering with the mate, several mechanisms are possible:
• the black piece enters a white line and is pinned by the black King's move.
• the black piece sacrifices itself.
• the black piece is a pawn.
• the black King moves to interfere with the line of the black piece.
• the white piece interferes with the line of the black piece.
Also, white must guard the square vacated by the black King by using one of several devices:
• a check on the first move.
• a guard by the mating line-piece.
• opening a white line by black.
• a guard by a move of a white piece.
• creation of a white battery where the front piece moves to guard the square.
The possible combinations of these different black and white options have given rise to a wide array of problems (also in conjunction with additional themes). The judging process was not easy, reflecting the difficulty of telling between good problems.
Surely, many problems not included in the award will do well in other tourneys. Special thanks go to the devoted director, Emanuel Navon, who did an excellent job every step of the way. Thanks to Zivko Janevski for the originality checking (only 3, very partial, predecessors were found). Further thanks to Michal Dragoun, who dedicated 2 interesting problems to me in the Czech magazine (of which one is brought in the annex). They were unfit for the tourney as they utilized a Zeroposition. My thanks to Uri Avner for translating the Hebrew text into English, and making this publication possible. Finally, many thanks to all the participants for the great pleasure their problems have given me. A problem composed in collaboration with the tourney director, comprising the Schiffmann effect, is dedicated to them (see annex). I ranked the problems as follows:

Menachem Witztum, Tel Aviv, June 2003

Fadil Abdurahmanovic, 1st Prize, M. Witztum 50 JT C 2.9.2002
1.Bg3 Sf5+ 2.Kf4 Qb8#
1.Bb3 Sc4+ 2.Kd4 Rd7#
An impressive mechanism. The 1st black move allows the black King a non-checking 2nd move, which pins a black piece, while White's 1st move shuts off a white Rook's line. All this in an economical setting without twinning. The problem is enjoyable and rich in content.
h#2 (7+10)

Michal Dragoun, 2nd Prize, M. Witztum 50 JT C 2.9.2002
a) 1.Bd5 Rxc4+ 2.Kxc4 Qh4#
b) 1.Rc6 Qxc5+ 2.Kxc5 Bf8#
By means of a first-rate technique, the thematic white pieces sacrifice themselves while capturing black pieces, thus letting the black King reach his final destination through grabbing them. The final accord is provided by the mating piece, which replaces its sacrificed colleague on the line. An excellent and surprising problem, despite its imperfect twinning mechanism and the somewhat clumsy construction.
h#2 (7+13) b) g4 -» c2

Borislav Gadjanski, 3rd Prize, M. Witztum 50 JT C 2.9.2002
a) 1.Sd3 Se6+ 2.Ke3 Qa7#
b) 1.Se6 Sd3+ 2.Kf5 Rb5#
An additional colorful element is provided by White's 1st move.
Besides guarding the black King's square, it also unpins the appropriate mating piece in each phase. A nice problem, showing a clever mechanism. The composer indicates 2 further tries in each phase showing moves from the other twin. The price for these tries is additional black pieces. However, the tries could be given up, making it possible to replace Re2 with a Pawn and shift the black Queen to d2.
h#2 (9+15) b) d4 -» e5

Franz Pachl, 4th Prize, M. Witztum 50 JT C
a) 1.Kd1 Sc7 2.Sxf3 Rd6#
b) 1.Kf2 Bc7 2.Sxd2 Rf6#
In the 2 phases Rb6 is unpinned by the arrival of the Knight and Bishop at c7. This is combined with interesting pinnings of the black Knight. The twinning device concords with the theme.
h#2 (7+13) b) d2 «-» f3

Mario Parrinello, 5th Prize, M. Witztum 50 JT C 2.9.2002
a) 1.Kf3 Bxb7 2.Sxf2 Kxd6#
b) 1.Kd2 Rxd6 2.Sxf4+ Kxc5#
A royal battery combined with different self-pins of the black Knight, producing, due to the twinning device, an impressive problem.
h#2 (8+11) b) e4 «-» d3

Uri Avner, 6th Prize, M. Witztum 50 JT C
a) 1.Kh4 Ra4 2.g3 Ke5#
b) 1.Kg2 Bxc6 2.f2 Kd4#
Creation of a royal battery where the white King must choose the correct square to avoid closing the white line just opened by black (the Mari theme). The wBe3 of the 1st twin disturbs a little. A very light setting.
h#2 (4+11) b) -wBe3

Jean Haymann, 1st HM, M. Witztum 50 JT C 2.9.2002
1.Kc3 Bf8 2.Sxc6 Qh8#
1.Kd3 Re7 2.Sxf3 Rd8#
A smooth execution, where the mate is given by a line-piece that replaces the line-piece that has left the interval to guard squares next to the black King.
h#2 (9+5)

Franz Pachl, 2nd HM, M. Witztum 50 JT C
a) 1.Ke4 Qc4 2.Qb7+ Bxb7#
b) 1.Kf6 Qc3 2.Qa6 Rxa6#
The white Queen is the white piece that is leaving the interval, whereas the mate is given by another line-piece which captures the sacrificed black Queen. A clever mechanism. The twinning concords with the theme.
h#2 (5+11) b) d5 «-» e6

Miomir Nedeljkovic, 3rd HM, M.
Witztum 50 JT C 2.9.2002
a) 1.Se3 Sd4+ 2.Ke4 Bb7#
b) 1.Sf6 Se4 2.Ke5 Ra5#
A pleasing execution where the black King reaches the squares previously guarded by 2 white pieces.
h#2 (9+4) b) c6 -» c5

Franz Pachl, Dieter Müller & Helmut Zajic, 4th HM, M. Witztum 50 JT C 2.9.2002
a) 1.Kd3 Bg6 2.Rf7 Sf6#
b) 1.Ke5 Re1 2.Bc1 Sd2#
An interesting combination of interferences. There is a self-interference by the parting black piece on the square left by the white piece, while the 2nd white piece interferes with the line of the interfering black piece.
h#2 (9+13) b) e1 -» f1

Viktor Syzonenko, 5th HM, M. Witztum 50 JT C 2.9.2002
a) 1.Be2 Rf5+ 2.Ke4 Qh4#
b) 1.Se2 Rd4+ 2.Ke5 Qh2#
In each phase a white Rook interferes with a white Bishop, a black piece is pinned, and a mate is delivered by the white Queen. Interesting.
h#2 (6+10) b) d4 -» c3

Uri Avner, 6th HM, M. Witztum 50 JT C 2.9.2002
1.Se2 Qxd5+ 2.Kg4 Ra4#
1.Bc6 Qxf4+ 2.Ke6 Bb3#
The unbelievably free white Queen captures the unemployed black piece in each solution. A nice construction with model mates.
h#2 (5+7)

Lkhundevin Togookhuu, 1st Comm, M. Witztum 50 JT C
a) 1.Be5 Rh4 2.Kd4 Kxf3#
b) 1.Sd2 Bh5 2.Ke2 Kxf4#
A royal battery where the white King captures the corresponding black piece. A light construction, but the twinning by moving a thematic piece is a bit
h#2 (4+8) b) g7 -» f7

Alexandr Semenenko, 2nd Comm, M. Witztum 50 JT C 2.9.2002
1.g5 Sc5+ 2.Kc6 Rh6#
1.f4 Sd8+ 2.Kc8 Qh3#
The white Knight interferes with a white line in both solutions. A nice and efficient mechanism.
h#2 (4+12)

Mircea Manolescu, 3rd Comm, M. Witztum 50 JT C 2.9.2002
1.Kc1 Qh6 2.R4f6 gxf6#
1.Kd3 Qd6 2.Se6 dxe6#
A unified mechanism of capturing the black piece by a white Pawn.
h#2 (9+9)

Jean Haymann, 4th-8th Comm, M. Witztum 50 JT C
a) 1.Kd3 Sb5 2.Qxd7 Rxd7#
b) 1.Kc3 Rd5 2.Qxg7+ Qxg7#
An interesting mechanism, where the white Queen and Rook reciprocally capture the black Queen that captures one of them.
A problem by C. J. Feather shows a similar representation, not including the reciprocal capturing.
h#2 (8+8) b) e3 -» a1

Mircea Manolescu, 4th-8th Comm, M. Witztum 50 JT C 2.9.2002
1.Qf3 Bc5+ 2.Kg3 Qb8#
1.Qe3 Qf7 2.Kf3 Be6#
Different moves by the black Queen, belonging to 2 thematic lines, bring about different mates by the white Queen, and this without twinning.
h#2 (5+11)

Michal Dragoun, 4th-8th Comm, M. Witztum 50 JT C 2.9.2002
a) 1.Qb2 Qh8 2.Kc3 Rxb5#
b) 1.Qb4 Qg4 2.Kc4 Bxc2#
The black Queen takes refuge behind the black King. Cute!
h#2 (5+13) b) d3 -» b2

Raffi Ruppin, 4th-8th Comm, M. Witztum 50 JT C 2.9.2002
1.Sa5 bxc6+ 2.Ka6 Bd3#
1.Ra6 d6 2.Ka8 Be4#
Indirect and direct unpinning of the white Bishop. A feathery position. The capture of the black Rook impairs a little.
h#2 (6+4)

Colin Sydenham, 4th-8th Comm, M. Witztum 50 JT C 2.9.2002
a) 1.Kg4 Qc8 2.Rfb5 Re5#
b) 1.Kg5 Qg1 2.Re3 Bf3#
The black Rook closes black lines while its own lines are closed by white pieces. This determines the move order. Interesting.
h#2 (5+13) b) f5 «-» g3

There are two additional problems I would like to mention, not so much because they are better than other problems not included in the award, but because they show certain elements not tackled by the problems above:

Arpad Molnar, M. Witztum 50 JT C 2.9.2002
a) 1.Ka8 Bh1 2.Rd4+ Kxd4#
b) 1.Kc6 Bg2 2.Re5+ Kxe5#
The black King moves in 2 directions on the thematic line, while the white Bishop has to reach the right square on this line as well.
h#2 (7+9) b) h3 -» c7

Jorge Humberto Brun, M. Witztum 50 JT C 2.9.2002
a) 1.Rxd4+ Rg5 2.Ke5 Qh2#
b) 1.Be4 d5 2.Kxd5 Qd1#
The black pieces pin themselves. The two phases are imbalanced and short of unity.
h#2 (6+12) b) g3 -» e1

Michal Dragoun, Sachova Skladba 2003, dedicated to M. Witztum 50
a) 1.Ke3 Sxe6 2.Sgxe6 Qh6#
b) 1.Kd3 Sxf5 2.Sxf5 Rd7#
h#2 (6+15) a) d6 -» b2 b) f4 -» d1

Menachem Witztum & Emanuel Navon, M. Witztum 50 JT award 2003, dedicated to participants of the tourney
a) 1.Kd6 Sc5 (Sf4?)
2.Qxd2 Rxd2# b) 1.Kf5 Sf4 (Sc5?) 2.Qxc2 Bxc2# h#2 (9+8) b) d4 -» e4
Some Spectral Properties of Uniform Hypergraphs

Keywords: Hypergraph eigenvalue, Adjacency tensor, Laplacian tensor, Signless Laplacian tensor, Power hypergraph

For a $k$-uniform hypergraph $H$, we obtain some trace formulas for the Laplacian tensor of $H$, which imply that $\sum_{i=1}^n d_i^s$ ($s=1,\ldots,k$) is determined by the Laplacian spectrum of $H$, where $d_1,\ldots,d_n$ is the degree sequence of $H$. Using trace formulas for the Laplacian tensor, we obtain expressions for some coefficients of the Laplacian polynomial of a regular hypergraph. We give some spectral characterizations of odd-bipartite hypergraphs, and give a partial answer to a question posed by Shao et al (2014). We also give some spectral properties of power hypergraphs, and show that a conjecture posed by Hu et al (2013) holds under certain conditions.
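The quantity $\sum_{i=1}^n d_i^s$ in the abstract is simply a power sum of vertex degrees, which can be computed directly from an edge list. The sketch below is my own illustration of that quantity on a made-up 3-uniform hypergraph; it does not implement the paper's trace formulas, and the example edge set is arbitrary.

```python
from collections import Counter
from itertools import chain

def degree_power_sums(edges, n, k):
    """Power sums sum_i d_i**s for s = 1..k of a k-uniform hypergraph
    on vertices 0..n-1, given as a list of k-element edges."""
    deg = Counter(chain.from_iterable(edges))  # degree = number of edges containing v
    degrees = [deg.get(v, 0) for v in range(n)]
    return [sum(d ** s for d in degrees) for s in range(1, k + 1)]

# A small 3-uniform hypergraph on 4 vertices (hypothetical example):
# degrees are (2, 2, 3, 2), so the sums are 9, 21, 51 for s = 1, 2, 3.
edges = [(0, 1, 2), (0, 2, 3), (1, 2, 3)]
sums = degree_power_sums(edges, n=4, k=3)  # -> [9, 21, 51]
```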
Absolute Zero - What Happens That Counters Classical Mechanics

[Image: Bose-Einstein condensate at billionths of a degree. Image by NIST]

Classical physics suggests that at absolute zero, particles cease all motion. But what about quantum mechanics, the science of the very small? Ah, therein lies the rest of the story.

From Gas to Frigid Solid

It is a well-established fact that heated gas atoms or molecules move with great vigor. In fact, gas expands as energy increases, due to increased particle momentum. The reverse is also true. Cool a gas and it shrinks. Particle motion decreases. Particle momentum decreases. The atoms or molecules come closer together. At some point a liquid forms. Cool the liquid further and the result is a solid. Keep cooling the solid, and in theory it is possible to reach the coldest known temperature. That temperature is absolute zero degrees Kelvin. What happens then? Does all atomic or molecular motion stop?

Absolute Zero Degrees Kelvin

Despite the predictions of classical physics, at absolute zero degrees Kelvin all motion will not cease. The remaining motion is called "zero point vibrational energy." It is defined as the quantum ground state of all matter. The phenomenon is described by the Heisenberg Uncertainty Principle, which dictates that one cannot simultaneously describe the position and momentum (mass times velocity) of any particle.

Value of the Zero Point Energy

Momentum is equal to mass times velocity. If the velocity of all particles at absolute zero were zero, the momentum would be zero. The location of each particle would be known absolutely. This would violate the Heisenberg principle. In fact, the zero point energy is equal to one-half the value of Planck's constant times the frequency of the vibration of the harmonic oscillator (the particle) at absolute zero. The frequency depends upon the mass of the particle. See the references cited at the end of this article for specifics.
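The "one-half Planck's constant times frequency" relation, E0 = (1/2) h f, is simple enough to evaluate directly. The sketch below is only illustrative; the 1 THz frequency in the example is an arbitrary assumption of mine, not a value from this article.

```python
# Zero-point energy of a quantum harmonic oscillator: E0 = (1/2) * h * f.
PLANCK_H = 6.62607015e-34  # Planck's constant in J*s (exact 2019 SI value)

def zero_point_energy(frequency_hz: float) -> float:
    """Ground-state (zero-point) energy in joules for an oscillator of frequency f."""
    return 0.5 * PLANCK_H * frequency_hz

# Example: a hypothetical 1 THz vibrational mode
e0 = zero_point_energy(1.0e12)  # about 3.31e-22 J
```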
An Interesting Result for Helium-4

The most abundant form of helium is helium-4. The nucleus of helium-4 contains two protons and two neutrons. At 2.17° Kelvin, helium-4 becomes a superfluid. However, the zero-point vibrational energy for helium-4 is greater than the energy the substance would have in a solid lattice. So it cannot be converted into a solid at ordinary pressure, being the only known substance to fall into that category. There is no classical theorem to explain this. Only quantum mechanics describes such behavior. Liquid helium can be frozen if approximately 25 atmospheres of pressure are applied in the process.
8 Epistemological Issues in Maths

VIII. Epistemological Issues in Mathematics

The following are a few reflections on the Philosophy of Mathematics, which I venture to offer although not a mathematician, having over time encountered[1] treatments of issues that as a philosopher and logician I found questionable. The assault on reason throughout the 20th Century has also had its effects on the way philosophers of mathematics understood the developments in that subject. Having a different epistemological background, I can propose alternative viewpoints on certain topics, even while admitting great gaps in my knowledge of mathematics.

Attending lectures on the work of Jean Piaget, I was struck by the confusion between logic and mathematics in his identification of learning processes. Some that I would label as mathematical, he labeled as logical; and vice versa. This is of course due to the blurring of the distinction found in a lot of modern logic. There are two aspects to this issue, according to the direction of use:

a) Mathematics is used in logic. Mathematics, here, refers mainly to arithmetic and geometry; for instance, in considerations of quantity (or more broadly, modality) in the structure of propositions or within syllogistic or a fortiori arguments.

b) Logic is used in mathematics. Logic is here intended in a broad sense, including the art (individual insights) and the science (concepts, forms and process) of logic; for instance, logic is used to formulate conditions and consequences of mathematical operations. For example, the statement "IF there are 100 X at time t1 AND there are 150 X at time t2, THEN the rate of change in number of X was (150 – 100)/(t2 – t1) per unit time." Here mathematical concepts (the numbers 100, 150, t1 and t2) are embedded in the antecedent (if) of a hypothetical proposition (implication), and additionally a formula (viz.
(150 – 100)/(t2 – t1)) for calculating a new quantity is embedded in the consequent (then), derived from the given quantities.

The logical part of that statement is the "if-then-" statement. What makes it logical is that it is a form not limited to mathematics, but which recurs in other fields of knowledge (physics, psychology, whatever). It is a thought process (the act of understanding and forming a proposition) with wider applicability than mathematical contexts; it is more general.

The mathematical part of said statement is the listed numerical concepts involved and the calculation based on them – the operations involved (in the present case, two subtractions and a division). The insight that the proposed formula indeed results in the desired knowledge (the resulting quantity) belongs to mathematics. Logic here only serves to conceptually/verbally express a certain relation (the implication) established by mathematical reasoning.

We should also note the mathematical elements found in defining the "if-then-" form – notably the appeal to a geometrical example or analogy of overlapping circles (Euler or Venn diagrams). Nevertheless, there clearly remains in such forms a purely logical, in the sense of non-mathematical, element; such explanations cannot fully express their meaning. The quantitative part is merely the visible tip of the iceberg of meaning; the qualitative – more broadly conceptual – part is more difficult to verbalize and so relatively ignored.

Of course, we can also say that in the largest sense of the term logic – discourse, thought process – even mathematical reasoning is logic. The division is ultimately artificial and redundant. Nevertheless, these subjects have evolved somewhat separately, with specialists in mathematics and specialists in more general (or the rest of) logic.
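The worked example above — the if-then statement wrapped around an arithmetic formula — can be mirrored in code, where the conditional is the "logical" part and the subtraction and division are the "mathematical" part. This is my own illustration; the function and variable names are hypothetical, not drawn from the essay.

```python
def rate_of_change(count1, t1, count2, t2):
    """Average rate of change in the number of X per unit time."""
    if t2 == t1:
        # the logical part: a precondition the implication depends on
        raise ValueError("t1 and t2 must differ")
    # the mathematical part: two subtractions and a division
    return (count2 - count1) / (t2 - t1)

# The example from the text: 100 X at t1 = 0, 150 X at t2 = 5
rate = rate_of_change(100, 0, 150, 5)  # 50 / 5 = 10.0 per unit time
```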
It is also probable, judging by the work of Jean Piaget and successors in child learning processes, that different logical or mathematical concepts and processes are learned at different ages/periods of early childhood, and there are variations in temporal order from one child to another. Historically, it is a fact that we have adopted the separation of these investigations and a division of labor, so that logic and mathematics have been considered distinct subjects of study. Of course, there has been much communication and intertwining between these two fields, and indeed attempts at merger. Here, I merely want to indicate where the boundaries of the distinction might lie. Specifically quantitative concepts and operations are mathematics; whereas logic deals with thought processes found in other fields besides. In this view, mathematics is quantitative discourse, whereas logic is (also) non-quantitative discourse. By making such fine distinctions, we can for instance hope to better study human mental development.

The idea that mathematical systems such as Hilbert's[2] are "axiomatic" – that is, pure of any dependence on experience – is a recurring myth, which is based on an erroneous view of how knowledge of this field has developed. I have discussed the source of this fallacy at length in my Future Logic (see chapter 64, among others); here I wish to make some additional, more specific remarks. I do not deny that Hilbert's postulates are mutually consistent and by themselves sufficient to develop geometrical science. My objection is simply to the pretentious claim that his words and propositions are devoid of reference to experience. We need only indicate the use of logical expressions like "exists," "belonging," "including," "if – then –," etc., or mathematical ones like "two," "points," "line," etc., to see the dependence. Take for example the concept of a group (to which something "belongs" or in which something is "included").
The concept is not a disembodied abstract, but has a history within knowledge. The idea of grouping is perhaps derived from the practice of herding animals into an enclosure or some such concrete activity. The animals could all be cows – but might well be cows mixed with goats and sheep. So membership in the group (presence in the enclosure) does not necessarily imply a certain uniformity (a class, based on distinctive similarity – e.g. cows), but may be arbitrary (all kinds of animals, say). Thus, incidentally, the word group has a wider, less specific connotation than the word class (which involves comparison and contrast work). Without such a physical example or mental image of concrete grouping, the word would have no meaning to us at all.

So, genetically, the word grouping – and derived expressions like belonging or including, etc. – presupposes a geometrical experience of some sort (a herding enclosure or whatever). We cannot thereafter, after thousands of years of history of development of the science of geometry, claim that the word has meaning without reference to experience. Such a claim is guilty of forgetfulness, and to claim that geometry can be built up from it is circular reasoning and concept-stealing. It would be impossible for us to follow Hilbert's presentation without bringing to mind visual images of points, successions of points, lines crisscrossing each other, this or that side of a line, etc. Those images at least are themselves mental objects in internal space, if not also end products of our past experiences of physical objects in external space.
The value and justification of Hilbert's work (and similar attempts, like Euclid's) is not that it liberates geometry from concrete experiences of objects in space, but merely that it logically orders geometrical propositions so that they are placed in order of dependence on each other (from the least to the most).[3]

Geometrical "axioms" are thus not absolutes somehow intuited ex nihilo, or arbitrary rules in a purely symbolic system[4], but hypotheses made comprehensible and reasonable thanks to experience. That experience, as I argue below, need only be phenomenal (it does not ultimately matter whether it is "real" or "merely illusory") but it needs to be there in the first place. That experience does not have to give us the axioms ready-made – they remain open to debate – but it gives us the concepts underlying the terms we use in formulating such axioms. In this sense, geometry – and similarly all mathematics – is fundamentally empirical (in a phenomenological sense) – even if much rational work is required beyond that basic experience to express, compare and order geometrical propositions.

It is futile to attempt to avoid this observation by talking of successions of symbolic objects, A, B, C. Even here, I am imagining the symbols A, B, C in my mind or on paper as themselves concrete objects placed in sequence next to each other! I am still appealing to a visual – experiential and spatial – field. Thus, any claim to transcend experience is naïve or dishonest. Experience is evidently a sine qua non for any axiomatization, even though it is clearly not a sufficient condition. The experiences make possible and anchor the axioms, but admittedly do not definitely prove them – they remain hypotheses[5]. Geometry is certainly not, as some claim, a deductive science, but very much an inductive one, and the same is true of other mathematical disciplines.

3.1 The so-called axioms of geometry have changed epistemological status in history as follows:

a) At first, they seemed obvious, i.e. immediately proved by experience (naïve view). But the naïve view, not being based on reflection, is rejected as such once reflection begins.

b) Then they were regarded as axioms, i.e. theses without possible credible alternatives (axiomatic view). But this view, which is a worthy attempt to justify the preceding, suffers upon further reflection from an apparent arbitrariness. The label "axiom" is found to be a pretentious claim to an absolute – when denial of it does not result in any contradiction.

c) Then it was considered that they were merely credible hypotheses among other possibilities, i.e. that alternative hypotheses were conceivable and possibly credible (hypothetical view). One can even imagine that different geometries might be applicable in different contexts, and regard the Euclidean model as approximately representative on the human everyday scale of things, and thus consider that all or many of these alternative hypotheses are equally credible.

d) Then they were thought to be pure inventions of the human mind, incapable of either verification or falsification (speculative view). This view may at first sight seem epistemologically unacceptable, since it claims to transcend the hypothetical view and posits to know a truth that is by definition beyond our testing abilities. However, it must be understood in the context of the doubt in the existence of geometrical points, lines or surfaces. That is, it is a denial of geometrical science as such.

However, as we shall see, these latter criticisms can themselves be subjected to rebuttal, especially on phenomenological grounds.

3.2 The arguments put forward against geometrical science as such[6] are indeed forceful.
We have considered the main ones in the section on 'Unity In Plurality,' pointing out that physical objects do not, according to modern physical theories based on scientific experiments, have precise corners or edges or surfaces, but fuzzy, arbitrarily defined limits, so that we are forced to admit all things as ultimately just ripples in a single world-wide entity.[7]

There might be a fundamental weakness in such argumentation – a logical fault it glosses over. If the whole of modern physical science is itself based on the existence and coherence of geometrical science (by which I of course do not mean only Euclidean geometry, but all the discipline developed and accepted over time by mathematicians), can it then turn around and draw skeptical conclusions about that Geometry? Remember, all the mathematics of waves and particles, of space and time, were used as premises, together with empirical results of physical experiments, to inductively formulate and test the physical theories we currently adhere to – can the latter physical conclusions then be used to argue against these very mathematical premises?

Logically, there is no real self-contradiction in this. The sequence is: "Math theory" (together with empirical findings) implies "Physical theory," which in turn implies doubt on the initial "Math theory." So what we have in fact is denial of (part of) the antecedent by the consequent, which is not logically impossible, though odd. The consequent is not denying itself, although it puts its own parent in doubt.

Thus, a more pondered and moderate thesis about geometry has to be formulated, which avoids such difficulties while taking into account the aforesaid criticisms regarding points, lines and surfaces. Waves and particles (which are presumably clusters of waves) may somehow be conceivable and calculable, without heavy reliance on the primary objects of our current geometry (points, lines and surfaces), which apparently have no clear correspondence in nature.
In the meantime, our current geometry can legitimately be used as a working hypothesis, since it gives credence to our physical theories.

3.3 Let us now consider where the extreme critics of geometry may have erred. We can accept as given the proposition that no dimensionless points, no purely one-dimensional lines, no purely two-dimensional surfaces (Euclidean or otherwise) can be pointed to in natural space-time accessible to us. This is granting that to exemplify such primary objects of geometry we would need to find material objects with definite tips, edges or sides – whereas we know that all material objects are made of atoms, themselves made of elementary particles, themselves very fuzzy objects, apparently subject to Heisenberg's Uncertainty principle.

Nevertheless, we tend to regard the ultimate nature of these nondescript bodies to be clusters of "waves of energy". This is of course a broad statement, which ignores the particle-wave predicament and which rushes forth in anticipation of a unified field theory; furthermore, it does not address the question regarding what it is that is being waved, since the Ether assumed by Descartes has since the experiments of Michelson and Morley and Einstein's Relativity theory been (apparently definitively) discredited. But my purpose here is not to affirm this wave view of matter as the ultimate truth, but rather to consider the impact of supposing that everything is waves on our question about the status of geometry. For if particles are eventually decided to be definitely not entirely reducible to waves, then geometry would be justified by the partial existence of particles alone; so the issue relates to waves.

If we refer to the simplest possible wave, whatever it be, a gravitational field or a ray of light – it behaves like a crease or dent in the fabric of the non-ether where waves operate (to use language which is merely figurative). Such hypothetical simplest fractions of waves surely have a geometrical nature of some sort.
That is to say, if we could look[8] that deep into nature, we would expect to discern precise points, lines and surfaces – even if at a grosser level of matter we admittedly cannot. Thus, I submit, the possible wave-nature of all matter is not really a forceful argument against geometry. Even if we can never in practice precisely discern points, lines and surfaces, because there may be no material bodies of finite shape and size, geometry remains conceivable, as a characteristic of a world of waves.

All the above is said in passing, to clear out side issues, but is not the main thrust of my argument in defense of geometry. We admittedly can perhaps never hope to perceive waves directly, i.e. our assumption of their geometrical nature is mere speculation. But that is not an argument of much force against geometry as such, in view of its existence and practical successes, which mean that geometry is not speculation in the sense of a thesis incapable of verification or falsification, a pure act of faith, but more in the way of a hypothesis that is repeatedly confirmed though never definitely proved. Simply an inductive truth – like most scientific truths about nature! But let us consider more precisely how geometry actually arises in human knowledge. It has two foundations, one experiential (in a large sense) and the other conceptual.

3.4 The experiential aspect of geometrical belief is that there seem to be points, lines (straight or curved), surfaces (flat or warped) and volumes (of whatever shape) in the apparently material world we sense around us as well as in the apparently mental world of our imaginings. This seeming to be is enough to found a perfectly real and valid geometry. The justification of geometry is primarily phenomenological, not naturalistic! Seeming is (I remind you) the appearance, or (in this case) phenomenal, level of existence, prior to any judgment as to whether such phenomenon is a reality or an illusion.
In other words, geometrical objects do not have to be proven to be realities – in the sense of things actually found in an objective physical nature – they would be equally interesting if they were mere illusions! Because illusions, too, be they mere 'physical illusions' (like reflection or refraction) or mental projections, are existents, open to study like realities. The study of phenomena prior to their classification as realities or illusions is called phenomenology. At the phenomenological level, 'seeming to be' and 'being' are one and the same copula. Only later, on the basis of broad, contextual considerations, is a judgment properly made as to the epistemological status of particular appearances, some being pronounced illusions, and the remainder being admitted as realities[9].

If, therefore, geometrical science has a phenomenological status, i.e. if it is a science that can and needs be constructed already at the level of phenomena, it is independent of ultimate discoveries about the physical world. The mere fact, admitted by all, including radical critics of geometry, that we get the impression, at the human everyday level of perception, that a table has four corners and sides and a flat top, suffices to justify geometry. This middle-distance depth of perception, even if it is ultimately belied at the microscopic level of atoms or the macroscopic level of galaxies, still can and has to be considered and analyzed. A science of geometry only requires apparent points, lines and surfaces.

And even if this last argument were rejected, saying that the points, lines and surfaces we seem to see in our table are just mental projections by us onto it, we can reply that even so, mental projections of points, lines and surfaces are themselves real-enough objects existing somehow in this world. They may be illusions, in the sense that they wrongly inform us about the external world, they may be purely internal constructs, but they still even as such exist.
A subjective existent is as much an existent as an objective one – in the sense that both are equally well phenomena. The mental matrix of imagination, at least, must therefore be capable of sustaining such geometrical objects. And if this restricted part of the world – our minds – displays points, lines and surfaces – then geometry is fully justified, even if the rest of the world – the presumed material part – turns out to be incapable of such a feat and geometry turns out to be inapplicable to it. But the latter prospect thus becomes very tenuous!

As long as geometry could be rejected in principle, by the elusiveness of its claimed objects under the microscope, there was a frightening problem. But once we realize that the very existence of Geometry requires the possibility somewhere of the concretization of its objects – even if only as a figment of our imaginations – the problem is dissolved. In short, our very ability to discuss geometrical objects, if only to doubt their very existence, is proof of our ability to at least produce them in the mind, and therefore of their ability to exist somewhere in this world. And if all admit that geometrical objects can exist in some part of the world (the mental part at least), then it is rather inductively difficult and arbitrary to deny without strong additional evidence that they exist elsewhere (in the material part). The onus of proof reverts to the deniers of material geometry.

3.5 The conceptual aspect of geometrical belief must however be emphasized, because it moderates our previous remarks concerning the experiential aspect. Conceptualization of geometrical objects has three components, two positive ones and a negative one.
a) The primary positive aspect of geometrical conception consists of rough observation, abstraction and classification, which (i) refers to the above mentioned concrete samples of points, lines, surfaces and volumes, apparent in the material and mental domains of ordinary experience – this is phenomenological observation; and (ii) observes their distinctive similarities (e.g. that this and that shape are both lines, even though one is straight and short and the other is long and curved, say) – this is abstraction; and (iii) groups them accordingly under chosen names – this is classification.

b) The negative aspect of geometrical conception is the intentional act of negation, reflecting the inadequacy of mere reference to raw experience. Unlike their empirical inspirations, a theoretical point has no dimension (no length, no breadth, no depth); a theoretical line is extended in only one dimension – it has no surface; a theoretical surface in only two dimensions – it has no volume. Each theoretical geometrical object excludes certain empirical extensions. It is thus an abstraction (based on concretes, of course) rather than a pure concrete.

As I have explained elsewhere, negation is a major source of human concepts, allowing us to form them without any direct experience of their objects. That is, while the concrete referents of "X" may be directly perceivable, those of "Non-X" need not be so. We consider defining them by negation of X as sufficient – since every thing (except the largest concept "thing", or existent) has to have a negation, since every thing within the universe is limited and leaves room for something else. Such negative definition of the geometrical objects is not, however, purely verbal or a mere conjunction of previous concepts ("not" + "X"). There is an active imaginative aspect involved. I mentally, or on paper, draw a point or a line, and mentally exclude or rub-off further extensions from it.
Thus, even if my mental matrix, or my pencil and paper, may be in practice unable to exemplify for me a truly dimensionless point or fine line or mere surface, I mentally dismiss all excessive thickness in my sample. This act may be viewed as a perceptual equivalent of conceptual negation.

c) Another, more daring positive conceptual act may be called assimilation, which we can broadly define as: regarding something considerably different as considerably similar. This is a more creative progression by means of somewhat forced simile or analogy, through which we expand the senses of terms. For example, the concept of a "dimension" of space is passed on to time. The Cartesian fourth dimension is at first perhaps thought up as a convenient tool, but eventually it is reified and in Einstein we find it cannot be dissociated from space. Our initial concept of dimension has thus shifted over into something slightly different, since the time extension of bodies is distinctively one-directional and not as visible as their space extensions (see more on this topic in earlier chapters).

Another example is the evolution from Euclidean geometry, the first system that comes to mind from ordinary experience (and in the history of geometrical science), to the later Non-Euclidean systems. A shape considered as "curved" in the initial system is classed as "straight" or "flat" in another system. We have to assimilate this mentally – i.e. say to ourselves: within this new geometrical system, straightness or flatness has another concrete meaning than before, yet the role played by these previously curved shapes in it is equivalent to that played by straight lines or flat surfaces in the Euclidean system.

Note well how ordinary experience of everyday events and shapes is repeatedly and constantly appealed to by the mind in all three of the above conceptual acts.
It is important to stress this fact, because some mathematicians try to ignore such experiential grounding and cavalierly claim that what they do is independent of any experience. The whole of the present essay is intended to belie them, by increasing awareness of the actual genetic processes underlying the development of mathematical sciences. The academic exercise of formulating the starting assumptions ("axioms") of the various geometrical systems does not occur in a vacuum. In order to understand whether "parallels" meet or not, I visualize ordinary (Euclidean) parallels, then imagine them curving towards each other or curving apart; then I say "even though they meet or spread apart, I may still call them parallel within alternative geometrical systems". Without some sort of concretization, however forced, the words or symbols used would be meaningless.

3.6 Finally, I'd like to mention here in passing that many of the remarks made here about geometry apply to other fields of mathematics. Thus, arithmetic should also be viewed as a phenomenological science. That is, its primary objects – the unit ("1") and growing collections of such units ("2", "3", etc.) – that is, natural, whole, positive, real numbers – do not require any reference to an established "reality," but could equally be constructed from a sense field (visual or other) composed entirely of illusory events or entities. It is enough that something appears before us to concretely grasp a unit, and many things, to concretely grasp the pluralities. By arithmetic entities, we initially mean units and pluralities (the natural numbers). These objects, which are not unrelated to geometrical objects, need only be phenomenal. One can conceptualize a unit and pluralities of units equally well from an illusory or imaginary field of perception as from a real one. The sense-modality involved is also irrelevant: shapes, sounds, touch-spots, items smelt or tasted – any of these can be units.
What is the epistemological status of novel arithmetical entities? Some mathematicians apparently claim that a concept like the negative number –1 or the imaginary number √-1 is a "new entity" incapable of being reduced to its constituent operations (–, √) and numbers (1, etc.). The definitions of such abstract entities are given in series of equations like:

–1 + 1 = 0, –2 + 2 = 0, etc.

√-1 • √-1 = –1, √-2 • √-2 = –2, etc.

However, this means that the signs used (–, +, =, √, •, ⁄, etc.) are each in turn a new thing in each definition, even though presented to us in the same physical form (symbol-shape and name) as existing entities. Here, the sign that was originally an operator (a relational concept between two terms) has become attached to a term (making of it a new term) – so that the sign itself has changed nature.[10]

It seems clear to me that this doctrine of irreducibility and newness, while a good-faith try at explaining the leaps of imagination involved in such mathematical concepts, in fact involves some dishonesty, since such definitions tacitly rely on the implicit meanings of the building blocks that are their sources both logically and in the progression and history of thought. Rather we should, in my view, look at these leaps as indefinite stretching of meaning, i.e. we say: "let this concept (–, √, whatever) be widened somewhat (to an undecided, undetermined extent) so that the following analogy be possible…." This extending of meaning (or intension) is itself imaginary, in that we cannot actually trace it (just as we cannot concretize the concept of infinity by actually going to infinity, but accept a hazy non-ending).

(Such development by analogy is nothing special. As I have shown throughout my work, all conceptualization is based on grouping by similarity, of varying precision or vagueness – or the negation of such. Terms are rarely pre-definable, but are usually open-ended entities whose meaning may evolve intuitively as more referents are encountered.)

We thus produce doubly imaginary hypothetical entities. And here an analogy to the concrete sciences is possible, in that the properties of such abstract entities are tested (in accordance with adductive principles), not only logically in relation to conventions and arbitrary laws initially set up by our imagination (as the said mathematicians claim), but also empirically in relation to the properties known to be obtained for natural numbers. Natural numbers, therefore, do not merely constitute a small segment of the arsenal of mathematical entities (as they claim), but have the status of limiting cases for all other categories of numbers (negatives, imaginaries, etc.)[11]. If any proposed new abstract formula does not work for natural numbers, it is surely rejected.

This is evident, for instance, in William Hamilton's attempted analogy from couples to triplets. He found that though complex numbers expressed as couples (with one imaginary number, i² = –1) could readily be multiplied together, in the case of triplets (using two imaginary numbers, i² = j² = –1) results inconsistent with expectations emerged when natural numbers were inserted in the formula.[12]

Note particularly this reference to two (or more) different imaginary numbers, namely i and j, whose squares are both equal to –1. Here, we introduce j as an imaginary extension of the concept of i that has no distinguishing mark other than the symbolic difference applied to it! We simply imagine that the meaning of j might somehow differ from that of i, so that although i² = j² = –1 it does not follow that i = j = √-1 (or even that ij = –1). An unstated and unspecified differentia is assumed but never in fact provided[13]. This is yet another broadening of mathematics "by stretching" (i.e. by unsupported analogy, as above explained).[14]

The example here referred to clearly shows that, however fanciful its constructs (by definition and analogy), mathematics undergoes an occasional empirical grounding with reference to natural numbers, which limits the expansiveness of its imagination and ensures its objectivity. New mathematical entities, although initiated by mere conventions or arbitrary postulates, must ultimately pass the test of applicability to natural numbers, i.e. consistency with their laws, to be acceptable as true mathematics. Natural numbers thus fix empirical restrictions on the development of theoretical mathematics.

If I may be allowed some far-out, unorthodox, amateur reflections, consider the following concerning fractions of natural numbers[15]. A physical body can only really be divided into n parts, say, if it has a number of constituents (be these molecules or atoms or elementary particles or quarks or whatever) divisible exactly by n – otherwise, the expression 1/n has no realistic solution! For example, a hydrogen atom cannot be divided by two, unless perhaps its constituent elementary particles contained an even number of quarks. Or again, if I wanted to divide (by volume or weight) an apple fairly among three children, it would have to have a number of identical apple molecules precisely divisible by three. Otherwise, each child would get 0.333… (recurring) part of an apple – which we have no experimental proof is practically possible and indeed we know is not!

The concept of an infinitely recurring decimal is a big problem – consider the debates about π (pi) in the history of mathematics. How can I even imagine going on adding digits to infinity, when I know my life, and that of humanity, and indeed of the Universe, are limited in time, and when I know that space is physically limited so that there would not be place enough for a real infinity of digits even if there were time enough? Surely, such a concept may be viewed as an antinomy.
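Hamilton's leap, and the "unspecified differentia" between i and j, can be made concrete in a small numerical sketch (a hedged illustration of my own, not part of the text; quaternions are represented as (w, x, y, z) tuples standing for w + xi + yj + zk, and the multiplication table ij = k, ji = –k supplies exactly the differentia that the bare definitions i² = j² = –1 leave unstated):

```python
def qmul(a, b):
    """Multiply two quaternions given as (w, x, y, z) tuples,
    using Hamilton's rules i^2 = j^2 = k^2 = ijk = -1."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

I = (0, 1, 0, 0)          # the unit i
J = (0, 0, 1, 0)          # the unit j
K = (0, 0, 0, 1)          # the unit k
MINUS_ONE = (-1, 0, 0, 0)

# Both i and j square to -1, yet i != j: the differentia appears only
# in how they combine with each other (i*j = k, but j*i = -k).
assert qmul(I, I) == MINUS_ONE
assert qmul(J, J) == MINUS_ONE
assert I != J
assert qmul(I, J) == K
assert qmul(J, I) == (0, 0, 0, -1)

def norm_sq(q):
    """Sum of squares of the four components."""
    return sum(c*c for c in q)

# The 'test against natural numbers': the norm of a product equals the
# product of the norms -- true for couples (complex numbers) and for
# quadruplets (quaternions), but unobtainable for Hamilton's triplets.
p, q = (1, 2, 3, 4), (5, 6, 7, 8)
assert norm_sq(qmul(p, q)) == norm_sq(p) * norm_sq(q)
```

The last assertion is the kind of empirical check the text describes: whatever imaginative stretching defined i, j and k, the resulting system is only accepted because formulas built from natural numbers still come out consistent.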
What this means is that arithmetic as we know it is not necessarily a thoroughly "empirical" science – it is an ideal assuming infinite divisibility of its objects. The mere fact that I can imagine an apple or atom as divisible at will does not make it so in the real world. Though in some cases the number ½ or 1/3 may have a real object, a realistic solution, in many cases this is in fact a false assumption.

Even in the mental domain, although we can seemingly perfectly divide objects projected in the matrix of imagination (whatever its "substance" may be), it does not follow that viewed on a very fine level (supposing we one day find tools to do so) such division is always in fact concretely possible. These thoughts do not invalidate the whole of arithmetic, but call for an additional field or system of arithmetic where the assumption of infinite divisibility of integers is not granted. That is, in addition to the current "ideal" or a-priori arithmetic (involving "hypothetical" entities, like improper fractions or recurring decimals), we apparently need to develop a thoroughly "empirical" or a-posteriori – one might say positivist – arithmetic, applicable to contexts where division does not function.[17] The same may of course be said of the related field of geometry.

Infinite divisibility is a mere postulate, which may stand as an adopted axiom of a restricted system, but which should not at the outset exclude alternative postulates being considered for adjacent systems. The mathematics based on such a postulate may be effective – it seems to work out okay, so perhaps its loose ends cancel each other out in the long run – but then again, the development of other approaches may perhaps result in some new and important discoveries in other fields (e.g. quantum mechanics or unified field theory).

Why should mathematics be exempt from the pragmatic considerations and norms of knowledge used in physics? Can it, like alchemy or astrology were once, be uncritically based partly on fantasies?
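What a division that refuses the postulate of infinite divisibility might look like can be sketched as follows (the function name and its behavior are my own hypothetical illustration of the text's proposal for an "empirical" arithmetic, not any established system):

```python
from fractions import Fraction

def divide_exactly(units, n):
    """Divide a collection of discrete units into n equal parts.
    Returns the size of each part, or None when no exact division
    exists -- i.e. division 'does not function' in this context."""
    if n > 0 and units % n == 0:
        return units // n
    return None

# Six identical items split three ways works; seven does not.
assert divide_exactly(6, 3) == 2
assert divide_exactly(7, 3) is None

# 'Ideal' arithmetic, by contrast, grants 1/3 unconditionally,
# hiding the infinite recurrence inside an exact symbol:
third = Fraction(1, 3)
assert third + third + third == 1   # exact, but only symbolically
assert float(third) != 0.333333     # any finite decimal falls short
```

The contrast mirrors the apple example above: the exact symbol 1/3 always "exists" in the ideal system, while the empirical operation succeeds only when the count of constituents happens to be divisible by three.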
Surely, every field of knowledge must ultimately be in perfect, holistic accord with every other field and with all experience – to be called a "science" at all. The division of knowledge into fields is merely a useful artifice, not intended to justify double standards and ignorance of seemingly relevant details. Once philosophy has understood the inductive nature of knowledge, it demands severe scrutiny of all claims to a-priori truth and strict harmony with all a-posteriori truths.

We could get even more picky and annoying, and argue that no material (or mental) body is as finite as it appears, as we did in the section on 'Unity In Plurality.'[19] Since the limits of all material or mental entities are set arbitrarily, it follows that everything is one and the same thing, and that nothing is at all in fact divisible. However, such (almost metaphysical) reflections need not (and won't) stop us from pursuing mathematical knowledge, since they gloss over issues to do with causality[20].

That mathematical science is, like all knowledge, inductive, and not merely deductive, is evident from any reading of the history of the subject. Mathematicians understand the word induction in a limited sense, with reference to leaps from examples or special cases to generalities (abstractions or generalizations) or to analogies ("as there, so here" statements). But I am referring here to many more processes. Individual mathematicians, as they develop mathematics, use trial and error (adduction), putting forward hypotheses and analyzing their consequences, rejecting some as inadequate. Initially accepted mathematical propositions have often been found mistaken by other or later mathematicians, due for instance to vagueness in definitions or to short-circuits in processing, and duly criticized and corrected. Mathematicians are well aware of the breadth of their methodology in practice.
Mathematics is a creative enterprise for them, quite different from the learning process students of the subject use. The latter have the end-results given them on a platter, so that their approach is much more deductive. Mathematicians do not merely recycle established techniques to solve problems and develop new content; to advance they have to repeatedly innovate and conceive of new techniques.

[1] Notably in 1998, when I attended certain courses at Geneva University, such as lectures (I forget by whom) on the work of Jean Piaget and others given by Prof. J.-C. Pont on the History of Mathematics. Many (but not all) of the notes in this essay date from those encounters.

[2] I write this looking at a university handout listing the “axioms (or plan) of Hilbert’s system”, in four groups (belonging, order, congruence and parallels). I was struck with the numerous appeals to “stolen concepts” in it (see Future Logic, chapter 31.2).

[3] Even purely “logical” if-then statements depend for their understanding on geometrical experience. When I define “if P, then Q” as “P and nonQ cannot coexist” – I visualize a place and time where P and nonQ are together (overlapping) and then negate this vision (mentally cross it off). One cannot just ignore that aspect of the ideation and claim a purely abstract knowledge.

[5] Euclid’s axioms were the first attempted hypotheses; Hilbert and others later attempted alternative hypotheses.

[6] Note well, this is not a discussion of space and time, but of the discipline called Geometry.

[7] See chapter IV.5, above.

[8] Of course, such looking would have to be independent of a Heisenberg effect. A pure act of consciousness without material product. Clearly, this assumes that consciousness is ultimately a direct relation to matter, which transcends matter. Heisenberg’s argument refers to experimental acts, interactions of matter with matter, which we use to substitute consciousness of an effect for that of its cause.
The Uncertainty principle is not a principle about consciousness modifying its objects, but about the impossibility of unobtrusive experiment.

[9] At which stage “is” acquires a more narrow and ambitious meaning than “seems to be”.

[10] Personally, with reference to terminology used in formal logic, I would say that negative numbers or irrational numbers or imaginary numbers are compounds of copula and predicate. They are artificial predicates, consisting of a normal predicate (final term) combined with the relational factor (copula) to any eventual subject (first term). They “hold-over” or “carry-over” a potential operation – that of subtracting or finding a root or both – until the unstated term (the subject) is specified. Such expressions give rise to a predicate in the original sense (i.e. a number), and disappear, when the operation is actually effected. Their status as effective predicates is only utilitarian. It is interesting to note, in this context, that within general logic, such permutation (as it is called) is not always permissible (see my treatment of the Russell Paradox, in Future Logic chapter 45, for example). For this reason, one should always be careful with such processes.

[11] Natural numbers have, and thus retain, an exceptional ontological status. Their derivatives are thus inductively adapted to the previously established algebraic properties of natural numbers. The point of all this is, of course, to develop a universally effective algebra – processes and rules that function identically for natural numbers and all their derivatives, uniform behavior patterns.

[12] Later, he showed that quadruplets or quaternions – involving three imaginary numbers i, j, k – could however be multiplied together. Similarly with an eight-element analogy.

[13] At a later stage, these different imaginary numbers i, j, k, etc. are associated with geometrical dimensions – but such application is not relevant at the initial defining stage.
[14] I should here repeat that this mental process is not limited to the mathematical field. For instance, in psychology, when we speak of “mental feelings”, as distinct from physical feelings (experienced viscerally, in the chest or stomach or rest of the body, whether of mental or purely physical source), we are engaging in such analogy. By definition, mental feelings (e.g. I like you) have no concrete manifestation that we can point to; we introduce them into our thinking by positing that they are somehow, somewhat similar to feelings experienced in the physical domain, but they occur in the mental domain and are much less substantial (more abstract). The word “feeling” thus takes on a new wider meaning, even though we have no clear evidence (other than behavioral evidence of certain values) for the existence of a mental variety of it. Thus, Mathematics should not be singled out and scolded for using such processes – they are found used in all fields – but it is important to notice where such leaps of imagination occur and acknowledge them for what they are, so that we remain able to test them empirically as far as possible. Incidentally, such leaps are comparatively rare in Logic.

[15] I spoke of these ideas once, back in April 1998, at a round-table at the Archives Piaget in Geneva.

[16] We should also perhaps make a distinction between divisibility and separability. Even if I may distinguish a number of equal parts in a body, I may not in fact (by some natural or conventional law) be able to actually isolate these constituents from each other. In which case, what would division of that number by itself factually mean? Would, say, 5/5 equal 1, or would it be a meaningless formula, without solution? Is 5/5 = 1 a universal equation, or is it only true in specific situations? (By conventional law, I mean for example: when farthings or halfpennies were withdrawn from circulation, a penny could no longer be subdivided in accounting.)
[17] Clearly, I am using the word “empirical” here in a specific sense. Even “ideal” arithmetic has an empirical basis, in the sense that at least its primary objects – the natural numbers 1, 2, 3, … – are phenomenological givens. But it does not follow that further processes, such as division, always have an empirical basis – hence my use of the adjective thoroughly empirical.

[18] For all I know, such alternative mathematics already exist. I do not claim to know the field, nor have any desire to seem original or revolutionary. These are primarily philosophical reflections.

[19] See chapter IV.5, above.

[20] Which issues I will be dealing with in my forthcoming work on the subject.

Avi Sion
Solution #6: Match Point Mystery

To go straight to the solution, scroll down the page. To go to the puzzle, CLICK HERE.

PART ONE: The original puzzle

The original version of the puzzle, by Colin Beveridge, only includes the first three facts: the match went to five sets; both players won the same number of games; and the games won by one player followed an arithmetic progression. Since Colin’s page gives a detailed explanation of the possible scorelines under these conditions, I will not waste time reproducing that here. Suffice to say that there are ten possibilities. Note that, for some of these, the common difference of the arithmetic series is zero.

• 6-4, 7-5, 1-6, 6-7, 10-8
• 6-4, 7-5, 2-6, 5-7, 10-8
• 5-7, 1-6, 7-5, 6-4, 6-3
• 6-7, 0-6, 7-5, 6-4, 6-3
• 4-6, 4-6, 7-6, 7-6, 8-6
• 4-6, 7-6, 4-6, 7-6, 8-6
• 7-6, 4-6, 4-6, 7-6, 8-6
• 4-6, 7-6, 7-6, 4-6, 8-6
• 7-6, 4-6, 7-6, 4-6, 8-6
• 7-6, 7-6, 4-6, 4-6, 8-6

In every case, the player whose games form an arithmetic series loses the match.

PART TWO: Every set was won with an ace

The key point here is that the person who served the last point of each set also won that set. Since the serving player alternates for each game of a tennis match, this tells us a great deal about which scorelines are possible. If a set has an even number of games, then the final game is served by the same player that served the final game of the previous set. If it has an odd number of games, then the final game is served by the player who did not serve the final game of the previous set.

For each of the scorelines above, if we start with the assumption that the final set was won by the person who served the last point, we can use the above reasoning to work out who must have served the final game of each set. Let us go through the scorelines again. In each case, the winning score in each set is in bold, the server for the last game is underlined.
For a scoreline to be a solution to our puzzle, all the bold and underlined scores must coincide:

• 6-4, 7-5, 1-6, 6–7, 10-8
• 6-4, 7-5, 2–6, 5–7, 10-8
• 5-7, 1–6, 7-5, 6-4, 6-3
• 6–7, 0–6, 7-5, 6-4, 6-3
• 4–6, 4–6, 7–6, 7-6, 8-6
• 4–6, 7–6, 4-6, 7-6, 8-6
• 7–6, 4-6, 4-6, 7-6, 8-6
• 4–6, 7–6, 7-6, 4–6, 8-6
• 7–6, 4-6, 7-6, 4–6, 8-6
• 7–6, 7-6, 4–6, 4–6, 8-6

As we can see, there are actually no scorelines that meet our criteria! However, there is a catch. In a tiebreaker, the serving player switches after every odd-numbered point. This means that either player can serve the final point of a tiebreaker, so our bold and underlined scores do not need to coincide in sets where the result was 7-6. With this added leeway, we see that there are actually two scorelines in which every set could have been won with an ace, the first and seventh in the above list:

• 6-4, 7-5, 1-6, 6-7, 10-8
• 7-6, 4-6, 4-6, 7-6, 8-6

PART THREE: You won eight consecutive games on two separate occasions

It remains to decide which of these two scorelines is correct and whether you were the winning or losing player. Eight consecutive games can only be won across at least two sets (it is impossible to win eight consecutive games in the final set) and then only if you won the first of these. A winning run can only be extended across three sets (or more) if a set is won 6-0. In a set that did not reach a tie-breaker, it is possible that the winning player won all their games consecutively and can carry the winning run on into the next set. In a set that did reach a tie-breaker, the winning player finished on a winning run of at most two consecutive games. With the exception of sets won to love, a maximum of five consecutive games can be won at the start of a set.

Given all these observations, we see that in the scoreline 6-4, 7-5, 1-6, 6-7, 10-8, the winning player could have won 11 consecutive games from the first set to the second, but can have made no other winning runs of significant length.
The losing player could win 11 consecutive games from the third to the fourth set, but only 7 from the fourth to the fifth, with no other possible significant winning runs.

In the scoreline 7-6, 4-6, 4-6, 7-6, 8-6, the winning player has little scope for consecutive winning runs, having won both of his first two sets on tie-breakers, allowing only a run of 7 consecutive games from the fourth set to the fifth. However, the losing player could win 8 consecutive games from the second set to the third and 9 more from the third to the fourth (or vice versa).

This is therefore the only scoreline that meets all our conditions, and you were the losing player. The score was 7-6, 4-6, 4-6, 7-6, 8-6. You lost.

Thomas Oléron Evans, 2015
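As a postscript (not part of the original solution), the serve-order filtering from Part Two can be re-checked mechanically. The sketch below, in Python, encodes exactly the rules stated above: the final game of a set is served by the same player who served the final game of the previous set if and only if the set had an even number of games, and tie-break sets (7-6) are exempt because either player can serve the last point of a tie-break.

```python
# Part Two check: fix the server of the final set's last game to be its
# winner, then walk backwards through the sets, flipping the server whenever
# a set has an odd number of games. Tie-break sets are unconstrained.

SCORELINES = [
    [(6, 4), (7, 5), (1, 6), (6, 7), (10, 8)],
    [(6, 4), (7, 5), (2, 6), (5, 7), (10, 8)],
    [(5, 7), (1, 6), (7, 5), (6, 4), (6, 3)],
    [(6, 7), (0, 6), (7, 5), (6, 4), (6, 3)],
    [(4, 6), (4, 6), (7, 6), (7, 6), (8, 6)],
    [(4, 6), (7, 6), (4, 6), (7, 6), (8, 6)],
    [(7, 6), (4, 6), (4, 6), (7, 6), (8, 6)],
    [(4, 6), (7, 6), (7, 6), (4, 6), (8, 6)],
    [(7, 6), (4, 6), (7, 6), (4, 6), (8, 6)],
    [(7, 6), (7, 6), (4, 6), (4, 6), (8, 6)],
]

def every_set_can_end_on_winners_serve(sets):
    winner = lambda s: 0 if s[0] > s[1] else 1
    tiebreak = lambda s: sorted(s) == [6, 7]
    # No final set in the list is a tie-break, so its last game must be
    # served (and won, with an ace) by the set's winner.
    server = winner(sets[-1])
    ok = True
    for i in range(len(sets) - 1, -1, -1):
        if not tiebreak(sets[i]) and server != winner(sets[i]):
            ok = False
        if i > 0 and sum(sets[i]) % 2 == 1:
            server = 1 - server  # odd game count flips the last-game server
    return ok

feasible = [i for i, s in enumerate(SCORELINES)
            if every_set_can_end_on_winners_serve(s)]
```

Running this leaves exactly the first and seventh scorelines, agreeing with the argument above.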
Middle School Math

After poring over articles and suggestions on how to set up a math intervention class (Math RTI), I found the following three components of intervention to be the most recurring:

*Step 1 - Assess your students with a basic skills assessment.

Math Fluency Practice – 10 minutes
Have students practice basic operations for about 10 minutes each day. Start with addition, then move to subtraction, multiplication and division. After that it can be fractions, decimals, etc. Here are some ways to focus on math fluency in your class:

Problem-Solving Practice
You can use any word problems from your core materials or make up your own. It is highly recommended that students work intensively with problem-solving during intervention. The suggested sequence is to: 1) show students one problem from start to finish, 2) work together on a second similar problem and then 3) give them a third problem using the same operation. Pick one problem-solving strategy and have students use it consistently.

Individual practice to address students' missing skills
Once you figure out where each student is, they should work on the skills that will help get them to grade level. www.IXL.com or www.easycbm.com can help identify which skills students should work on. Practice could be done online through the IXL website, or you can gather materials to help students practice. Check out the math links page to find other supporting websites.

For more information on Response to Intervention for math check out: http://ies.ed.gov/ncee/wwc/pdf/practiceguides/rti_math_pg_042109.pdf.
ECCC - Reports tagged with Brouwer's fixed point

Xi Chen, Xiaotie Deng, Shang-Hua Teng

By proving that the problem of computing a $1/n^{\Theta(1)}$-approximate Nash equilibrium remains \textbf{PPAD}-complete, we show that the BIMATRIX game is not likely to have a fully polynomial-time approximation scheme. In other words, no algorithm with time polynomial in $n$ and $1/\epsilon$ can compute an $\epsilon$-approximate Nash equilibrium of an $n\times ...
Poisson's Equation | AtomsTalk

Poisson’s Equation

Poisson’s equation is a partial differential equation that has many applications in physics. It helps model various physical situations. Knowing how to solve it is an essential tool for mathematical physicists in many fields.

The Mathematical Statement

Mathematically, Poisson’s equation is as follows:

Δu = v

Δ is the Laplacian; v and u are functions we wish to study. Usually, v is given, along with some boundary conditions, and we have to solve for u. A special case is when v is zero. This is called Laplace’s equation:

Δu = 0

In common applications, the Laplacian is often written as ∇². The choice of which coordinates to expand the Laplacian in depends on the conditions of the problem.

Solving the Equation

Poisson’s equation is a linear second-order differential equation. This means that the strategies used to solve other, similar partial differential equations also work here. One popular method is separation of variables. To use this, we must simplify the Laplacian. If we are dealing with more than one dimension, this can be done by using a suitable coordinate system. Then, if we expand the Laplacian, we can assume a variable-separable solution. The final solution can then be attempted by solving for each of the coordinates separately. It is important to make sure that the solution meets the boundary conditions.

Another, more general method uses the Green’s function. This is a function that is defined to satisfy the Poisson equation at specific points in space. A total solution can then be arrived at by taking together different solutions with appropriate weights.

When such analytical methods cannot give exact solutions, we use numerical methods to arrive at approximate solutions. These often use looping algorithms. The algorithm usually starts with a trial solution, which is improved on each repetition of the loop.

(Figure: what a possible numerical solution can look like.)

Matrix methods are a powerful tool in such cases.
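To make the "trial solution improved on each loop" idea concrete, here is a minimal sketch (not from the article) of Jacobi iteration for the 1-D Poisson equation u'' = v on [0, 1] with fixed (Dirichlet) boundary values; the function name and parameters are my own illustration.

```python
# Jacobi iteration for the 1-D Poisson equation u'' = v on [0, 1].
# Each pass replaces every interior value with the average of its
# neighbours, corrected by the source term.

def solve_poisson_1d(v, u_left, u_right, n=21, iterations=20000):
    h = 1.0 / (n - 1)
    u = [0.0] * n                       # trial solution
    u[0], u[-1] = u_left, u_right       # fixed boundary values
    src = [v(i * h) for i in range(n)]
    for _ in range(iterations):
        new = u[:]
        for i in range(1, n - 1):
            # Discrete Poisson: (u[i-1] - 2*u[i] + u[i+1]) / h^2 = v(x_i)
            new[i] = 0.5 * (u[i - 1] + u[i + 1] - h * h * src[i])
        u = new
    return u

# Laplace special case (v = 0) between u(0) = 0 and u(1) = 1:
# the converged solution is the straight line u(x) = x.
u = solve_poisson_1d(lambda x: 0.0, 0.0, 1.0)
```

In practice one would stop when successive iterates stop changing (or switch to the matrix methods mentioned above), rather than run a fixed number of loops.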
The problem discussed here turns up later in the electrostatics section.

Occurrences in Physics

Conservative Forces

One situation in which Poisson’s equation turns up often is the case of conservative forces, or fields. A conservative field F can be written as the gradient of a potential ϕ:

F = ∇ϕ

At the same time, the field is related to some other quantity ρ as its divergence:

∇ · F = kρ

where k is some constant. Using the above two equations, we get a Poisson’s equation:

∇²ϕ = kρ

The specific case determines the identity of the functions ϕ, ρ and F.

In the case of electrostatics, ρ corresponds to charge density and F is the electric field E. In the special case of the electric field being conservative, ϕ becomes the electric potential. Specific solutions depend on how the charge density is distributed. Dealing with the limiting case of a lone point charge, we get an expression that can be derived from Coulomb’s law, where the charge q takes the place of the charge density ρ.

(Figure: the physical view of the numerical solution viewed earlier. The mesh-like area represents the variation of electric potential. The coloured regions correspond to known charges, or equivalently, potentials.)

The logic is similar here, except we get gravitational potential and fields instead of the electrostatic versions. Mass density replaces charge density. Again, in the special case of a point mass, we get an expression that relates to Newtonian gravity, where m is the mass and G is the gravitational constant.

In Other Situations

Poisson’s equation also turns up in other regions of physics.

Heat Flow

We can model heat flow using a second-order partial differential equation:

∂T/∂t = α∇²T

This takes the form of the Poisson (or Laplace) equation when the time derivative is a constant (or zero), which corresponds to a steady change in temperature. We can then solve for the temperature T based on boundary conditions.

(Figure: modelling heat flow. The high, red region represents a hot area.)
Over time, the heat spreads out. The equation involved reduces to Poisson’s in some special cases.

Navier-Stokes Equation

Parts of the Navier-Stokes equation, which deal with fluid flow, take the form of the Poisson’s equation in some specific cases. The same logic used in the previous cases can be extended here.
Implementation details - Factor Documentation

The data in a gadget should be sorted by non-descending x coordinate. In a large data set this allows us to quickly find the left and right intersection points with the viewport using binary search and remove the irrelevant data from further processing. If the resulting sequence is empty (i.e. the entire data set is completely to the left or to the right of the viewport), nothing is drawn.

If there are several points with the same x coordinate matching the left edge of the viewport, the leftmost of those is found and included in the resulting set. The same adjustment is done for the right point if it matches the right edge, only this time the rightmost is searched for.

If there are no points with either the left or the right boundary x coordinate, and the line spans beyond the viewport in either of those directions, the corresponding points are calculated and added to the data set.

After we've got a subset of data that's completely within the bounds, we check if the resulting data are completely above or completely below the viewport, and if so, nothing is drawn. This involves finding the minimum and maximum values by traversing the remaining data, which is why it's important to cut away the irrelevant data first and to make sure the coordinates for the boundary points are in the data set. All of the above is done by a dedicated word.

At this point either the data set is empty, or there is at least some intersection between the data and the viewport. The task of the next step is to produce a sequence of lines that can be drawn on the viewport. The word cuts away all the data outside the viewport, adding the intersection points where necessary. It does so by first grouping the data points into subsequences (chunks), in which all points are either above, below or within the limits. Those chunks are then examined pairwise, and edge points are calculated and added where necessary.
For example, if a chunk is within the viewport, and the next one is above the viewport, then a point should be added to the end of the first chunk, connecting its last point to the point of the viewport boundary intersection (and similarly for the opposite case). If a chunk is below the viewport, and the next one is above the viewport (or vice versa), then a new 2-point chunk should be created so that the intersecting line would be drawn within the viewport boundaries.

The data are now filtered down to contain only the subset that is relevant to the currently chosen visible range, and are split into chunks that can each be drawn in a single contiguous stroke.

Since the display uses an inverted coordinate system, with y = 0 at the top of the screen and y growing downwards, we need to flip the data along the horizontal center line. Finally, the data need to be scaled so that their coordinates are mapped to the screen coordinates. This last step could probably be combined with flipping the y coordinate for extra performance.

The resulting chunks are then displayed.
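The first step described above (binary search on the sorted x values, plus the calculated boundary points) can be sketched as follows. This is a Python illustration, not the Factor implementation; all names (`clip_x`, `lerp_y`) are invented for the sketch.

```python
# Clip a polyline, sorted by non-descending x, to a viewport x-range.
# Binary search finds the visible slice; boundary points are interpolated
# where the polyline crosses xmin or xmax.
from bisect import bisect_left, bisect_right

def lerp_y(p, q, x):
    """Linearly interpolate the y value of segment p-q at the given x."""
    (x0, y0), (x1, y1) = p, q
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def clip_x(points, xmin, xmax):
    xs = [p[0] for p in points]
    lo = bisect_left(xs, xmin)        # leftmost point with x >= xmin
    hi = bisect_right(xs, xmax)       # one past rightmost point with x <= xmax
    if lo >= len(points) or hi == 0:  # data entirely left/right of viewport
        return []
    clipped = points[lo:hi]
    # Add calculated boundary points where the line spans beyond the viewport.
    if lo > 0 and clipped[0][0] > xmin:
        clipped.insert(0, (xmin, lerp_y(points[lo - 1], points[lo], xmin)))
    if hi < len(points) and clipped[-1][0] < xmax:
        clipped.append((xmax, lerp_y(points[hi - 1], points[hi], xmax)))
    return clipped
```

Note that `bisect_left`/`bisect_right` naturally give the leftmost match at the left edge and the rightmost match at the right edge, mirroring the duplicate-coordinate adjustment described above.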
Printable Figure Drawings: Perimeter Of A Rectangle Worksheet

Perimeter of rectangles (grade 3 geometry worksheet): find the perimeter of each rectangle. Students are given the width and length of rectangles and are asked to find the rectangle's perimeter in standard or metric units. Offering two levels based on the range of numbers used.

How to find the perimeter of a rectangle. There are several different ways; you can use the one you prefer:

• Add up the lengths of each side separately.
• Add the adjacent sides and double the answer.
• Double the lengths of the adjacent sides and add them together.
• Plug in the values of the length and width in the formula P = 2(l + w) to compute the perimeter of the rectangles given as geometrical shapes.

Answers from the worksheet:

• 3 ft and 7 ft: P = 20 ft
• 8 ft and 24 ft: P = 64 ft
• 3 in and 11 in: P = 28 in
• 4 in and 8 in: P = 24 in
• 5 ft and 15 ft: P = 40 ft
• 7 ft and 27 ft: P = 68 ft
• 6 ft and 6 ft: P = 24 ft
• 5 in and 8 in: P = 26 in

This set of worksheets is most recommended for 2nd grade through 5th grade. One section strengthens skills in finding the unknown length or width of a rectangle given the perimeter, with problems also available for the area and perimeter of rectangles and squares, with grid images. As students learn to calculate the perimeter of a rectangle, they will extend their understanding of other aspects of geometry as well.
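As a quick sanity check (not part of the worksheet itself), the listed answers can be verified against the formula P = 2(l + w):

```python
# Verify the worksheet answers with P = 2 * (l + w).

def rectangle_perimeter(length, width):
    return 2 * (length + width)

answers = [
    ((3, 7), 20), ((8, 24), 64), ((3, 11), 28), ((4, 8), 24),
    ((5, 15), 40), ((7, 27), 68), ((6, 6), 24), ((5, 8), 26),
]
checked = all(rectangle_perimeter(l, w) == p for (l, w), p in answers)
```

Every listed answer matches the formula.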
{"url":"https://tunxis.commnet.edu/view/perimeter-of-a-rectangle-worksheet.html","timestamp":"2024-11-09T06:51:21Z","content_type":"text/html","content_length":"34598","record_id":"<urn:uuid:72feb965-1092-4bbb-aaf0-6ee3c8f78bf3>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00324.warc.gz"}
Life Data Analysis Part IV - Mixture Models
Segmented Regression and EM Algorithm
Tim-Gunnar Hensel
David Barkemeyer
In this vignette two methods for the separation of mixture models are presented. A mixture model can be assumed if the points in a probability plot show one or more changes in slope, depict one or several saddle points, or follow an S-shape. A mixed distribution often represents the combination of multiple failure modes and thus must be split into its components to get reasonable results in further analyses. Segmented regression aims to detect breakpoints in the sample data from which a split into subgroups can be made. The expectation-maximization (EM) algorithm is a computation-intensive method that iteratively tries to maximize a likelihood function, which is weighted by posterior probabilities. These are conditional probabilities that an observation belongs to subgroup k. In the following, the focus is on the application of these methods and their visualizations using the functions mixmod_regression(), mixmod_em(), plot_prob() and plot_mod().
Data: Voltage Stress Test
To apply the introduced methods the dataset voltage is used. The dataset contains observations for units that were passed to a high voltage stress test. hours indicates the number of hours until a failure occurs or the number of hours until a unit was taken out of the test without having failed. status is a flag variable describing the condition of a unit: if a unit has failed the flag is 1, and 0 otherwise. The dataset is taken from Reliability Analysis by Failure Mode. For consistent handling of the data, {weibulltools} introduces the function reliability_data() that converts the original dataset into a wt_reliability_data object. This formatted object makes it easy to apply the presented methods.
voltage_tbl <- reliability_data(data = voltage, x = hours, status = status)
voltage_tbl
#> Reliability Data with characteristic x: 'hours':
#> # A tibble: 58 × 3
#> x status id
#> <dbl> <dbl> <chr>
#> 1 2 1 ID1
#> 2 28 1 ID2
#> 3 67 0 ID3
#> 4 119 1 ID4
#> 5 179 0 ID5
#> 6 236 1 ID6
#> 7 282 1 ID7
#> 8 317 1 ID8
#> 9 348 1 ID9
#> 10 387 1 ID10
#> # ℹ 48 more rows
Probability Plot for Voltage Stress Test Data
To get an intuition whether one can assume the presence of a mixture model, a Weibull probability plot is constructed.
# Estimating failure probabilities:
voltage_cdf <- estimate_cdf(voltage_tbl, "johnson")
# Probability plot:
weibull_plot <- plot_prob(
  voltage_cdf,
  distribution = "weibull",
  title_main = "Weibull Probability Plot",
  title_x = "Time in Hours",
  title_y = "Probability of Failure in %",
  title_trace = "Defectives",
  plot_method = "ggplot2"
)
Since there is one obvious slope change in the Weibull probability plot of Figure 1, the appearance of a mixture model consisting of two subgroups is strengthened.
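As background on why a Weibull probability plot is a straight line in the first place: plotting ln(-ln(1 - F)) against ln(t) linearizes a two-parameter Weibull CDF, so a slope change in that plot signals a mixture. The following Python sketch (not part of {weibulltools}; the R functions above do this properly) illustrates the transformation, using Benard's median-rank approximation for the failure probabilities:

```python
import math

def benard_median_rank(i, n):
    # Approximate median rank of the i-th ordered failure among n units.
    return (i - 0.3) / (n + 0.4)

def weibull_plot_coords(failure_times):
    """Return plotting positions: x = ln(t), y = ln(-ln(1 - F))."""
    times = sorted(failure_times)
    n = len(times)
    coords = []
    for i, t in enumerate(times, start=1):
        f = benard_median_rank(i, n)
        coords.append((math.log(t), math.log(-math.log(1.0 - f))))
    return coords

# A few (made-up subset of the) failure times from the voltage dataset.
pts = weibull_plot_coords([2, 28, 119, 236, 282])
```

For a single Weibull population the points fall near one line; for a mixture of two failure modes the slope changes part-way along, which is exactly the visual cue described above.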
{"url":"https://cran.hafro.is/web/packages/weibulltools/vignettes/Life_Data_Analysis_Part_IV.html","timestamp":"2024-11-03T15:28:28Z","content_type":"text/html","content_length":"87263","record_id":"<urn:uuid:fd9e15bb-1a9f-4061-942b-af15babc7346>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00729.warc.gz"}
Posts tagged with confidence interval help
To use paired data to construct a confidence interval, the following conditions must be met.
All possible samples of a given size have an equal probability of being chosen; that is, simple random samples are used.
The samples are dependent.
Both population standard deviations, σ1 and σ2, are unknown.
Either the number of pairs of data values in the sample data is greater than or equal to 30 or the population distribution of the paired differences is approximately normal.
In this lesson, you may assume that these conditions are met for all examples and exercises involving paired data. The value that we want to estimate is the mean of the paired differences for the two populations of dependent data, μd. Recall that the first step in constructing a confidence interval is to find the point estimate, and the best point estimate for a population mean is a sample mean. Therefore, the mean of the paired differences for the sample data, d̄, is the point estimate used here.
Formula: Mean of Paired Differences
When two dependent samples consist of paired data, the mean of the paired differences for the sample data is given by
d̄ = (d1 + d2 + ... + dn) / n = (Σ di) / n
where di is the paired difference for the ith pair of data values and n is the number of paired differences in the sample data.
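As a numerical illustration (the data below are made up, not from the lesson), here is how the point estimate d̄ and a t-based confidence interval for μd are computed; the critical value 2.776 is the two-sided t value for n - 1 = 4 degrees of freedom at 95% confidence:

```python
import math
import statistics

# Hypothetical paired data (before/after measurements on the same units).
before = [10, 12, 9, 14, 11]
after = [12, 15, 9, 16, 13]

diffs = [a - b for a, b in zip(after, before)]  # paired differences d_i
n = len(diffs)

d_bar = sum(diffs) / n            # point estimate of mu_d
s_d = statistics.stdev(diffs)     # sample standard deviation of the differences

t_crit = 2.776                    # t*, df = n - 1 = 4, 95% confidence
margin = t_crit * s_d / math.sqrt(n)
ci = (d_bar - margin, d_bar + margin)
print(d_bar, ci)
```

Note that the differences are treated as a single sample: once the d_i are formed, the procedure is an ordinary one-sample t interval.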
{"url":"https://www.mymathlabhomeworkhelp.com/mymathlabanswers/tag/confidence-interval-help/","timestamp":"2024-11-12T18:34:08Z","content_type":"text/html","content_length":"21668","record_id":"<urn:uuid:f5f43871-5ce6-4db3-aba3-bdd9df8b5776>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00406.warc.gz"}
Product Design with Response Surface Methods
An Illustration of the Use of Scientific Statistics for Adaptive Learning
Report No. 150
by George Box and Patrick Liu
Copyright © 1998, Used by Permission
In this article, methods for demonstrating the iterative process of investigation are presented. As one example, it is shown how the sequential use of response surface techniques may be applied to devise an improved paper helicopter design with almost twice the flight time of its original prototype. The purpose of this paper is to demonstrate the process of investigation and how it can be catalyzed by the use of statistics. Although individual designs and analyses are used, these are the "trees" behind which we hope the forest will be clearly visible.
Keywords: Iterative Investigation, Response Surface Methods, Sequential Assembly, Paper Helicopter, Design of Experiments, Steep Ascent Methods, Ridge Analysis
By the late 1940's, earlier attempts to introduce statistical design at a major division of ICI in England using large preplanned all-encompassing experimental designs had failed. The "one-shot" approach, with the experimental design planned at the start of the investigation when least was known about the system, was clearly inappropriate. In this industrial environment, results from an experiment were often available within days, hours, or sometimes even minutes. Advantages offered by this greater immediacy could be realized only by a philosophy of experimentation suited to the process of adaptive learning, in which ideas were modified as experimentation progressed - a philosophy which the skilled industrial investigator would naturally use. Response Surface Methods (Box and Wilson, 1951) were developed as one means of filling this need (see also Daniel, 1962).
Unfortunately, the one-shot experiment, where all aspects of the problem - the appropriate factors to be studied, the appropriate region in which to experiment, the form of model to be fitted and so forth - are all assumed known in advance, has received almost undivided attention from researchers and teachers in statistics. This is presumably because that form of experiment can be conveniently fitted into a fixed mathematical framework, within which theorems can be proved and researchers can develop "optimal" decision procedures, "optimal" experimental designs and so forth. Although in some experimental trials the one-shot approach is necessary (for example in many medical trials), such trials form only a very small part of the body of experimentation needed in research and development. Consequently, statisticians limited to this static mind-set have usually been found to be a hindrance rather than a help to adaptive learning and have thus excluded themselves from an enormous and comparatively unexplored field where statistical methods appropriate to changing rather than to static ideas could be of enormous help. Unfortunately, many researchers and teachers in statistics have similarly hobbled their own activities. To rigorously explore the consequences of supposedly available knowledge is, of course, an essential part of scientific investigation, but what is paramount is the discovery of new knowledge. There is no logical reason why the former should impede the latter, but this is what has happened. Since the time of Aristotle it has been known that the generation of new knowledge occurs as a result of deductive-inductive iteration. Aristotle's concept was developed and restated by Grosseteste in the thirteenth century and later by Francis Bacon, and is inherent in the Shewhart-Deming cycle for continuous quality improvement.
It is a necessary theme for any serious discussion of scientific investigation. In this context of adaptive learning, it is recognized that the appropriate factors to be studied, the regions of interest in the factor space, the form of the models to be fitted and so forth must all initially be guessed by the investigator, and that as s/he learns more about them they will almost invariably change. Thus for adaptive learning one cannot hope to produce a rigid and unique "optimal" procedure. What can be done is to develop techniques which, when used in cooperation with the investigator, can catalyze a process of iterative learning - a process that can be used by different investigators, who may start from different places and follow different routes, and yet have a good chance of converging on some satisfactory solution when such exists. In this paper we illustrate such a process using iterative statistical methods to improve the design of a paper helicopter.
The prototype design for a paper helicopter, shown in Figure 1, was kindly made available to us by Kipp Rogers of Digital Equipment Corporation. The objective of our experiment was to find an improved helicopter design giving consistently longer flight times. Our test flights were carried out in a room with a ceiling 102 inches (8' 6") from the floor. The wings of each tested helicopter were initially held against the ceiling and the flight time was measured with a digital stop watch.
Design I: An Initial Screening Experiment
After considerable discussion it was decided to begin by testing the eight factors (input variables) each at two levels listed in Table 1 and with plus and minus limits shown there. The response (output variable) was the flight time. The initial experimental plan defined sixteen helicopter types set out in Table A.1. (In general, the letter A before a table or figure means that the table or figure will be found in the Appendix.) The experimental design is a 16-run two-level fractional factorial; for each helicopter type both the mean flight time and a measure of the spread of the flight times, which we will call the dispersion, were calculated.
It is well known (Bartlett and Kendall, 1946) that for the analysis of variation there are considerable advantages in using the logarithm of the sample standard deviation s rather than s itself. To avoid decimals, we have used 100 log s in our analysis and we refer to this quantity as the dispersion. The effects calculated from the mean flight times will be called location effects. Effects calculated using the dispersion 100 log s will be called dispersion effects. Visual observation suggested that larger variation of flight times was usually associated with instability of the helicopter design. The effects are shown as regression coefficients; thus the constant term is the overall average and each of the remaining coefficients is one half of the usual factor effect. Normal plots for these effects are shown in Figure 2(a) and (b).
On this basis a linear model for estimating mean flight times in the immediate neighborhood of the experimental design was where the coefficients are those in Table A.1 suitably rounded. Equation (1) is usually called a linear regression model since the coefficients 223, 28, -13, and -8 are those that would be obtained by fitting the equation by least squares. The contour diagram of Figure 3 is a convenient way of conveying visually what is implied by Equation (1). For example, the equation implies that combinations of x[2 ], x[3][], and x[4][] on the 240 contour plane should all produce alternative helicopter designs with flight times of about 240 centiseconds. Steepest Ascent Using the Results from Design I Now, since increasing the wing length l and reducing the body length L and body width W all had a positive effects on mean flight time, it might be expected that helicopter design with greater wing lengths and with reduced body lengths and body widths might give even longer flights. We can determine such helicopter designs by exploring the direction at right angles to the contour planes indicated by the arrow in Figure 3. In the units of x[2], x[3], and x[4][] this is the direction of greatest increase at a given distance from the design center and is called the direction of steepest ascent. To calculate a series of points along the direction of steepest ascent you don't need a contour plot. You can do this by starting at the center of the design and changing the factors in proportion to the coefficients of the fitted equation. Thus the relative changes in x[2], x[3], and x[4][] are such that for every increase of 28 units in x[2], x[3] is reduced by 13 units, and x[4] by 8 units. The units are the scale factors s[t] = 0.875, s[L] = 0.875, and s[w] = 0.375 which are the changes in L, and W corresponding to a change of one unit in x[2], x[3], and x[4] respectively. 
In our investigation we chose the first point P[1] to give a helicopter with a 4 inch wing length and we then increased by 3/4 inch increments adjusting the other dimensions accordingly. This produced the designs corresponding to P[2], P[3], P[4], and P[5] shown in Figure 4. Experiments along such a path can be run sequentially and the spacing of the points along the path can be made a matter of judgment guided by results as they occur. For example, you might have decided to take a large jump initially and try the design P[5] right away. This would have given a disappointingly low result causing you to back track and perhaps to test P[2] or P[3] next. In our investigation we ran experiments in sequence at all the five points making ten repeat drops at each point. As you see from Figure 4, P[3], gave the longest average flight time of 347 centiseconds - the best result obtained so far. Further exploration along this path (designs P[4] and P[5]) gave lesser mean flight times and higher standard deviations. Since none of the qualitative variables we tried in this and previous experimentation (including heavy paper, fold at the wing tip, fold at the base, etc.) seemed to produce any positive effects we decide to fix the overall features of the design and explore more thoroughly the effects of the dimensional variables - wing length w, body length L, and body width W using a full factorial Design II: A Factorial Experiment in Wings and Body Dimensions. At about this time discussion with an engineer led to the suggestion that a better way to characterize the dimensions of the wing might be in terms of wing area length to width ratio A 2^4 factorial in the four dimensional variables A, Q, W, L centered close to the previous best conditions is set out in Table 2. Data are given in Table A.2. The normal plot for mean flight times in Figure 5(a) showed large location effects for wing area A and body length L but that for dispersion did not show any evidence of real effects. 
It was decided, therefore, to try to gain further improvement of fight times by using steepest ascent based on the two large effects using the model where x[1] and x[4] are recoded variables for wing area (A) and body length (L), respectively. The path was explored by making ten drops at each of five different conditions set out in Figure 6. Interpolation suggests that the best design along this path required wing Area A to be about 12.4 and body length about 2.0 at which the average flight time was 370 centiseconds - a further valuable improvement. It is also worth noting that the dispersions for the five tested helicopters on this path were not large and these helicopters were extremely stable. After this investigation had been completed a review of the results showed that the path of ascent had been slightly miscalculated. The relative changes in x[1] and x[4] which should have been 8:17 but were mistakenly taken to be 8:11. This rather minor deviation is unlikely to have made much difference. It is worth noting the error we made arose from accidentally switching certain experimental runs. It underlines the importance of checking and rechecking experimental procedures. It also illustrates that in an iterative scheme of this kind, errors tend to be self-correcting. Design III: A Sequentially Assembled Composite Design The (-1, 0, 1) levels shown in Table 3 were now used in a further 2^4 factorial arrangement in the factors A, Q, W L referred to as Design IIIa. This was centered around the best point so far reached. The results are shown in Table A.9. It seemed likely at this stage of the investigation that further advance with first order steepest might not be possible and that a hill second degree equation might be needed to represent the flight times in the new experimental region that had been reached. This was not certain however, so a new 24 factorial experiment in A, Q, W, and L was run with two added center points. 
Depending on the results obtained, this could become the first block of a second order composite design. The analysis for Design IIIa is shown in Table A.4 and the normal plot for the mean flight times is shown in Figure 7. The corresponding plot for dispersion effects failed to show anything of interest and is not given. We see from Figure 7 that, for average flight times some two factor interactions are quite large and approaching the size of certain main effects suggesting that we should add further runs which will allow estimation of the remaining second order (quadratic) terms. A second block was therefore added consisting of eight axial points with four additional center points. This is set out in Table A.3 and referred to as Design IIIb. An analysis of variance for the completed design is given in Table 4. There is, somewhat weak, evidence; of lack of fit, nevertheless for this analysis we have used the overall residual mean square of 9.9 as the error variance. The overall F ratio for the fitted second degree equation is 21.35, exceeding its five percent significance level of F[0.05,14,14] = 2.48 by a factor of 8.61. Thus complying with the argument of Box and Wetz (1973) (see also Box and Draper. 1986) that a factor of at least four is needed to ensure that the fitted equation is worthy of further interpretation. Proceeding further with the analysis we find that the fitted equation is We have shown the constant term and the four linear terms on the first line, the four quadratic terms on the second line, and the six interaction terms on the third and fourth lines. The standard errors for these linear, quadratic, and interaction effects are respectively 0.64, 0.60, and 0.78. This second degree equation in four variables x[1], x[2], x[3], x[4] contains 15 coefficients and in its "raw" form is not easily understood. We briefly review methods of analysis which can make its meaning clear and allow further progress. 
A fuller account of such analysis is given, for example, in Box and Draper (1987). Here we first illustrate the analysis in Figures 8 and 9 for constructed examples in just two variables x[1] and x[2]. Look at Figure 8. Suppose that in the circle indicated in Figure 8(c) a suitable design has been run centered on the point O (x[10] = 0, x[20] = 0) yielding the second degree equation shown in 8(a). Figure 8(b) shows a computer plot of the corresponding response surface wich contains a maximum. A plot of Figure 8(c). Contour plots of this kind are very helpful in understanding the meaning of a second degree equation when 38.82 there are only two or three input variables (x) but such methods are not available when there are more input variables. Canonical analysis, however, which we now explain makes it easy to understand the meaning of any fitted second degree equation for any number of such variables. Canonical analysis goes in two steps the mathematics is sketched in Figure 8(d) and illustrated geometrically in Figure 8(c): 1. the origin of measurement is shifted from O to S where S is the center of the contour system (in this case the maximum); 2. the axes rotated about S so that they lie along the axes of the elliptical contours which are denoted by X[1] and X[2][]. In this way the quadratic equation of 8(a) is expressed in terms of a new system of coordinates X[1][] and X[2][] in the simpler form By inspection of this canonical form one can understand the meaning of the quadratic equation without a contour plot. In this case, since the coefficients -9.0 and -2.1 which measure the quadratic curvatures along the X[1][] and X[2] axes are both negative, the point S (at which X[1], axis, X[2] axis. Thus you know the contours are drawn out (attenuated) along the X[2] axis which has the smaller coefficient. Now look at Figure 9. Equation 9(a) produces the response surface shown in 9(b) which represents a "saddle" or minimax whose contours are shown in 9(c). 
Again it is easy to understand the nature of the surface without any graphical aid by using the canonical form of the equation. Since the coefficient of X[1]^2 is negative while that of X[2]^2 is positive, ŷ has a maximum along the X[1] axis but a minimum along the X[2] axis. Thus we know at once that the surface is a minimax. In particular, this implies that movement away from S along the X[2] axis in either direction gives larger values of ŷ.
Analysis for the Helicopter Data
If we apply the canonical analysis outlined above to Equation (3) obtained for the helicopter data, we get the canonical form, Equation (7). Now we had thought it likely that we would find a maximum at S, in which case all four squared terms in (7) would have had negative coefficients. However, the coefficient +3.24 of X[3]^2 is positive, and its standard error is about 0.61 (roughly the same as that of a quadratic coefficient in Equation (3)). This implies that the response surface almost certainly has a minimum in the direction represented by X[3]. If this is so, we will be able to move from the point S in either direction along the X[3] axis and get increased flight times. The axis X[3], expressed in terms of the centered input variables, defines a direction of ascent; the factor changes corresponding to each unit increase along X[3] can be expressed in the units of Table 3. To follow the other direction of ascent you must make precisely the opposite changes. Before we explore these possibilities further, we consider a somewhat different form of analysis.
Ridge Analysis
In the original paper by Box and Wilson (1951) the application of the method of steepest ascent to response surfaces was discussed in general, and in particular for second degree equations as well as for linear models. For two variables the general concept can be understood by considering again the two dimensional contour representation of the minimax surface in Figure 9(c). As shown in Figure 10, suppose a series of concentric circles are drawn centered at point O with increasing radius r.
It can be shown that as the radius r is increased the circles will touch the contours of any response surface at a series of points at which the rate of increase or decrease in response with respect to r will be greatest. In the units of x the path formed by such points is thus one of maximum gradient, and hence of steepest ascent or descent. For a first degree equation, such as Equation (1), this is a straight line path at right angles to the planar contour surfaces, as in Figure 3. More generally, the path is curved. For a second degree equation, points along the paths of maximum gradient can be found for different values of r by solving a series of linear equations. A. E. Hoerl (1959) developed and extended a technique of this kind under the general heading of Ridge Analysis and illustrated its use with many applications (see also R.W. Hoerl, 1985). For illustration, Figure 10 shows, for the minimax surface of Figure 9, the paths of maximum gradient (two of steepest ascent and two of steepest descent) originating from O. In this example, where O is close to S, these paths converge very rapidly onto the axes of the canonical variables X[1] and X[2]. Indeed these axes are themselves the paths of steepest gradient if we start at S instead of O. For the helicopter example the paths of ascent can be followed either by ridge analysis from the origin O or by following the X[3] axis from the origin S. By either method, we obtained for this example almost identical results. Mean flight times and dispersions for a series of helicopter designs along the X[3] ridge are summarized in Table 5. To better understand these results we also show the dimensions of the tested helicopters in terms of the original variables wing length w, body length L, and body width W. These tests fully confirm what was implied by the earlier analysis - that we can indeed get longer flight times by proceeding in either of two directions.
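In standard treatments of ridge analysis, the "series of linear equations" mentioned above takes a simple form: introducing a Lagrange multiplier μ for the sphere constraint gives (B - μI)x = -b/2, so each value of μ yields one point on a path of maximum gradient. A small Python sketch for two variables (the matrix B and vector b are illustrative, not the helicopter coefficients):

```python
# Quadratic model y-hat = b0 + b'x + x'Bx; ridge path points solve (B - mu*I)x = -b/2.
B = [[-9.0, 0.0], [0.0, -2.1]]
b = [6.0, 2.0]

def ridge_point(mu):
    """Solve the 2x2 system (B - mu*I) x = -b/2 for the path point at multiplier mu."""
    a11, a12 = B[0][0] - mu, B[0][1]
    a21, a22 = B[1][0], B[1][1] - mu
    det = a11 * a22 - a12 * a21
    r1, r2 = -b[0] / 2.0, -b[1] / 2.0
    return [(r1 * a22 - r2 * a12) / det, (a11 * r2 - a21 * r1) / det]

# For mu above every eigenvalue of B the solutions trace a path of steepest
# ascent; as mu grows, the point shrinks back toward the origin O.
for mu in (1.0, 5.0, 50.0):
    print(mu, [round(v, 4) for v in ridge_point(mu)])
```

On any such point the gradient b + 2Bx equals 2μx, i.e. it points radially, which is exactly the tangency condition between the circles and the contours described above.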
Namely, by increasing wing width w and body length L and reducing body width W and wing length, or the reverse. For sixteen helicopter designs along this path, Figure 11 shows graphically the mean flight times and standard deviations of flight times together with the dimensions of the associated helicopter. It will be seen that in either direction mean flight times of over 400 centiseconds can be obtained. These are almost twice the flight time of the original helicopter design. In both directions mean flight times go through a maximum. The standard deviations are apparently constant except at the extremes, where rapid increases occurred owing to instability. At this point we decided to stop the present investigation, although we fully expect that ways can be found to get longer flight times. We hope that others may be interested in doing this. We gratefully acknowledge our cooperation with Sandra Martin in early preliminary work on this topic. This work is sponsored by National Science Foundation grant number DMI-9414765.
1. Bartlett, M. S. and Kendall, D. G., "The Statistical Analysis of Variance-Heterogeneity and the Logarithmic Transformation," J. Roy. Statist. Soc., Series B, 8, 128-150, 1946.
2. Box, G. E. P. and Wilson, K. B., "On the Experimental Attainment of Optimum Conditions," Journal of the Royal Statistical Society, Series B, 13, 1-45, 1951.
3. Box, G. E. P. and Wetz, J., "Criteria for Judging Adequacy of Estimation by an Approximating Response Function," University of Wisconsin Statistics Department Technical Report No. 9, 1973.
4. Box, G. E. P., Hunter, W. G., and Hunter, J. S., Statistics for Experimenters, John Wiley & Sons, New York, 1978.
5. Box, G. E. P. and Draper, N. R., Empirical Model-Building and Response Surfaces, John Wiley & Sons, New York, 1987.
6. Daniel, C., "Sequences of Fractional Replicates in the 2^p-q Series," J. Am. Statist. Assoc., 57, 403-429, 1962.
7. Hoerl, A. E., "Optimum Solution of Many Variable Equations," Chem. Eng. Prog., 55, 69-78, 1959.
8. Hoerl, A. E., "Ridge Analysis," Chem. Eng. Prog. Symp. Ser., 60, 67-77, 1964.
9. Hoerl, R. W., "Ridge Analysis 25 Years Later," Am. Statist., 39, 186-192, 1985.
{"url":"https://williamghunter.net/george-box-articles/product-design-with-response-surface-methods","timestamp":"2024-11-13T22:09:23Z","content_type":"application/xhtml+xml","content_length":"42884","record_id":"<urn:uuid:fa251e98-462f-4f42-a2d4-c1d2c176150b>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00314.warc.gz"}
An Elementary Treatise On Differential Equations And Their Applications
by H.T.H. Piaggio
Publisher: G. Bell 1920
ISBN/ASIN: B007MHVAQM
Number of pages: 274
The object of this book is to give an account of the central parts of the subject in as simple a form as possible, suitable for those with no previous knowledge of it, and yet at the same time to point out the different directions in which it may be developed. The greater part of the text and the examples in the body of it will be found very easy. The only previous knowledge assumed is that of the elements of the differential and integral calculus and a little coordinate geometry.
Download or read it online for free here:
Download link (multiple formats)
Similar books
Differential Equations From The Algebraic Standpoint
Joseph Fels Ritt
The American Mathematical Society
We shall be concerned, in this monograph, with systems of differential equations, ordinary or partial, which are algebraic in the unknowns and their derivatives. The algebraic side of the theory of such systems is developed in this book.
Introduction to Differential Equations
Jeffrey R. Chasnov
The Hong Kong University of Science & Technology
Contents: A short mathematical review; Introduction to odes; First-order odes; Second-order odes, constant coefficients; The Laplace transform; Series solutions; Systems of equations; Bifurcation theory; Partial differential equations.
Differential Equations
Paul Dawkins
Lamar University
Contents: Basic Concepts; First Order Differential Equations; Second Order DE; Laplace Transforms; Systems of Differential Equations; Series Solutions; Higher Order DE; Boundary Value Problems and Fourier Series; Partial Differential Equations.
Topics in dynamics I: Flows
Edward Nelson
Princeton University Press
Lecture notes for a course on differential equations covering differential calculus, Picard's method, local structure of vector fields, sums and Lie products, self-adjoint operators on Hilbert space, commutative multiplicity theory, and more.
{"url":"https://www.e-booksdirectory.com/details.php?ebook=9030","timestamp":"2024-11-06T05:30:31Z","content_type":"text/html","content_length":"11625","record_id":"<urn:uuid:092d63b0-7883-4cb7-a1f7-3a3da24db7e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00688.warc.gz"}
Characteristics of Capacitor Current
I. Introduction
Capacitor current is a fundamental concept in electrical engineering that plays a crucial role in various applications, from power systems to electronic circuits. Understanding capacitor current is essential for engineers and technicians who design and maintain electrical systems. This article will explore the characteristics of capacitor current, including its theoretical background, practical applications, measurement techniques, and the challenges faced in real-world scenarios.
II. Basic Concepts of Capacitors
A. Definition and Function of a Capacitor
A capacitor is a passive electronic component that stores electrical energy in an electric field. It consists of two conductive plates separated by an insulating material known as a dielectric. When a voltage is applied across the plates, an electric field is created, allowing the capacitor to store energy. Capacitors are widely used in various applications, including energy storage, filtering, and timing circuits.
B. Types of Capacitors
There are several types of capacitors, each with unique characteristics and applications:
1. **Electrolytic Capacitors**: These capacitors are polarized and typically used for high-capacitance applications. They are commonly found in power supply circuits.
2. **Ceramic Capacitors**: Known for their stability and reliability, ceramic capacitors are often used in high-frequency applications and decoupling circuits.
3. **Film Capacitors**: These capacitors use a thin plastic film as the dielectric and are known for their low loss and high stability, making them suitable for audio and RF applications.
4. **Tantalum Capacitors**: Tantalum capacitors offer high capacitance in a small package and are often used in portable electronic devices.
C. Capacitor Ratings
Understanding capacitor ratings is essential for selecting the right capacitor for a specific application. Key ratings include: 1.
**Capacitance Value**: Measured in farads (F), this indicates the amount of charge a capacitor can store. 2. **Voltage Rating**: The maximum voltage a capacitor can handle without breaking down. 3. **Tolerance**: The allowable deviation from the nominal capacitance value, expressed as a percentage. 4. **Temperature Coefficient**: Indicates how the capacitance value changes with temperature. III. Capacitor Current: Theoretical Background A. Definition of Capacitor Current Capacitor current refers to the current that flows through a capacitor when it is subjected to a changing voltage. This current is a result of the capacitor charging and discharging as the voltage across its plates varies. B. Relationship Between Voltage and Current in Capacitors 1. **Capacitive Reactance**: The opposition that a capacitor presents to alternating current (AC) is known as capacitive reactance (Xc). It is inversely proportional to the frequency of the AC signal and the capacitance value. X_c = \frac{1}{2\pi f C} where \( f \) is the frequency and \( C \) is the capacitance. 2. **Phase Shift Between Voltage and Current**: In a capacitor, the current leads the voltage by 90 degrees in an AC circuit. This phase shift is crucial for understanding how capacitors behave in reactive circuits. C. Mathematical Representation 1. **Formula for Capacitor Current**: The current flowing through a capacitor can be expressed mathematically as: I = C \frac{dV}{dt} where \( I \) is the capacitor current, \( C \) is the capacitance, and \( \frac{dV}{dt} \) is the rate of change of voltage over time. 2. **Impedance in AC Circuits**: The impedance of a capacitor in an AC circuit is given by: Z = \frac{1}{j\omega C} where \( j \) is the imaginary unit and \( \omega \) is the angular frequency. IV. Characteristics of Capacitor Current A. Frequency Dependence 1. **Impact of Frequency on Capacitor Current**: The current through a capacitor is directly proportional to the frequency of the applied voltage. 
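As a quick numerical check of the reactance and derivative formulas in Section III, here is a short Python sketch (the 10 µF capacitor, 1 kHz frequency, and 5 V amplitude are illustrative values, not taken from the article):

```python
import math

C = 10e-6   # capacitance: 10 uF (illustrative value)
f = 1000.0  # drive frequency: 1 kHz (illustrative value)
V0 = 5.0    # voltage amplitude: 5 V (illustrative value)

# capacitive reactance: Xc = 1 / (2*pi*f*C)
Xc = 1.0 / (2 * math.pi * f * C)

# peak AC current predicted from the reactance
I_peak = V0 / Xc

# cross-check with i = C * dV/dt for V(t) = V0*sin(2*pi*f*t),
# whose derivative peaks at 2*pi*f*V0
I_check = C * 2 * math.pi * f * V0

print(round(Xc, 2))      # ~15.92 ohms
print(round(I_peak, 4))  # ~0.3142 A
```

Both routes give the same peak current, which is the point of the $I = C\,dV/dt$ relation: the faster the voltage changes, the larger the current.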
As frequency increases, the capacitive reactance decreases, allowing more current to flow.

2. **Resonance in RLC Circuits**: In circuits containing resistors (R), inductors (L), and capacitors (C), resonance occurs at a specific frequency where the inductive and capacitive reactances cancel each other out. This phenomenon can lead to significant increases in current.

B. Transient Response

1. **Charging and Discharging Behavior**: When a voltage is applied to a capacitor, it does not charge instantaneously. Instead, it follows an exponential curve, characterized by a time constant (\( \tau \)), which is the product of resistance (R) and capacitance (C):

\tau = R \times C

The time constant determines how quickly a capacitor charges or discharges.

2. **Time Constant and Its Significance**: The time constant is crucial in timing applications, as it defines the speed at which a capacitor can respond to changes in voltage.

C. Steady-State Behavior

1. **AC vs. DC Conditions**: In a DC circuit, once a capacitor is fully charged, it behaves like an open circuit, and no current flows. In contrast, in an AC circuit, the capacitor continuously charges and discharges, allowing current to flow.

2. **Current Waveforms**: The current waveform through a capacitor in an AC circuit is sinusoidal, leading the voltage waveform by 90 degrees.

V. Practical Applications of Capacitor Current

A. Power Factor Correction

Capacitors are used in power factor correction to improve the efficiency of power systems. By adding capacitors to inductive loads, the overall power factor can be improved, reducing energy losses.

B. Signal Filtering

Capacitors are essential in filtering applications, where they smooth out voltage fluctuations and remove unwanted noise from signals. They are commonly used in audio equipment and communication systems.

C. Energy Storage in Power Systems

Capacitors store energy and release it when needed, making them valuable in power systems for stabilizing voltage levels and providing backup power during outages.

D. Timing Circuits and Oscillators

Capacitors are integral components in timing circuits and oscillators, where they determine the timing intervals and frequency of oscillation.

VI. Measurement and Analysis of Capacitor Current

A. Tools and Techniques for Measuring Capacitor Current

1. **Oscilloscope**: An oscilloscope is a powerful tool for visualizing capacitor current and voltage waveforms, allowing engineers to analyze the behavior of capacitors in real time.
2. **Multimeter**: A multimeter can measure capacitance, voltage, and current, providing essential data for evaluating capacitor performance.

B. Analyzing Capacitor Current in Circuits

1. **Simulation Software**: Software tools like SPICE can simulate capacitor behavior in circuits, helping engineers design and troubleshoot systems before physical implementation.
2. **Practical Considerations**: When measuring capacitor current, it is essential to consider factors such as frequency, load conditions, and the presence of other circuit elements.

VII. Challenges and Limitations

A. Non-Ideal Behavior of Capacitors

1. **Equivalent Series Resistance (ESR)**: Real capacitors exhibit ESR, which can lead to power losses and affect performance, especially in high-frequency applications.
2. **Leakage Current**: Capacitors can have leakage currents that affect their efficiency and reliability, particularly in high-precision applications.

B. Aging and Reliability Issues

Capacitors can degrade over time due to environmental factors, leading to reduced performance and potential failure. Understanding these aging mechanisms is crucial for ensuring long-term reliability.

C. Environmental Factors Affecting Performance

Temperature, humidity, and other environmental factors can significantly impact capacitor performance, making it essential to consider these conditions during design and application.

VIII. Conclusion

In summary, capacitor current is a vital aspect of electrical engineering that influences the design and operation of various electronic systems. Understanding the characteristics of capacitor current, including its theoretical background, practical applications, and measurement techniques, is essential for engineers and technicians. As technology continues to evolve, the importance of capacitors in modern electronics will only grow, paving the way for future research and innovation in this field.

This comprehensive exploration of capacitor current provides a solid foundation for understanding its significance in electrical engineering and its wide-ranging applications in modern technology.
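The charging behavior governed by the time constant tau = R × C discussed above can be illustrated in a few lines of Python (the 1 kΩ and 100 µF values are arbitrary illustrative choices):

```python
import math

R = 1000.0   # resistance: 1 kOhm (illustrative value)
C = 100e-6   # capacitance: 100 uF (illustrative value)
tau = R * C  # time constant: tau = R * C = 0.1 s

# fraction of the final (supply) voltage reached while charging,
# V(t)/Vs = 1 - exp(-t/tau), sampled at 1, 3 and 5 time constants
fracs = [1 - math.exp(-n) for n in (1, 3, 5)]
for n, frac in zip((1, 3, 5), fracs):
    print(n, round(frac, 3))  # ~63.2%, 95.0%, 99.3% of the final value
```

The familiar rule of thumb that a capacitor is "fully charged" after about five time constants falls straight out of the exponential.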
August 5, 2017 - MKMath

Logarithmic differentiation: In this video, you’ll learn about logarithmic differentiation, a powerful technique in calculus used to differentiate complex functions, particularly those involving products, quotients, or … This method allows us to differentiate very complicated fractional functions or functions raised to the power of another function easily.

Graphing tips: Here are tips for graphing. First, you need to find the x-intercepts and the y-intercept, if possible. For the x-intercepts, set y = 0 to find the x-values; similarly, set x = 0 to find the y-intercept. Then investigate whether there are any asymptotes for fractional functions. For the horizontal asymptotes, we need…

Limits at infinity: To evaluate the limit as x approaches infinity, take the largest exponent term each from the numerator and denominator, simplify the fraction, and then apply the limit rules as shown.
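As a worked illustration of the limit rule just described (my own example, not one from the original post), keep only the leading terms of the numerator and denominator:

```latex
\lim_{x\to\infty}\frac{3x^{2}+5x-1}{2x^{2}+7}
  =\lim_{x\to\infty}\frac{3x^{2}}{2x^{2}}
  =\frac{3}{2}
```

The lower-order terms become negligible compared to the leading powers as x grows, which is why only the largest exponents matter.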
Trigonometric substitution: This method allows us to change algebraic functions into trigonometric functions, integrate them in trigonometric form, and return to the original algebraic functions as solutions.

Solving differential equations by Laplace transforms: Before starting the example, you need to know the following steps for solving a DE by Laplace transforms. Step 1: Take the Laplace transforms of both sides of the equation. Step 2: Solve for the Laplace transform of Y. Step 3: Manipulate the Laplace transform F(s) until…

Fourier series: Before looking at the example, you need to know the formula for the Fourier series as shown. If you know the additional information shown, you could reduce your work. (1) Odd functions have Fourier series with only sine terms, which means you only find the coefficients…

Derivatives of logarithmic functions: Before starting the examples, you need to know the derivative formulas as shown. In many cases, we need to make use of the properties of logarithms as well. Please remember that the "ln" symbol denotes the natural log, which has the base…

Second-order differential equations: Before looking at the example, you need to know the solution formula for second-order differential equations ay'' + by' + cy = f(x) as shown. Notice that there are two parts, y-sub-C and y-sub-P, in the complete solution. One part, y-sub-C, solves a…

Partial fractions: If you take a look at the integrand of the question, it seems a relatively complicated fraction. If we can split it into simpler fractions, then we may be able to integrate them easily. We make use of partial fractions to get the simpler fractions. First,…
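A minimal example of the partial-fraction idea sketched above (my own example, not the integral from the post):

```latex
\frac{1}{x(x+1)}=\frac{1}{x}-\frac{1}{x+1}
\quad\Longrightarrow\quad
\int\frac{dx}{x(x+1)}=\ln|x|-\ln|x+1|+C
```

Each simple fraction integrates directly to a logarithm, whereas the original product in the denominator has no immediate antiderivative.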
195 Grams to Ounces

Convert 195 grams to ounces (g to oz) with our conversion calculator: 195 grams equals 6.8784222 oz.

Formula for converting grams to ounces:

ounces = grams ÷ 28.3495

By dividing the number of grams by 28.3495, you can easily obtain the equivalent weight in ounces.

Converting grams to ounces is a common task that many people encounter, especially when dealing with recipes, scientific measurements, or everyday activities. Understanding the conversion factor is essential for accurate measurements. In this case, the conversion factor from grams to ounces is approximately 28.3495 grams per ounce. This means that one ounce is equivalent to 28.3495 grams.

Let’s break down the conversion of 195 grams to ounces step by step:

1. Start with the amount in grams: 195 grams.
2. Use the conversion factor: 28.3495 grams per ounce.
3. Apply the formula: ounces = 195 ÷ 28.3495.
4. Perform the calculation: ounces ≈ 6.8784.
5. Round the result to two decimal places: ounces ≈ 6.88.

Thus, 195 grams is approximately 6.88 ounces. This rounded figure is practical for everyday use, making it easier to understand and apply in various situations.

The importance of converting grams to ounces cannot be overstated, especially in bridging the gap between the metric and imperial systems. Many recipes, particularly in the United States, use ounces, while most scientific measurements are in grams. Being able to convert between these units ensures accuracy and consistency in cooking, baking, and scientific experiments.

Practical examples of where this conversion might be useful include:

• Cooking and baking: When following a recipe that lists ingredients in ounces, knowing how to convert grams can help you measure accurately, ensuring the best results.
• Nutrition: Food labels often provide nutritional information in ounces. Converting grams to ounces can help you track your intake more effectively.
• Scientific research: In laboratories, precise measurements are crucial. Converting grams to ounces can assist researchers in comparing data across different measurement systems.

In conclusion, converting 195 grams to ounces is a straightforward process that can enhance your cooking, baking, and scientific endeavors. By understanding the conversion factor and applying the formula, you can easily navigate between metric and imperial measurements, making your tasks more efficient and accurate.

Here are 10 items that weigh close to 195 grams:

• Standard baseball. Weight: 145 grams. Shape: spherical. Dimensions: 23 cm circumference. Usage: used in the sport of baseball for pitching, hitting, and fielding. Fact: a baseball is made of a cork center wrapped in layers of yarn and covered with leather.
• Medium-sized apple. Weight: approximately 182 grams. Shape: round. Dimensions: about 7.5 cm in diameter. Usage: eaten raw, used in cooking, or made into juice. Fact: apples float in water because 25% of their volume is air.
• Standard pack of playing cards. Weight: 100 grams. Shape: rectangular. Dimensions: 8.9 cm x 6.4 cm. Usage: used for various card games and magic tricks. Fact: a standard deck contains 52 cards, plus jokers, totaling 54 cards.
• Medium-sized avocado. Weight: approximately 200 grams. Shape: pear-shaped. Dimensions: about 10 cm long. Usage: eaten raw, used in salads, or made into guacamole. Fact: avocados are technically a fruit, and they contain more potassium than bananas.
• Small bag of flour. Weight: 500 grams (can be divided). Shape: rectangular. Dimensions: 25 cm x 15 cm x 5 cm. Usage: used in baking and cooking. Fact: flour is made by grinding raw grains, and it has been a staple food for thousands of years.
• Standard coffee mug. Weight: approximately 300 grams. Shape: cylindrical. Dimensions: 10 cm tall, 8 cm diameter. Usage: used for drinking hot beverages like coffee or tea. Fact: the world’s largest coffee mug can hold over 1,000 cups of coffee!
• Small potted plant. Weight: approximately 200 grams. Shape: cylindrical (pot) with a varied plant shape. Dimensions: 15 cm height, 10 cm diameter. Usage: used for decoration and improving air quality. Fact: some houseplants can remove toxins from the air, making them great for indoor spaces.
• Standard smartphone. Weight: approximately 200 grams. Shape: rectangular. Dimensions: 15 cm x 7 cm x 0.8 cm. Usage: used for communication, internet browsing, and various applications. Fact: the first smartphone was IBM’s Simon Personal Communicator, released in 1994.
• Standard water bottle. Weight: approximately 200 grams (empty). Shape: cylindrical. Dimensions: 25 cm tall, 7 cm diameter. Usage: used for carrying water or other beverages. Fact: staying hydrated can improve your mood and cognitive function.
• Small bag of sugar. Weight: 500 grams (can be divided). Shape: rectangular. Dimensions: 25 cm x 15 cm x 5 cm. Usage: used in cooking and baking to add sweetness. Fact: sugar was once so valuable that it was referred to as “white gold.”
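The conversion formula described above is one line of code; here is a small sketch (the function and constant names are my own, not from the page):

```python
GRAMS_PER_OUNCE = 28.3495  # grams in one avoirdupois ounce

def grams_to_ounces(grams: float) -> float:
    """Convert a weight in grams to ounces."""
    return grams / GRAMS_PER_OUNCE

print(round(grams_to_ounces(195), 4))  # 6.8784
print(round(grams_to_ounces(195), 2))  # 6.88
```

Dividing by the conversion factor goes from grams to ounces; multiplying by it goes the other way.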
The optical setup used in this work is somewhat different from that used in our previous research [ ]. This time, a different set of lenses is used for magnification. Therefore, we begin with experiments to verify the concept of single vortical needles with controllable axial profiles $f(z)$, described in Equation ( ). The spatial spectra of the engineered beams are found using Equation ( ) and encoded in a phase-only mask using a checkerboard method [ ]. The parameters for the axial profiles $f(z)$ used in the experiments are the same as in the theoretical section; see Figure 1.

The optical needle with the shortest length, $L = 1$ mm, has a central spike that is ~7.3 µm in diameter (at the $1/e^2$ intensity level) and two detectable rings in the transverse intensity pattern; see Figure 4(a). This is an expected outcome given its small longitudinal dimension. As the length decreases, the properties of the optical needle become less similar to those of a Bessel beam and more like those of a Gaussian beam. Moving on to the four times longer optical needle with $L = 4$ mm, we observe the appearance of additional rings (seven in total) around the central spike. The size of the central spike remained the same within experimental tolerance, and the transverse profile largely resembles a Bessel-Gaussian beam with a significant number of concentric rings surrounding the center of the beam. Lastly, we double the length of the optical needle once more, to $L = 8$ mm; see Figure 4(c). The system of concentric rings becomes more pronounced, and the central lobe is not significantly affected. We have verified the propagation of these optical needles by measuring the intensity of the central lobes at various axial positions; see Figure 4(d). In general, the behavior was as expected from numerical simulations, with no sharp oscillations at the edges; the intensity drop is smooth, as desired.
However, we did observe some axial oscillations, which might be caused by inaccuracies in the positioning of the translation stage and possible misalignment in the optical system. Having verified that the optical setup performs as intended, we now introduce nonzero topological charges $m = 1$ and $m = 2$; see Figure 4. Starting with the shortest vortical needle, with a length of $L = 1$ mm, we observe similarities with the previous case; compare Figure 4(e) and (i) to Figure 4(a). We observe two pronounced rings: the first ring with the vortical core inside and the second one surrounding it. The third ring is weak in both cases. As expected, the radii of the first rings differ: the higher topological charge results in a larger central ring; compare Figure 4(e) to Figure 4(i). In the case of the topological charge $m = 1$, the size of the dark spot inside the first ring is ~5.6 µm, measured at the $(1 - 1/e^2)$ intensity level. Setting the length of the axial profile to a four times larger value, $L = 4$ mm, immediately results in the appearance of a well-pronounced concentric structure with nine rings for both topological charges. The sizes of the central rings surrounding the vortex cores with topological charges $m = 1$ (Figure 4(f)) and $m = 2$ (Figure 4(j)) do not change significantly. Lastly, setting the length of the super-Gaussian axial profile to $L = 8$ mm gives the transverse intensity patterns depicted in Figure 4(g) and (k). In a similar fashion to the non-vortical optical needle (see Figure 4(c)), the ring-like structure of the field becomes more pronounced. We verify the intended action of the phase mask by measuring the intensity on the first ring while performing a scan; see Figure 4(h,l). In both cases, the axial profiles of the vortical beams with the shortest length, $L = 1$ mm, resemble our expectation well; see the black curves in Figure 4(h,l).
Longer axial profiles have the expected lengths but are somewhat distorted; see the green curves in Figure 4(h,l). This might happen due to the azimuthal intensity fluctuations on the first ring (compare to Figure 4(f,j)); we might have used a non-optimal detection method, or some misalignment may be present in the optical setup. The situation improves for axial profiles designed with length $L = 8$ mm; see the red curves in Figure 4(h,l). For the topological charge $m = 1$, we were able to measure the intended axial profile. The axial profile for the topological charge $m = 2$ is flat enough, but some spikes appear. As we do not integrate over azimuths around the ring but measure at a single azimuthal angle, this might occur due to the coherent addition of a small background, which causes splitting of the central vortex and the appearance of singly charged vortices [ ].

Figure 4(m,n,o) shows the cross sections of the $m = 2$ beams marked by a red line in Figures 4(i,j,k), respectively. As stated above, the main ring is intensity dominant for the shortest optical needle: the first side ring is less than 20% of the maximum (Figure 4). Side rings appear with increasing length of the optical needle. For the case of $L = 4$ mm, the first side ring is 55% of the maximum, while for $L = 8$ mm it is 65%. Both of these values are higher than the second-ring intensity of an ideal second-order Bessel beam, which would be 42% of the maximum. The size of the dark central spot is ~11.2 µm, twice as large as the intensity minimum of the vortical optical needle with topological charge $m = 1$. Lastly, in Figure 4(p) we present $xz$ distributions of optical needles of lengths $L = 1$, $L = 4$, and $L = 8$ mm with topological charge $m = 2$. Smooth intensity distributions are generated for the optical needles with $L = 1$ mm and $L = 4$ mm.
In the case of $L = 8$ mm axial modulation is present that might occur due to splitting of the central vortex into single charged vortices [ ] as mentioned before.
c language concepts

C (pronounced /siː/, like the letter C) is a general-purpose computer programming language developed between 1969 and 1973 by Dennis Ritchie at the Bell Telephone Laboratories for use with the Unix operating system. Although C was designed for implementing system software, it is also widely used for developing portable application software. C is one of the most popular programming languages of all time, and there are very few computer architectures for which a C compiler does not exist. C has greatly influenced many other popular programming languages, most notably C++, which began as an extension to C. This blog is not intended to provide a complete C language reference; instead, I will present the important concepts and try to explain the toughest concepts in C that developers and students struggle with. I will keep posting to it. Please add your comments at any time if you feel something is wrong; of course, I am a human being, so mistakes can happen.
Structural connectome constrained graphical lasso for MEG partial coherence Structural connectivity provides the backbone for communication between neural populations. Since axonal transmission occurs on a millisecond time scale, measures of M/EEG functional connectivity sensitive to phase synchronization, such as coherence, are expected to reflect structural connectivity. We develop a model of MEG functional connectivity whose edges are constrained by the structural connectome. The edge strengths are defined by partial coherence, a measure of conditional dependence. We build a new method—the adaptive graphical lasso (AGL)—to fit the partial coherence to perform inference on the hypothesis that the structural connectome is reflected in MEG functional connectivity. In simulations, we demonstrate that the structural connectivity’s influence on the partial coherence can be inferred using the AGL. Further, we show that fitting the partial coherence is superior to alternative methods at recovering the structural connectome, even after the source localization estimates required to map MEG from sensors to the cortex. Finally, we show how partial coherence can be used to explore how distinct parts of the structural connectome contribute to MEG functional connectivity in different frequency bands. Partial coherence offers better estimates of the strength of direct functional connections and consequently a potentially better estimate of network structure. Electrophysiological signals are sampled on a millisecond time scale capturing aggregate synaptic activity from populations of neurons. These neuro-physiological signals have intrinsic time scales, organized in frequency bands; and intrinsic spatial organization, organized by functional localization and integrated by the anatomical connectivity (Nunez & Srinivasan, 2006). 
Functional connectivity (FC) (Friston, 2011) refers to statistical dependence between signals recorded from two different areas of the brain, usually measured in a predefined frequency band. This broad definition encompasses different preprocessing methods and statistical models that emphasize different temporal and spatial scales of the underlying brain activity. Coherence is a widely used measure of electroencephalography and magnetoencephalography (M/EEG) functional connectivity (Nunez & Srinivasan, 2006). Coherence is modulated across different cognitive tasks and clinical disease states (Baillet, 2017; Gross et al., 2013; Rouhinen, Panula, Palva, & Palva, 2013; Roux & Uhlhaas, 2014; Siebenhühner, Wang, Palva, & Palva, 2016; Stam, 2014). Coherence is expected to reflect delayed signal transmission along white-matter tracts, that is, structural connections (Abdelnour, Voss, & Raj, 2014; Chu et al., 2015; Meier et al., 2016; Nunez & Srinivasan, 2006; Schneider, Dann, Sheshadri, Scherberger, & Vinck, 2020; Srinivasan, Winter, Ding, & Nunez, 2007) and is thus used to characterize network structure. However, mapping coherence to the anatomy is difficult due to its susceptibility to inflation from leakage effects. Leakage effects are the shared activity across brain sources caused by the limited resolution of source localization methods (Baillet, Mosher, & Leahy, 2001; Brookes et al., 2011; Gross et al., 2001; Hämäläinen & Ilmoniemi, 1994; Wipf & Nagarajan, 2009) to spatially separate source activity mixed by EEG volume conduction and MEG field spread (Nunez & Srinivasan, 2006; Srinivasan et al., 2007). Leakage effects result in common signals with zero phase difference between sources. One approach suggested to reduce leakage effects, the imaginary coherence (Nolte et al., 2004), is based on using only the projection of signals onto a phase difference of +/−90 degrees. 
However, this distorts the interpretation of the strength of functional connectivity, by weighting toward signals with preselected phase differences. Moreover, this approach is still susceptible to spurious connections when genuine long-range connections exist at a delay and this activity is leaked to neighboring regions (Palva et al., 2018). We can use coherence or imaginary coherence to define the network edge weights, a critical first step for analyzing network structure, for example, using graph theoretical methods (De Vico Fallani, Richiardi, Chavez, & Achard, 2014; Maldjian, Davenport, & Whitlow, 2014; Niso et al., 2015; Schoonheim et al., 2013). However, both coherence and imaginary coherence reflect activity over single and multistep structural connectivity (Abdelnour et al., 2014; Chu et al., 2015; Meier et al., 2016). This distorts the definition of a path (Avena-Koenigsberger, Misic, & Sporns, 2018; Blinowska & Kaminski, 2013; Kaminski & Blinowska, 2018) over an undirected network and thus raises questions about the validity of network structure analyses using networks defined by the strength of coherence. In contrast to the coherence or imaginary coherence, partial coherence accounts for both instantaneous and lagged shared information across multiple areas (Dahlhaus, 2000; Reid et al., 2019; Sanchez-Romero & Cole, 2021). Partial coherence has a long history in neuroscience: initially applied to spike trains (Rosenberg, Halliday, Breeze, & Conway, 1998) and generalized in Baccalá and Sameshima (2001) to the partial directed coherence. The real-valued analogue, partial correlation, has been applied to fMRI data across many studies (Hinne, Janssen, Heskes, & van Gerven, 2015; Huang et al., 2010; Ng, Varoquaux, Poline, & Thirion, 2012; Ryali, Chen, Supekar, & Menon, 2012; Smith et al., 2011; Varoquaux, Gramfort, Poline, & Thirion, 2010; Wodeyar, Cassidy, Cramer, & Srinivasan, 2020). 
Partial coherence represents the strength of linear relationships between a pair of brain areas when accounting for their relationships with all other brain areas (Dahlhaus, 2000; Epskamp & Fried, 2018; Whittaker, 2009). It reduces false positive detection of direct connections resulting from activity over indirect connections, as would result from leakage effects and multistep paths. Thus, we can better interpret partial coherence as connection strength to define a functional network. However, partial coherence estimation can be challenging. Most previous studies using partial coherence have focused on cases where there are only a few nodes in the network or used the L2-norm penalization for regularization (Baccalá & Sameshima, 2001; Colclough et al., 2016; Dahlhaus, 2000; Medkour, Walden, & Burgess, 2009; Ter Wal et al., 2018), without obvious justification. The use of the L2 norm is counterintuitive, as the structural connectivity of the brain is known to be sparse, and there is little reason to minimize the edge strengths. In the fMRI literature, when estimating partial correlation, several studies have experimented with alternative regularization approaches: L1-norm (Huang et al., 2010), elastic net (Ryali et al., 2012), group-based penalization approaches (Varoquaux et al., 2010), edge-specific penalization (Ng et al., 2012), as well as Bayesian approaches to estimation (Hinne et al., 2015). However, these alternative regularization approaches have not been attempted in partial coherence estimation, in part because of the difficulty in implementing them. We expect that functional connectivity is constrained by the structural connectome. In this article, we make explicit use of the structural connectome to facilitate regularization of partial coherence estimates. We use a graphical lasso technique modified to use the structural connectome to guide the L1 penalization, a method we call the adaptive graphical lasso (AGL).
To our knowledge, this is the first time that the graphical lasso (L1-norm), and further the graphical lasso using a constraint-based penalization, has been used to estimate partial coherence for neural signals ( Colclough et al., 2016; Ter Wal et al., 2018). We select the lasso penalization through a novel cross-validation technique that separately identifies the optimal penalization on and off the structural connectome. If the penalization is lower for edges in the structural connectome, we have clearly identified that the pattern of connectivity is influenced by the structural connectome. Note that the entire structural connectome need not be estimated in the partial coherence, a subset may be estimated as a function of the data. Through simulations, we aim to demonstrate that (1) the partial coherence can be estimated accurately using the AGL, (2) we can directly test whether the structural connectome is a useful constraint in network identification, and (3) the partial coherence serves as a better functional connectivity metric than the coherence or imaginary coherence. Finally, we use the AGL-estimated partial coherence to demonstrate distinct contributions of the structural connectome to MEG signals in different frequency bands. This work is guided by the intuition that the statistics of neural activity data collected at the mesoscale (intracranial electrocorticography - ECoG) and macroscale (M/EEG) are constrained by structural connectivity of the axon fiber systems of the cortex. As such, we have built a minimal generative computational model, representing the partial coherence, that is derived from estimates of structural connectivity and we have developed a method to infer model parameters. We allowed the structural connectivity to potentially guide the estimation of the partial coherence and developed new simulations to link this work with M/EEG and ECoG data. 
Structural Connectome

We built a template of the structural connectome (SC) from a probabilistic atlas. We used streamlines generated with deterministic tractography by Yeh et al. (2018) using the HCP842 dataset (Van Essen et al., 2013) transformed to the MNI152 template brain obtained from the FMRIB Software Library (FSL). In this dataset experts vet the streamlines to remove potentially noisy estimates of axonal fibers. We applied the Lausanne parcellation (Cammoun et al., 2012) of 114 cortical and 15 subcortical regions of interest (ROIs) to the MNI152 template brain and generated a volumetric representation for each region of interest using the easy_lausanne toolbox (Cieslak, 2015). Each streamline was approximated by a single 100-point cubic spline using code adapted from the along-tract-stats toolbox (Colby et al., 2012). By identifying the streamlines which terminated in a pair of ROIs, we were able to create the SC for the Lausanne parcellation. Each streamline only connected a single pair of ROIs. An edge $W_{ij}$ between ROIs $i$ and $j$ existed if there was a streamline connecting the pair. From this process, we built the 129 × 129 undirected and unweighted structural connectome with 1,132 edges. We reduced this matrix to 114 × 114 with 720 edges (see Figure 1) after removing all the subcortical structures and limiting interhemispheric connections to homologous white-matter tracts. This latter step helped remove potentially noisy estimates of connections (while potentially increasing false negatives) where streamlines intersected and passed outside the cortical surface before reaching the terminal point in a brain region. The resulting template of structural connectivity shown in Figure 1 is referred to as the structural connectome (SC). This template is incomplete in that it does not include subcortical to cortical projections. Thus, functional connectivity resulting from structural connections not captured by this template may exist in the data.
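As a concrete illustration of this construction, the mapping from streamline endpoints to an unweighted adjacency matrix can be sketched as follows; the input format and function name are hypothetical stand-ins for the tractography output, and only the endpoint ROI pairs matter:

```python
import numpy as np

def build_connectome(endpoint_rois, n_rois):
    """Undirected, unweighted structural connectome from streamline endpoints.

    endpoint_rois: iterable of (i, j) ROI index pairs, one per streamline
    (a hypothetical input format standing in for the tractography output)."""
    W = np.zeros((n_rois, n_rois), dtype=int)
    for i, j in endpoint_rois:
        if i != j:                    # ignore streamlines starting and ending in one ROI
            W[i, j] = W[j, i] = 1     # edge exists if any streamline connects the pair
    return W

# Toy example: three streamlines, two distinct ROI pairs.
W = build_connectome([(0, 1), (0, 1), (2, 3)], n_rois=4)
```

Multiple streamlines between the same pair leave the entry at 1, matching the unweighted definition used here.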
Our estimation procedure for the graphical models of functional connectivity described in the next section allows for such connections, if needed, to account for the statistical structure in the data.

Generative Model

Complex-valued Gaussian graphical model. We assume that a vector of activity $X$ in one frequency band is a sample drawn from a complex-valued multivariate Gaussian, where $\Phi$ is the precision—the unnormalized partial coherence—and is determined by the SC. In the frequency domain, a signal can be characterized by samples of amplitude and phase, or equivalently, by complex-valued coefficients with real and imaginary parts corresponding to sine and cosine components of the signal. The complex-valued multivariate Gaussian for a zero-mean (where $\mathbb{E}[X] = 0 + 0i$) process (Schreier & Scharf, 2010) is defined by its second-order statistics. The key parameter in this model is the covariance matrix and its inverse, the precision. As defined in Equation 3, the covariance matrix for complex-valued data is composed of the familiar cross-spectrum $\Theta = \mathbb{E}[XX^{H}]$ and the complementary cross-spectrum $\tilde{\Theta} = \mathbb{E}[XX^{T}]$. Most spectral analysis methods only make use of $\Theta$ and implicitly assume circular symmetry, that is, $\tilde{\Theta} = 0$ (Schreier & Scharf, 2010). In this case, the complex-valued data is labeled as proper. With the assumption of circular symmetry, we can parameterize the complex-valued Gaussian using the precision as:
$$p(X) = \frac{|\Phi|}{\pi^{n}} \exp\left(-X^{H}\Phi X\right).$$
Each value in the precision matrix is the conditional covariance between any two variables (here, sources representing two ROIs) given the other variables (all other ROIs). The precision represents a model of functional conditional dependence between sources. The strength of the conditional dependence represents the linear relationship between any pair of sources when linear effects from all other sources are removed (see Section 2.2.2 of Pourahmadi, 2011 for an intuitive explanation in terms of multivariate linear regression).
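A minimal NumPy sketch of this generative model, assuming a small hand-picked Hermitian precision, illustrates the key property: a precision entry can be zero (conditional independence) even where the covariance, and hence the coherence, is nonzero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hand-picked Hermitian positive-definite precision for three sources:
# direct edges 0-1 and 1-2; no direct edge 0-2 (zero precision entry).
Phi = np.array([[2.0, 0.5j, 0.0],
                [-0.5j, 2.0, 0.5],
                [0.0, 0.5, 2.0]])
Sigma = np.linalg.inv(Phi)          # covariance = cross-spectral density

# Circularly symmetric complex Gaussian samples: X = L z with Sigma = L L^H
L = np.linalg.cholesky(Sigma)
n = 50_000
z = (rng.standard_normal((3, n)) + 1j * rng.standard_normal((3, n))) / np.sqrt(2)
X = L @ z

S = X @ X.conj().T / n              # empirical covariance
Phi_hat = np.linalg.inv(S)          # empirical precision
# Sigma[0, 2] is nonzero (marginal coherence mediated via node 1), while the
# precision entry Phi_hat[0, 2] stays near zero (conditional independence).
```

This is exactly the distinction the paper exploits: apparent coherence between 0 and 2 arises via node 1, so no direct edge is needed in the precision.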
For any pair of sources, if the precision is zero, there is no need for a relationship between the sources to account for observed coherence. Such apparent coherences arise from connections mediated via other sources in the model. Note that the precision directly represents a complex-valued Gaussian graphical model (Whittaker, 2009).

In the generative model, we choose to set up the precision matrix $\Phi$ to have a nonzero entry only at edges that have a connection in the SC. We are assuming that in each frequency band, coherence represents the result of joint random fluctuations of a set of oscillators whose connections are determined by the SC. The precision values are estimated using the graphical lasso in a cross-validated procedure that allows potentially using the SC as a guide for the L1 penalization. In this way the nonzero locations and values of the precision are determined by the data.

Adaptive graphical lasso. The graphical lasso (Friedman, Hastie, & Tibshirani, 2008) is a method that has been applied in multiple fields in the past decade, from genomics (Menéndez, Kourmpetis, ter Braak, & van Eeuwijk, 2010) to fMRI functional connectivity (Ng et al., 2012; Ryali et al., 2012; Varoquaux et al., 2010; Wodeyar et al., 2020) and climate models (Zerenner, Friederichs, Lehnertz, & Hense, 2014). It is used to identify a sparse approximation to the regularized precision matrix while solving problems arising from rank deficiency and small numbers of samples. To apply the lasso, we optimize the penalized likelihood function for a multivariate Gaussian (Meinshausen & Bühlmann, 2006) to estimate the precision—where $\Theta$ (Equation 4) is the cross-spectral density (CSD):
$$\hat{\Phi} = \underset{\Phi \succ 0}{\arg\min} \; \operatorname{tr}(\Theta\Phi) - \log|\Phi| + \lambda \|\Phi\|_{1}.$$
The penalization parameter $\lambda$ in the graphical lasso determines the nonzero set of precision values.
The output of the lasso from Equation 6 is the precision matrix $\hat{\Phi}$. We made use of the lasso to estimate the precision while taking advantage of the knowledge of the SC to hypothesize the likely locations of nonzero precision values. We made use of the lasso optimization from quadratic approximation for sparse inverse covariance, or QUIC (Hsieh, Dhillon, Ravikumar, & Sustik, 2011), using a matrix penalty term (this process is also called the adaptive lasso; Zou, 2006) determined by the SC: edges in the SC ($W_{ij} = 1$) are penalized by $\lambda_{1}$, while all other possible edges are penalized by $\lambda_{2}$:
$$\hat{\Phi} = \underset{\Phi \succ 0}{\arg\min} \; \operatorname{tr}(\Theta\Phi) - \log|\Phi| + \sum_{i \neq j} \Lambda_{ij}|\Phi_{ij}|, \qquad \Lambda_{ij} = \begin{cases}\lambda_{1} & \text{if } W_{ij} = 1,\\ \lambda_{2} & \text{otherwise.}\end{cases}$$
Note that in the limiting case of $\lambda_{1} = \lambda_{2}$, the likelihood function is the same as it is for the graphical lasso. We determine $\lambda_{1}$ and $\lambda_{2}$ using cross-validation. This crucial setup simultaneously (1) provides a measure of the usefulness of the SC as a hypothesis on MEG functional connectivity and (2) serves as a principled thresholding mechanism for weak connections. By optimizing the penalized likelihood, we leveraged the information in the SC as a hypothesis for our lasso estimate. We derive the graph with vertices $V = \{1, 2, \ldots, 114\}$ and edges $E$ from the precision based on the nonzero values in $\hat{\Phi}$. The final precision matrix is estimated under the unpenalized Gaussian likelihood for the set of edges defined by the graphical model, using a function from the PMTK3 toolbox (Murphy & Dunham, 2008) which optimizes the unpenalized Gaussian log-likelihood restricted to the edge set $E$. Since the covariance is usually rank deficient, we add a small value ($\epsilon$) along the diagonal to make it full rank. We fixed $\epsilon$ as 0.001 times the maximum value along the upper triangle of the covariance. We test whether the AGL produced estimates of the precision that show reduced error relative to applying the graphical lasso using cross-validation. Note that applying the graphical lasso would be equivalent to having the penalization inside and outside the SC be equal, that is, $\lambda_{1} = \lambda_{2}$. We estimated the appropriate values for $\lambda_{1}$ and $\lambda_{2}$ using cross-validation. We split data into four ensembles, and repeated the following analysis with each ensemble.
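The penalty structure described here can be sketched as follows. This is not the QUIC solver itself, just an illustration of how the SC defines the element-wise penalty ($\lambda_1$ on SC edges, $\lambda_2$ elsewhere) and how the resulting objective would be evaluated; the function names are ours:

```python
import numpy as np

def penalty_matrix(SC, lam_on, lam_off):
    """Element-wise AGL penalty: lam_on for edges present in the structural
    connectome, lam_off elsewhere; variances (diagonal) are not penalized."""
    Lam = np.where(SC > 0, float(lam_on), float(lam_off))
    np.fill_diagonal(Lam, 0.0)
    return Lam

def penalized_negloglik(Phi, Theta, Lam):
    """Penalized negative Gaussian log-likelihood:
    tr(Theta @ Phi) - log|Phi| + sum(Lam * |Phi|)."""
    _, logdet = np.linalg.slogdet(Phi)
    return np.real(np.trace(Theta @ Phi)) - logdet + np.sum(Lam * np.abs(Phi))

# Toy 3-node chain SC: edges (0,1) and (1,2) get the smaller penalty.
SC = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]])
Lam = penalty_matrix(SC, lam_on=0.01, lam_off=0.5)
```

With `lam_on == lam_off` the objective reduces to the vanilla graphical lasso, which is exactly the limiting case used for inference on the SC.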
We estimated the precision on one ensemble of the data and estimated the deviance when using this precision as the inverse for the covariance for all the other ensembles of the data (and vice versa). Deviance was estimated from the negative of the held-out Gaussian log-likelihood:
$$D = \operatorname{tr}(\Theta_{\text{test}}\hat{\Phi}_{\text{train}}) - \log|\hat{\Phi}_{\text{train}}|.$$

Partial coherence. In every frequency band, or for each iteration of our simulation, we estimated the precision for complex-valued data incorporating amplitude and phase for a frequency band. The normalization of the precision ($\Phi$) yields the partial coherence (Dahlhaus, 2000), estimated using:
$$\Pi_{ij} = \frac{|\Phi_{ij}|}{\sqrt{\Phi_{ii}\Phi_{jj}}}.$$

Contemporary Methods for Functional Connectivity

We considered three alternative methods to compare against the partial coherence model estimated from the AGL: coherence, imaginary coherence, and the partial coherence estimated when regularizing using the L2 norm. We estimate coherence from the cross-spectral density $\Theta$, whose entries carry the amplitude and phase information in one frequency band from two sources, as:
$$C_{ij} = \frac{|\Theta_{ij}|^{2}}{\Theta_{ii}\Theta_{jj}}.$$
Imaginary coherence is believed to reduce the influence of volume conduction and zero-phase-lag connectivity (such as would exist from source leakage). The idea is to minimize this effect by estimating the consistency of the imaginary part of the cross-spectral density between two sources. We measure it using (where $\Im$ refers to the imaginary component of the complex value from the cross-spectral density):
$$IC_{ij} = \frac{|\Im(\Theta_{ij})|}{\sqrt{\Theta_{ii}\Theta_{jj}}}.$$
Coherence and imaginary coherence networks are defined using a threshold derived using bootstrapping (Zoubir & Boashash, 1998). We define a population distribution by resampling 1,000 times with replacement. We kept edges with distributions that did not cover 0 at an alpha value of 0.05. Finally, we consider an alternative regularization to estimate the partial coherence—an L2-norm penalization. This style of regularization does not force precision values to zero but instead minimizes them to optimize the likelihood. The penalized likelihood for the L2-norm inverse is:
$$\hat{\Phi} = \underset{\Phi \succ 0}{\arg\min} \; \operatorname{tr}(\Theta\Phi) - \log|\Phi| + \lambda \|\Phi\|_{2}^{2}.$$
We need to identify a threshold for inference on the edges of the precision.
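The three normalizations can be written compactly. The following NumPy sketch follows the definitions above: coherence and imaginary coherence from the CSD $\Theta$, partial coherence from the precision $\Phi$; the function names are ours:

```python
import numpy as np

def coherence(Theta):
    """Magnitude-squared coherence from the cross-spectral density."""
    d = np.real(np.diag(Theta))
    return np.abs(Theta) ** 2 / np.outer(d, d)

def imaginary_coherence(Theta):
    """Normalized magnitude of the imaginary part of the CSD."""
    d = np.real(np.diag(Theta))
    return np.abs(np.imag(Theta)) / np.sqrt(np.outer(d, d))

def partial_coherence(Phi):
    """Off-diagonal normalization of the precision (Dahlhaus-style)."""
    d = np.real(np.diag(Phi))
    P = np.abs(Phi) / np.sqrt(np.outer(d, d))
    np.fill_diagonal(P, 0.0)
    return P

# Example: a two-node system where the precision has a purely imaginary edge.
Phi = np.array([[2.0, 1j], [-1j, 2.0]])
Theta = np.linalg.inv(Phi)
```

For this two-node example the coherence, imaginary coherence, and partial coherence are all nonzero, since with only two nodes there is nothing to condition on; the measures diverge only in larger networks with indirect paths.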
Using a novel cross-validation procedure that mirrors the approach we applied under the AGL (using the likelihood function to estimate deviance), we optimize for the L2-norm penalization ($\lambda$) and the threshold. The threshold to be applied is determined as a percentile—between 5 and 95—of the weights, whose optimal value is identified using cross-validation.

We wished to test the accuracy of the AGL to estimate the precision matrix. To do so we simulate from a generative model and attempt to recover the parameters. The generative model we use is a complex-valued multivariate normal where the nonzero values in the precision define an undirected network (as specified in Equation 4). For each simulation, and each iteration, we generated new networks with random weights for edges. While the edge locations are kept consistent within a simulation, we randomized the weights on the edges. The internal variability of each area/node changes across simulation iterations, therefore changing the signal-to-noise ratio for each edge. We examined each simulation under two (or more) sampling scenarios—one where the number of samples is comparable to the number of nodes and one where there are many more samples than the number of nodes. For each simulation, where we always possess ground truth information, we assessed whether the AGL (1) inferred the usefulness of the network constraint, (2) recovered the true edges, (3) controlled the false positives, and (4) correctly estimated the edge weights of the partial coherence.

Simulation 1: Structural connectome simulation. In all three simulations, to generate novel precision matrices, we retained the edge locations from the original SC but simulated random weights for the edges sampled from a normal distribution, N(100, 30). Finally, each edge is assigned a random phase ($\mu$) based on sampling from a Gaussian distribution (mean = $\pi/2$, SD = 0.25). After multiplying each edge weight with the phase term, we can generate the precision.
This represents the complex-valued, circularly symmetric precision matrix ($\Phi$) for a frequency band. We tested whether the precision is positive definite by attempting to generate the Cholesky factorization of the matrix using the MATLAB function chol. If not, we continuously added the summed absolute value of the rows to the diagonal until the matrix was positive definite. Using the precision, we determined the cross-spectral density as its inverse ($\Theta = \Phi^{-1}$). The cross-spectral density has a real-valued equivalent representation (Schreier & Scharf, 2010). We can treat the real and imaginary components of the CSD as separate variables governed by a joint covariance structure. Complex-valued Gaussian values were sampled using the MATLAB function mvnrnd operating on the real-valued CSD.

Simulation 2: Fake network constraint. In the second simulation we examined whether the AGL permits inference about the hypothesized network, that is, whether we can use the penalizations chosen under cross-validation to judge the accuracy of the hypothesized network. We began with the same approach as in the first simulation, generating precision matrices and samples from the true structural connectome. However, we changed how we applied the AGL. Rather than use the true network, we provided a fake network generated by shuffling the nodes of the structural connectome, thus allowing us to preserve the degree distribution of the original network. We shuffle nodes using the randperm function in MATLAB to generate 114 integers between 1 and 114 without repetition. Every iteration of the simulation, we shuffled the nodes of the SC so that the number of edges and general connectome structure are retained while the actual node identities are altered. The penalization structure under a fake network is expected to revert to the vanilla graphical lasso, with constant penalization across the entire matrix. We collapsed results across all iterations to assess if this occurred.
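The precision-generation recipe used in these simulations (random magnitudes, random phases near $\pi/2$, and diagonal loading until the Cholesky factorization succeeds) can be sketched in NumPy as follows; the function name and the initial diagonal value are ours:

```python
import numpy as np

def simulate_precision(SC, rng):
    """Random complex precision with edges only at SC locations: magnitudes
    ~ N(100, 30), phases ~ N(pi/2, 0.25). The diagonal is loaded until the
    Cholesky factorization succeeds, mirroring the chol-based check in the text."""
    n = SC.shape[0]
    rows, cols = np.triu_indices(n, k=1)
    keep = SC[rows, cols] > 0
    Phi = np.zeros((n, n), dtype=complex)
    w = rng.normal(100.0, 30.0, keep.sum())
    mu = rng.normal(np.pi / 2, 0.25, keep.sum())
    Phi[rows[keep], cols[keep]] = w * np.exp(1j * mu)
    Phi = Phi + Phi.conj().T                  # Hermitian symmetry
    np.fill_diagonal(Phi, 1.0)
    while True:
        try:
            np.linalg.cholesky(Phi)           # succeeds iff Phi is positive definite
            return Phi
        except np.linalg.LinAlgError:
            Phi += np.diag(np.abs(Phi).sum(axis=1))

# Toy SC: edges (0,1) and (2,3)
SC = np.array([[0, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])
Phi = simulate_precision(SC, np.random.default_rng(0))
Theta = np.linalg.inv(Phi)                    # cross-spectral density
```

Sampling from the resulting model then proceeds exactly as in the earlier complex Gaussian sketch, via a Cholesky factor of `Theta`.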
Simulation 3: Forward solution and source localization simulation. In the third simulation, we generated pseudo-MEG data. This simulation tested the ability of different methods to overcome the spatial blurring induced by the process of source localization—leakage effects and incomplete demixing of source signals. For a visual depiction of this simulation, please see Figure 2. We first built an MEG forward model—an estimate of the magnetic field measured at MEG sensors above the scalp generated by current sources located in the brain. We built the forward model for the Neuromag MEG system consisting of 306 MEG coils at 102 locations above the scalp (shown in Figure 2). At each location, there are 3 sensors—one magnetometer that measures the component of the magnetic field passing through the coil and two planar gradiometers that measure the gradient of this magnetic field in two orthogonal directions. We made use only of the orthogonal pair of planar gradiometer coils (102 pairs of sensors at 102 locations), as planar gradiometer coils have better spatial resolution than magnetometer coils thus facilitating source localization (Malmivuo, Malmivuo, & Plonsey, 1995). The forward model is built for a specific head model, which we developed here from the fsaverage MRI image from the Freesurfer toolbox (Fischl, 2012). The tessellated cortical surfaces for right and left hemisphere were extracted using the recon-all pipeline in Freesurfer and then downsampled to 81,000 (81k) vertices (mris_decimate from Freesurfer). We used this surface to constrain dipole orientation and define the volume of the model corresponding to the cortex. We generated the inner skull, outer skull, and scalp surfaces approximated with 2,562 vertices from the fsaverage head generated using the mri_watershed function. 
Using these surfaces, and with the conductivities of the scalp, CSF and brain set at 1 S/m and the skull at 0.025 S/m (i.e., 40 times lower conductivity), we applied the OpenMEEG toolbox (Gramfort, Papadopoulo, Olivi, & Clerc, 2010) to compute a boundary element model (BEM). Each row of the MEG forward matrix from the BEM is the magnetic field gradient detected across all 204 gradiometers from a unit current density source at one of the 81k cortical surface vertices. Using the Lausanne parcellation for 114 cortical ROIs (Cammoun et al., 2012), we subdivided the cortical surface and identified vertices belonging to each ROI using the volumetric parcellation of the fsaverage brain. Using this organization of vertices we then reduced the representation of the current source for each ROI down to a set of three dipoles in the x, y, and z directions at a single location. The location of the source for each ROI was selected by taking a weighted average of vertex locations where the weight of each location was determined by the magnitude (L2 norm) of the field generated at the gradiometers. In this way, we reduced our source model to 114 source locations, with three sources at each location in the canonical x, y, and z directions. We computed a new MEG forward matrix (M) of dimension 204 × 342 using OpenMEEG which approximates the linear mixing of source activity at the gradiometers to generate the measured MEG signals. We simulate source activity across 114 areas using the precision with edges determined by the structural connectome, that is, one sample from the real-valued equivalent of the inverse of the precision is a 114 × 1 vector. To this source activity, we added independent noise with variance set such that the ratio of the trace of the noise to the CSD was controlled at 25 dB. We forward modeled the data to the MEG sensors. A sample of the MEG data is represented as a complex-valued vector of length equal to the number of MEG sensors (204 sensors). 
The set of samples of MEG data $Y$ relates to source activity $X$ through the MEG forward matrix $M$, that is, $Y = MX$. We localize activity to the 342 sources (three directions, along the $x$, $y$, and $z$ axes at 114 locations) by inverting the reduced lead field using regularized weighted minimum norm estimation (weighted L2 norm; Dale & Sereno, 1993) and applying it to data at the scalp. We estimated the inverse operator using (where $\lambda$ is a penalization term):
$$G = M^{T}\left(MM^{T} + \lambda I\right)^{-1}.$$
We defined $\lambda$ as the 10th percentile of the weights of $MM^{T}$. The estimated source activity is then $\hat{X} = GY$. We identify the time series for the three dipoles along the $x$, $y$, and $z$ directions. Using a singular value decomposition at each ROI, we identified the optimal orientation of the dipole as the first singular vector. Using the first singular vector at each ROI, we reduced the source data from 342 × 1 to 114 × 1 for each sample. We used the source localized data as the input to the AGL to estimate partial coherence. We also estimated the coherence, imaginary coherence, and partial coherence under the L2 norm.

Metrics for the accuracy of the functional connectivity estimates. Across all simulations we used the ground truth to help us understand the performance of different algorithms. To understand whether the AGL is better than the vanilla graphical lasso, we examined the penalization applied on the edges and nonedges of the network provided as a constraint in simulations 1, 2, and 3. Across all methods in simulations 1 and 3, we looked at the number of true edges recovered, the number of false positives estimated, and the accuracy of estimated edge weights. To ascertain the accuracy of estimated edge weights, we calculated the Pearson correlation between the Fisher r-to-z transformed edge weights across the set of true edges, that is, all edges in the ground truth model.

Application to MEG Data

MEG data. The MEG data we analyzed was shared by the Cambridge Centre for Ageing and Neuroscience (CamCAN).
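The minimum-norm inverse and the SVD-based orientation collapse described above can be sketched as follows; this simplified version omits the depth weighting of the full weighted minimum norm estimator, and the function names are ours:

```python
import numpy as np

def min_norm_inverse(M, lam):
    """Regularized (unweighted) minimum-norm inverse operator G, so that
    source estimates are X_hat = G @ Y for sensor data Y."""
    n_sens = M.shape[0]
    return M.T @ np.linalg.inv(M @ M.T + lam * np.eye(n_sens))

def collapse_orientation(X_xyz):
    """Collapse a 3 x samples block (x, y, z dipole directions at one ROI)
    to a single time series by projecting onto the first singular vector."""
    U, _, _ = np.linalg.svd(X_xyz, full_matrices=False)
    return U[:, 0] @ X_xyz

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 12))          # toy forward matrix: 8 sensors, 12 sources
Y = M @ rng.standard_normal((12, 100))    # noiseless toy sensor data
X_hat = min_norm_inverse(M, lam=0.1) @ Y  # source estimates, 12 x 100
roi_ts = collapse_orientation(X_hat[:3])  # one ROI's three dipole directions
```

At real scale the matrices would be 204 × 342 (sensors × oriented sources), and the collapse step would be applied per ROI to reduce 342 rows to 114.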
CamCAN funding was provided by the UK Biotechnology and Biological Sciences Research Council (grant number BB/H008217/1), together with support from the UK Medical Research Council and University of Cambridge, UK. This data was obtained from the CamCAN repository (available at https://www.mrc-cbu.cam.ac.uk/datasets/camcan/; Shafto et al., 2014; Taylor et al., 2017) and was conducted in accordance with the Helsinki declaration and approved by the Cambridgeshire 2 Research Ethics Committee (reference: 10/H0308/50). MEG data was collected using a 306-sensor VectorView MEG system (Elekta Neuromag, Helsinki). The 306 sensors consisted of 102 magnetometers and 204 planar gradiometers. The data were sampled at 1000 Hz and highpass filtered at 0.3 Hz. This data was run through temporal signal space separation (tSSS; Taulu et al., 2005; MaxFilter 2.2, Elekta Neuromag Oy, Helsinki, Finland) to remove noise from external sources and to help correct for head movements (the location of the head was continuously estimated using Head Position Indicator coils). MaxFilter was also used to remove the 50 Hz line noise and to automatically detect and reconstruct noisy channels.

Spectral analysis. We extracted 480 seconds of resting-state gradiometer data for a single individual. We first applied a band-pass filter between 0.5 and 100 Hz and a notch filter at 50 Hz to remove line noise. We built elliptic filters (designed using the fdesign.bandpass function in MATLAB) with the stop band set to 0.5 Hz below and above the pass band, stopband attenuation set to 100 dB, and passband ripple set to 0.02. Band-pass filtering was then done using the filtfilthd function in MATLAB to minimize phase distortion. We analyzed five frequency bands: delta (1–3 Hz), theta (4–7 Hz), alpha (8–13 Hz), beta (14–29 Hz), and gamma (30–80 Hz). Within each band we optimized the dipole orientation across 114 ROIs as described in the section describing Simulation 3.
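An approximate SciPy translation of this filter design is sketched below; MATLAB's fdesign.bandpass and filtfilthd have no exact SciPy equivalents, so `ellipord`/`ellip` with the same passband, stopband offset, ripple, and attenuation settings stand in for them:

```python
import numpy as np
from scipy import signal

fs = 1000.0
bands = {"delta": (1, 3), "theta": (4, 7), "alpha": (8, 13),
         "beta": (14, 29), "gamma": (30, 80)}

def bandpass_sos(lo, hi, fs, atten_db=100.0, ripple_db=0.02):
    """Elliptic band-pass with stopband edges 0.5 Hz outside the passband."""
    nyq = fs / 2
    wp = [lo / nyq, hi / nyq]
    ws = [(lo - 0.5) / nyq, (hi + 0.5) / nyq]
    order, wn = signal.ellipord(wp, ws, gpass=ripple_db, gstop=atten_db)
    return signal.ellip(order, ripple_db, atten_db, wn,
                        btype="bandpass", output="sos")

# Zero-phase filtering (the filtfilt analogue of filtfilthd) of 10 s of noise.
x = np.random.default_rng(0).standard_normal(int(10 * fs))
alpha = signal.sosfiltfilt(bandpass_sos(*bands["alpha"], fs), x)
```

Forward-backward filtering doubles the effective attenuation, which is why zero-phase filtering is acceptable here despite changing the magnitude response.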
Using the band-pass filtered data, we estimated adaptively source-localized data within each frequency band. Source-localized broadband data, using band-specific source dipole orientations, was multitapered and Fourier transformed in 1-second windows. We used all frequencies in every band, to avoid averaging over frequencies, generating a 480 × 114 complex-valued matrix used for estimating the cross-spectral density. Using the complex-valued data within each frequency band, we thus have a 480 × 114 matrix which served as the input for estimating the partial coherence. We split the 480 samples from 114 sources into four contiguous ensembles of 120 samples each, based on the expectation that we would have robust, stationary networks estimable with 120 seconds (Chu et al., 2012). Further, having four ensembles allowed for four-fold cross-validation. Within each ensemble we estimated the cross-spectral density and, using the AGL, the precision. We then followed the same procedure as described earlier in the section on cross-validation. Thus, at the end of the analysis we had, for each subject, the partial coherence across all five frequency bands.

Simple Five Node Network

As a proof-of-concept simulation, we examined network recovery of a sparse five node network with five edges (see Figure 3A) representing the precision. We sampled data for each node from the inverse of the precision, the cross-spectral density. We apply the AGL to the observed data to extract the partial coherence: a network with weighted edges. We considered two cases: first, where we have a small number of samples (24 independent samples) and, second, where we have a large number of samples (240 independent samples). Each simulation (24 and 240 samples) was repeated 200 times. The cross-validation process allows the AGL to place the same penalization everywhere; thus the penalization values assess the usefulness of a network constraint.
We see from the penalization distribution (Figure 3) that there is reduced penalization for true edges relative to nonedges, as we expected, both with 24 and 240 samples. The second metric of interest is edge recovery. In Figure 3B (middle column) we can see that the false positives are well controlled (with the distribution concentrated at zero edges) while we recover between two and all five of the true edges present despite only 24 available samples. With 240 samples (Figure 3C, middle column), we recovered all true edges in all 200 simulations and avoided any false positives in 95% of simulations. The final test is the recovery of the actual edge weights, the complex values representing connection strength and relative phase. We estimate this correspondence using a correlation between the true edges and the recovered edges. A high correlation implies that the estimated complex-valued vectors tend to align with the orientations and strengths of the original complex-valued vectors, while a correlation close to 0 indicates incorrect weight and orientation (an orthogonal vector or a zero vector). From Figures 3B and 3C we can see that the correlation is 0.5 with 24 samples, while it is nearly 1 with 240 samples. We conclude that we are able to recover the weights and edges of the precision even when we have only 24 samples, but with (an order of magnitude) more samples, we are able to recover the precision almost perfectly.

Recovering the Structural Connectome

In the second simulation, we considered an order of magnitude increase in the number of nodes and edges. We used the structural connectome across 114 areas. The network is sparse, with 720 weighted edges out of a possible 6,441 edges. The inverse of the precision determined from the SC could represent the cross-spectral density estimated from intracranial electrocorticography (ECoG).
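Such a cross-spectral density is estimated in practice by averaging outer products of windowed complex Fourier coefficients; a minimal sketch in our own notation:

```python
import numpy as np

def cross_spectral_density(Z):
    """Estimate the cross-spectral density at one frequency from complex
    Fourier coefficients Z of shape (n_windows, n_sources): the average
    outer product Z^H Z / n_windows."""
    return (Z.conj().T @ Z) / Z.shape[0]
```

The result is Hermitian; its (regularized) inverse is the precision that the AGL estimates.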
Similar to the first simulation, we examined the performance of the AGL in estimating the correct partial coherence when we have 480, 960, 1,440, 1,920, and 2,400 samples. Since we simulate from a covariance structure with nonzero intra-ROI variance, the signal-to-noise ratio of each individual edge is modulated in every simulation iteration. When simulating data from the structural connectome in a low-sample case (480 samples), the AGL identifies the correct penalty structure (Figure 4A, left column) and controls false positives (Figure 4A, middle column). Network recovery under the AGL in a high-sampling situation (2,400 samples) is nearly perfect (Figure 4B). The penalization structure consistently (across all sampling scenarios) indicated lower penalization on SC edges relative to non-SC edges, the false positives were controlled (also across all sampling scenarios), real edges were identified (≥500 of 720), and the edge weights, that is, the partial coherence, were well recovered (correlation ≥ 0.7). This showed that the AGL is able to infer a penalization structure that uses the structural connectome. Even when we simulated only 480 samples, the AGL minimized false positives, showed the usefulness of knowledge of the SC, and reasonably recovered the network weights. We conclude that low numbers of samples do not pose an impossible hurdle in judging the usefulness of the structural connectome, recovering the structural connectome, and controlling false positives.

Inferring an Inaccurate Structural Connectome Constraint

We forced model misspecification onto the AGL and examined the results. We introduce model misspecification by altering the constraint provided to the AGL relative to the generative network model. We expect that the penalization structure of the AGL will reflect when we have used an incorrect network as a potential constraint: a shift in penalization toward the graphical lasso, that is, a uniform penalization.
An alternative hypothesis is that the AGL always uses any constraint provided: the penalization never approaches the graphical lasso. We test these hypotheses by shuffling node identities for the SC network constraint provided to the AGL, while still generating data from the precision determined by the structural connectome. Examining the penalization structure (Figures 5B and 5C, left column), we find that the AGL does not place lower penalization values on the fake network edges, instead approaching the flat penalization of the graphical lasso. However, this does not imply network recovery of either the fake or the true network (Figures 5B and 5C, middle and right columns), with both the false positives and the true edges suppressed in both networks at 480 samples. Penalization at 480 samples is placed uniformly high across the whole network (on and off the fake network edges). However, at 2,400 samples, the AGL places a small uniform penalization across the whole network, allowing more true edges to be estimated (Figure 5C, right column), while the false positives driven by the fake network continue to be controlled (Figure 5C, middle). This suggests that while the AGL remained constrained by the network provided, (1) an incorrect network constraint can be inferred from the penalization structure and (2) with sufficient samples the true network can be partially recovered despite an incorrect constraint.

Comparing AGL to Contemporary Network Recovery Approaches

Many contemporary algorithms aim to estimate the networks scaffolding EEG/MEG/ECoG data. We compared three methods that make similar assumptions about the data as the AGL-estimated partial coherence: coherence (Bendat & Piersol, 2011), imaginary coherence (Nolte et al., 2004), and the partial coherence estimated under an L2-norm inverse (Colclough et al., 2016; Ter Wal et al., 2018). We first compared these methods when recovering a network with structural connectome edges using 480 samples.
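Both comparison measures derive from the same cross-spectral density S; a sketch under one common convention (magnitude-squared coherence, and the imaginary part of the complex coherency in the spirit of Nolte et al., 2004):

```python
import numpy as np

def coherence_matrices(S):
    """From a cross-spectral density S, return the magnitude-squared
    coherence and the imaginary part of the complex coherency."""
    d = np.sqrt(np.real(np.diag(S)))
    coherency = S / np.outer(d, d)      # normalize by the auto-spectra
    return np.abs(coherency) ** 2, np.imag(coherency)
```

Zero-lag (purely real) coupling, such as leakage from source localization, drops out of the imaginary part, which is why imaginary coherence is a common leakage-robust alternative.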
We compared the methods on true positives, false positives, and network weight recovery (Figure 6). We found that, at 480 samples, all methods were able to recover the true SC edges; however, they also estimated a considerable number of false positive edges. The AGL was considerably better at controlling false positive edges than all other methods, with the imaginary coherence performing better than the coherence and the L2 norm. When estimating network weights, the L2 norm inverse and the AGL did considerably better than the coherence and imaginary coherence. At 2,400 samples, we saw similar performance differences across methods, with the AGL continuing to outperform all other methods at controlling false positives. Further, the AGL is better than the L2 norm inverse at estimating the network weights of the true network at 2,400 samples. We conclude that the AGL-estimated partial coherence outperforms contemporary algorithms at recovering the underlying network, both when we have limited independent samples and when we have large numbers of independent samples.

Network Recovery After Source Localization

In M/EEG research, we must both recover the network from limited samples and reduce the impact of signal leakage from source localization. We applied a commonly used source localization technique (weighted L2 norm inverse; see details in the Methods section on Simulation 3) to attempt to recover sources, and then applied the AGL and the other algorithms to this recovered source activity. Examining the 480-sample case (Figure 7A), we see that the AGL continues to outperform all other methods at controlling false positives in the network. However, the other network recovery techniques were comparable in recovering true SC edges, with the coherence recovering all edges but also including a large number of false positives. All methods were comparably poor at recovering the network weights.
When we have more samples (2,400; see Figure 7B), we see that the AGL clearly outperforms all algorithms on all metrics measured, with the correlation with network weights reaching 0.58. This suggests that when more samples were available we were able to partially overcome the difficulties imposed by source localization by using the AGL-estimated partial coherence.

Application to MEG Data

We extracted 480 seconds of preprocessed resting-state MEG data from a single subject from the open-source CamCAN dataset. We source localized this data (using the weighted L2 norm; Dale & Sereno, 1993) to the 114 areas of the Lausanne parcellation. After source localization, we used 1-second windows to get amplitude and phase samples at each frequency from 1 to 50 Hz using the multitaper method. We applied the AGL with our cross-validation procedure (see the Cross-Validation section) to estimate the partial coherence. Note that since we examined only a single subject, we intended this only as a demonstration of how the AGL-estimated partial coherence could be used. Further, we do not have a ground truth in this situation, so we focus on the penalization structure to infer whether the structural connectome (SC) is useful information in modeling the coherence. We did find that the SC serves as a useful constraint in the delta (2/3 frequencies), theta (1/4 frequencies), and beta bands (11/15 frequencies), but not in the alpha (0/6 frequencies) or gamma (0/20 frequencies) bands. The null results in the alpha and gamma bands indicate that the measured functional connectivity involves other connections, for example, thalamocortical or other subcortical projections not included in the structural connectome. Finally, when we applied a fake network, a shuffled SC, as a potential constraint, we found that none of the frequency bands used the constraint applied and the algorithm chose the vanilla graphical lasso.
This indicates that only the SC serves as a useful constraint in the delta, theta, and beta bands. For the cases when the SC was a useful constraint, the partial coherence estimates the edges of the SC that are relevant for each frequency, which can be a subset of the structural connectome. We examined the edge weights in the delta, theta, and beta bands, looking for which band had the highest weight at each SC edge. We show this in Figure 8. Beta band networks tend to have connections distributed across the cortex, while theta and delta connections are more circumscribed. The delta band shows connections within frontal and cingulate regions and from frontal/cingulate to parietal regions. The theta band shows consistent connectivity across the left and right hemispheres between temporal and parietal/occipital regions. Beta band connectivity dominates throughout the rest of the structural connectivity, with little specificity. The relevance of the structural connectivity to beta band functional connectivity is consistent with past research (Garcés et al., 2016). We conclude that the AGL can be applied to empirical data to discover networks in different frequency bands.

We developed a model of MEG coherence constrained by knowledge of anatomical connectivity in the structural connectome. We showed that we can accurately infer the weighted network connectivity by means of partial coherence, for the first time, using the AGL. This method can assess whether the structural connectome is useful as a constraint for estimation of the partial coherence by comparing the penalization applied to the structural connectome to the penalization applied outside it. Finally, we demonstrated how, when the functional connectivity is simulated from the structural connectome, the AGL-estimated partial coherence outperforms coherence, imaginary coherence, and the L2-norm estimated partial coherence.
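For concreteness, the partial coherence discussed throughout follows directly from the precision matrix; a sketch under one common convention (not necessarily the paper's exact normalization):

```python
import numpy as np

def partial_coherence(theta):
    """Partial coherence from a (possibly complex) precision matrix:
    |theta_ij|^2 / (theta_ii * theta_jj), with the diagonal zeroed.
    An off-diagonal zero in theta implies zero partial coherence."""
    d = np.real(np.diag(theta))
    pc = np.abs(theta) ** 2 / np.outer(d, d)
    np.fill_diagonal(pc, 0.0)
    return pc
```

Because zeros of the precision map to zeros of the partial coherence, the sparsity pattern selected by the AGL is directly the edge set of the estimated network.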
Functional Connectivity Network Using Partial Coherence

The AGL yields a new measure of functional connectivity that is based on the expectation that the structural connectivity scaffolds the functional connectivity. Critically, the method also allows functional connections to exist that are not prescribed in the structural connectome. Given the ability of the partial coherence to reduce false positives and provide an accurate definition of a path (Avena-Koenigsberger et al., 2018), it serves as a useful electrophysiological functional connectivity measure for network analyses (Reid et al., 2019). Further, the precision can potentially be applied toward other analyses that attempt to decode the causal direction of connections (Baccalá & Sameshima, 2001; Sanchez-Romero & Cole, 2021).

Estimating Partial Coherence Using the Structural Connectome

Regularization is essential to estimate partial coherence for large networks. We argued that an L1 norm regularization is more intuitive than the L2 norm because the structural connectome is sparse. We can explicitly incorporate the structural connectome (SC) into the partial coherence estimate through the AGL. Past work applying a matrix penalty term to the graphical lasso (Pineda-Pardo et al., 2014), using it to estimate the partial correlation, has directly forced the SC connection weights onto the penalization weighting. In contrast to Pineda-Pardo et al. (2014), we expected that the SC strengths are unlikely to map directly onto the strengths of the precision due to individual differences and variance within individuals across functional brain states. In addition, we expected that the SC can have different contributions across frequency bands yielding different connection weights. For these reasons we used the binarized SC to potentially organize the L1 penalization, that is, we allowed the penalization to entirely ignore the SC if appropriate.
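The binarized-SC penalization just described amounts to a two-level penalty matrix handed to the graphical lasso; a sketch (parameter names are ours):

```python
import numpy as np

def agl_penalty_matrix(sc, lam_sc, lam_out):
    """Two-level L1 penalty from a binarized structural connectome:
    lam_sc on SC edges, lam_out on non-SC edges, zero on the diagonal."""
    sc = (np.asarray(sc) != 0).astype(float)   # binarize any weighted SC
    P = np.where(sc > 0, lam_sc, lam_out)
    np.fill_diagonal(P, 0.0)
    return P
```

When cross-validation selects lam_sc well below lam_out, the SC constraint is informative; lam_sc close to lam_out recovers the ordinary uniform graphical lasso, which is how an uninformative (or shuffled) constraint reveals itself.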
Using Larger Numbers of Samples in Functional Connectivity Research

We found that the accuracy of network recovery is contingent on the number of samples used. While a subset of the network was recoverable when the number of samples was comparable to the number of nodes, the simulations showed considerably improved performance with higher numbers of samples. While past work has suggested that the coherence can converge within a few hundred samples (Chu et al., 2012), we saw that for the imaginary coherence and the partial coherence, larger numbers of samples significantly improved performance. This provides impetus to use longer recordings (10 minutes or more) to estimate resting-state electrophysiological functional connectivity, similar to recent work in functional-MRI research (Gordon et al., 2017).

In the simulations, we assumed a generative model where brain areas show random oscillatory behavior linked by the structural connectome. This could be represented using a zero-mean complex multivariate normal with a circularly symmetric precision. More detailed mean-field models of neural activity may be more phenomenologically accurate (Cabral et al., 2014), although past work suggests there is limited gain in using them when explaining empirical data (Finger et al., 2016; Messé, Rudrauf, Giron, & Marrelec, 2015). As such, there is value in having multiple models to explain the data as a function of the hypothesis being tested. While partial coherence offers a clearer definition of a direct connection between areas, it is potentially susceptible to false positives, depending on the causal structure of common effects in the network (Sanchez-Romero & Cole, 2021). Specifically, if two nodes A and B directly influence a third node C, and A and B are unassociated, then a false positive connection can appear between A and B.
As such, the partial coherence could potentially be better used in concert with the coherence, as proposed by Sanchez-Romero and Cole (2021).

In humans, structural connectivity is estimated only from diffusion-weighted imaging and is an imperfect measure, subject to its own limitations (Karnath, Sperber, & Rorden, 2018; Maier-Hein et al., 2017). There are difficulties in tractography linked to overlapping fiber bundles that make it hard to identify correct bundle endpoints, and strict correction of incorrect streamlines can rapidly lead to large numbers of false negatives (Maier-Hein et al., 2017). The decision to remove nonhomologous interhemispheric connectivity may also have introduced a few false negatives. Finally, we used a group-averaged SC template for all the subjects, and while individual variability in SC is low (Chamberland et al., 2017), better models may be built using an individualized SC estimate. An important future direction will be to examine the optimal structural connectivity estimate for MEG data.

Source localization can be formulated in several ways based on prior assumptions. While we used a weighted L2 norm inverse, beamformer reconstruction approaches are also quite common in MEG (Brookes et al., 2011; Gross et al., 2001) and require investigation within this framework. Bayesian techniques accounting for priors more explicitly can afford better source reconstruction (Baillet et al., 2001; Wipf & Nagarajan, 2009). Examining these alternative approaches was beyond our scope, but the AGL is equally applicable under them. Additionally, we chose to limit our analysis to an SC with 114 nodes; a future extension of this work might examine cases with more (or fewer) sources. We also ignored, for our purposes, subcortical source activity and connectivity. This may have led to the large variation in the estimated results in the MEG data.
In this example, the alpha and gamma rhythms may not have mapped onto the structural connectome because of strong thalamocortical contributions. Estimation of subcortical activity in MEG, while possible, is difficult without explicit prior knowledge (Krishnaswamy et al., 2017), and would also potentially benefit from including the magnetometer recordings and developing individual subject head models.

Understanding the relevance of different potential constraints, such as source modeling and the structural connectome, on a “big data” measurement technique such as MEG improves our ability to distinguish genuine signal variability from noise. This work developed a simple model derived from the constraint of the structural connectome and demonstrated that we can recover the model parameters in simulations. This method is useful in clinical situations and in cognitive neuroscience for understanding network structure. For example, one could estimate a gamma band partial coherence network in a working memory task to understand which structural connections are most strongly activated. As another example, as we have recently demonstrated in fMRI research (Wodeyar et al., 2020), one could examine the influence of lesions and concomitant structural disconnection on MEG or EEG functional connectivity. Interpreting M/EEG coherence is contingent on building and comparing different models of the data, and we believe our work takes us a significant step in this direction.

Author Contributions

Anirudh Wodeyar: Conceptualization; Formal analysis; Investigation; Methodology; Software; Visualization; Writing – original draft; Writing – review & editing. Ramesh Srinivasan: Conceptualization; Funding acquisition; Methodology; Resources; Supervision; Writing – review & editing.

References

- Abdelnour, F., Voss, H. U., & Raj, A. Network diffusion accurately models the relationship between structural and functional brain connectivity networks.
- Avena-Koenigsberger, A., et al. (2018). Communication dynamics in complex brain networks. Nature Reviews Neuroscience.
- Baccalá, L. A., & Sameshima, K. (2001). Partial directed coherence: A new concept in neural structure determination. Biological Cybernetics.
- Baillet, S. Magnetoencephalography for brain electrophysiology and imaging. Nature Neuroscience.
- Baillet, S., Mosher, J. C., & Leahy, R. M. (2001). Electromagnetic brain mapping. IEEE Signal Processing Magazine.
- Bendat, J. S., & Piersol, A. G. (2011). Random data: Analysis and measurement procedures (Vol. 729). Hoboken, NJ: John Wiley & Sons.
- Functional brain networks: random, “small world” or deterministic? PLoS One.
- Brookes, M. J., et al. (2011). Investigating the electrophysiological basis of resting state networks using magnetoencephalography. Proceedings of the National Academy of Sciences.
- Cabral, J., et al. (2014). Exploring mechanisms of spontaneous functional connectivity in MEG: How delayed network interactions lead to structured amplitude envelopes of band-pass filtered oscillations.
- Cammoun, L., et al. Mapping the human connectome at multiple scales with diffusion spectrum MRI. Journal of Neuroscience Methods.
- On the origin of individual functional connectivity variability: The role of white matter architecture. Brain Connectivity.
- Chu, C. J., et al. (2012). Emergence of stable functional networks in long-term human electroencephalography. Journal of Neuroscience.
- Chu, C. J., et al. EEG functional connectivity is partially predicted by underlying white matter connectivity.
- Colby, J. B., et al. Along-tract statistics allow for enhanced tractography analysis.
- Colclough, G. L., et al. (2016). How reliable are MEG resting-state connectivity metrics?
- Dahlhaus, R. Graphical interaction models for multivariate time series.
- Dale, A. M., & Sereno, M. I. (1993). Improved localization of cortical activity by combining EEG and MEG with MRI cortical surface reconstruction: A linear approach. Journal of Cognitive Neuroscience.
- De Vico Fallani, F., et al. Graph analysis of functional brain networks: Practical issues in translational neuroscience. Philosophical Transactions of the Royal Society B: Biological Sciences.
- Epskamp, S., & Fried, E. I. A tutorial on regularized partial correlation networks. Psychological Methods.
- Finger, H., et al. (2016). Modeling of large-scale functional brain networks based on structural connectivity from DTI: Comparison with EEG derived phase coupling networks and evaluation of alternative methods along the modeling path. PLoS Computational Biology.
- Friedman, J., Hastie, T., & Tibshirani, R. Sparse inverse covariance estimation with the graphical lasso.
- Friston, K. J. Functional and effective connectivity: A review. Brain Connectivity.
- Garcés, P., et al. (2016). Multimodal description of whole brain connectivity: A comparison of resting state MEG, fMRI, and DWI. Human Brain Mapping.
- Gordon, E. M., et al. (2017). Precision functional mapping of individual human brains.
- Gramfort, A., et al. OpenMEEG: Opensource software for quasistatic bioelectromagnetics. BioMedical Engineering OnLine.
- Gross, J., et al. Speech rhythms and multiplexed oscillatory sensory coding in the human brain. PLoS Biology.
- Gross, J., et al. (2001). Dynamic imaging of coherent sources: Studying neural interactions in the human brain. Proceedings of the National Academy of Sciences.
- Hämäläinen, M. S., & Ilmoniemi, R. J. Interpreting magnetic fields of the brain: Minimum norm estimates. Medical & Biological Engineering & Computing.
- Hinne, M., et al. Bayesian estimation of conditional independence graphs improves functional connectivity estimates. PLoS Computational Biology.
- Hsieh, C.-J., et al. Sparse inverse covariance matrix estimation using quadratic approximation. In Advances in Neural Information Processing Systems.
- Huang, S., et al., & the Alzheimer’s Disease NeuroImaging Initiative. Learning brain connectivity of Alzheimer’s disease by sparse inverse covariance estimation.
- Kaminski, M., & Blinowska, K. J. Is graph theoretical analysis a useful tool for quantification of connectivity obtained by means of EEG/MEG techniques? Frontiers in Neural Circuits.
- Karnath, H.-O., Sperber, C., & Rorden, C. (2018). Mapping human brain lesions and their functional consequences.
- Krishnaswamy, P., et al. (2017). Sparsity enables estimation of both subcortical and cortical activity from MEG and EEG. Proceedings of the National Academy of Sciences.
- Maier-Hein, K. H., et al. (2017). The challenge of mapping the human connectome based on diffusion tractography. Nature Communications.
- Maldjian, J. A., Davenport, E. M., & Whitlow, C. T. Graph theoretical analysis of resting-state MEG data: Identifying interhemispheric connectivity and the default mode.
- Malmivuo, J., & Plonsey, R. Bioelectromagnetism: Principles and applications of bioelectric and biomagnetic fields. New York, NY: Oxford University Press.
- Medkour, T., Walden, A. T., & Burgess, A. Graphical modelling for brain connectivity via partial coherence. Journal of Neuroscience Methods.
- Meier, J., et al. A mapping between structural and functional brain networks. Brain Connectivity.
- Meinshausen, N., & Bühlmann, P. High-dimensional graphs and variable selection with the lasso. The Annals of Statistics.
- Menéndez, P., Kourmpetis, Y. A. I., ter Braak, C. J. F., & van Eeuwijk, F. A. Gene regulatory networks from multifactorial perturbations using graphical lasso: Application to the DREAM4 challenge. PLoS One.
- Messé, A., Rudrauf, D., Giron, A., & Marrelec, G. (2015). Predicting functional connectivity from structural connectivity via computational models using MRI: An extensive comparison study.
- Ng, B., et al. A novel sparse graphical approach for multimodal brain connectivity inference. In International Conference on Medical Image Computing and Computer-Assisted Intervention. Berlin, Germany.
- What graph theory actually tells us about resting state interictal MEG epileptic activity. NeuroImage: Clinical.
- Nolte, G., et al. (2004). Identifying true brain interaction from EEG data using the imaginary part of coherency. Clinical Neurophysiology.
- Nunez, P. L., & Srinivasan, R. Electric fields of the brain: The neurophysics of EEG. New York, NY: Oxford University Press.
- Palva, J. M., et al. Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures.
- Pineda-Pardo, J. A., et al. (2014). Guiding functional connectivity estimation by structural connectivity in MEG: An application to discrimination of conditions of mild cognitive impairment.
- Pourahmadi, M. Covariance estimation: The GLM and regularization perspectives. Statistical Science.
- Reid, A. T., et al. (2019). Advancing functional connectivity research from association to causation. Nature Neuroscience.
- Rosenberg, J. R., et al. Identification of patterns of neuronal connectivity: Partial spectra, partial coherence, and neuronal interactions. Journal of Neuroscience Methods.
- Rouhinen, S., et al. Load dependence of β and γ oscillations predicts individual capacity of visual attention. Journal of Neuroscience.
- Roux, F., & Uhlhaas, P. J. Working memory and neural oscillations: Alpha–gamma versus theta–gamma codes for distinct WM information? Trends in Cognitive Sciences.
- Ryali, S., et al. Estimation of functional connectivity in fMRI data using stability selection-based sparse partial correlation with elastic net penalty.
- Sanchez-Romero, R., & Cole, M. W. (2021). Combining multiple functional connectivity methods to improve causal inferences. Journal of Cognitive Neuroscience.
- A general theory of coherence between brain areas.
- Schoonheim, M. M., et al. Functional connectivity changes in multiple sclerosis patients: A graph analytical study of MEG resting state data. Human Brain Mapping.
- Schreier, P. J., & Scharf, L. L. Statistical signal processing of complex-valued data: The theory of improper and noncircular signals. Cambridge, UK: Cambridge University Press.
- Shafto, M. A., et al. (2014). The Cambridge Centre for Ageing and Neuroscience (Cam-CAN) study protocol: A cross-sectional, lifespan, multidisciplinary examination of healthy cognitive ageing. BMC Neurology.
- Siebenhühner, F., et al. Cross-frequency synchronization connects networks of fast and slow oscillations during visual working memory maintenance.
- Smith, S. M., et al. Network modelling methods for fMRI.
- Srinivasan, R., et al. EEG and MEG coherence: Measures of functional connectivity at distinct spatial scales of neocortical dynamics. Journal of Neuroscience Methods.
- Stam, C. J. Modern network science of neurological disorders. Nature Reviews Neuroscience.
- Taulu, S., et al. (2005). Applications of the signal space separation method. IEEE Transactions on Signal Processing.
- Taylor, J. R., et al. (2017). The Cambridge Centre for Ageing and Neuroscience (Cam-CAN) data repository: Structural and functional MRI, MEG, and cognitive data from a cross-sectional adult lifespan sample.
- Ter Wal, M., et al. (2018). Characterization of network structure in stereoEEG data using consensus-based partial coherence.
- Van Essen, D. C., et al., & the WU-Minn HCP Consortium. The WU-Minn Human Connectome Project: An overview.
- Varoquaux, G., et al. Brain covariance selection: Better individual functional connectivity models using population prior. In Advances in Neural Information Processing Systems.
- Whittaker, J. Graphical models in applied multivariate statistics. Chichester, UK: Wiley Publishing.
- Wipf, D., & Nagarajan, S. (2009). A unified Bayesian framework for MEG/EEG source imaging.
- Wodeyar, A., et al. (2020). Damage to the structural connectome reflected in resting-state fMRI functional connectivity. Network Neuroscience.
- Yeh, F.-C., et al. Population-averaged atlas of the macroscale human structural connectome and its network topology.
- A Gaussian graphical model approach to climate networks. Chaos: An Interdisciplinary Journal of Nonlinear Science.
- Zou, H. The adaptive lasso and its oracle properties. Journal of the American Statistical Association.
- Zoubir, A. M., & Boashash, B. The bootstrap and its application in signal processing. IEEE Signal Processing Magazine.

Competing Interests: The authors have declared that no competing interests exist.
Handling Editor: Vince Calhoun

© 2022 Massachusetts Institute of Technology. Published under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.
The latest release of the R ‘pmml’ package adds further support for gradient boosted algorithms, specifically the gbm and xgboost functions. The xgboost conversion will be discussed in a future post; this post concentrates on converting gbm models to the PMML format.

Gradient boosted models gained popularity due to their better ability, relative to random forests, to correctly classify categorical values. The probability of the predicted category is not as accurate as that of random forests, but if the goal is simply to get a predicted level, gradient boosting became the algorithm of choice. Broadly speaking, the algorithm works by training a model of choice to predict the correct category. Once this step is finished, the incorrectly predicted part of the data set is chosen and the same model type is trained again, making the overall prediction more and more accurate. The ‘gbm’ package does just this, and its model of choice is a tree model, a popular choice.

The gbm model converter

In principle, a gbm model is simply a collection of trees, and trees are easily implemented in the PMML language. The conversion is somewhat complicated by the method used to make predictions. Each tree calculates the probabilities of each possible category to be predicted. The probability of each category is summed over all the trees, and the final prediction is the category with the highest probability. Therefore the models are not independent; moreover, each tree is a regression tree. The output of all the trees must be combined to convert a regression output into a classification output.
However, since it is understood that the output is a probability which should be used to infer a category, the model should be a classification model. This could be implemented by defining a regression model chain and applying the appropriate functions to its output to predict a category; however, this is not a very satisfactory solution. We would prefer that a classification model be represented by a classification PMML model. This would also make the PMML model simpler when it is desired to predict the probabilities of categories other than the predicted category. The only requirement of a classification model chain is that the last model in the chain be a classification model, no matter what the intermediate model types are. Our solution was to add a final multinomial regression model to the chain. Although this adds an extra model, it is a very simple model which automatically enables efficient extraction of all category probabilities, especially for a large number of categories. Each tree segment sums up the probabilities of each category with the previous segment. This way, the input to the last segment is the final sum of the probabilities of each category, and it simply normalizes those inputs to probabilities.

These are the details of the model representation; the actual conversion, however, like other functions in the 'pmml' package, is very simple: just apply the pmml function. As an example, let us fit a GBM model using the 'gbm' package and the 'iris' data set.

library(gbm)
model <- gbm(Species~., data=iris, n.trees=2,
             interaction.depth=3, distribution="multinomial")

The gbm function call specifies that 2 trees must be fit and each tree should have a maximum depth of 3. Note that these are not default values, just sample small values. The function also requires that the distribution type of the response variable be given; for a classification model, we picked multinomial.
pmodel <- pmml(model)

Let us look at the input and output of the first model in the chain.

## <MiningSchema>
##   <MiningField name="Species" usageType="predicted"/>
##   <MiningField name="Sepal.Length" usageType="active"/>
##   <MiningField name="Sepal.Width" usageType="active"/>
##   <MiningField name="Petal.Length" usageType="active"/>
##   <MiningField name="Petal.Width" usageType="active"/>
## </MiningSchema>
## <Output>
##   <OutputField name="UpdatedPredictedValue11" optype="continuous" dataType="double" feature="predictedValue"/>
## </Output>

The inputs are as expected, and the output is the predicted probability of the first category by the first tree. Now consider the input and output of the second tree.

## <MiningSchema>
##   <MiningField name="Species" usageType="predicted"/>
##   <MiningField name="Sepal.Length" usageType="active"/>
##   <MiningField name="Sepal.Width" usageType="active"/>
##   <MiningField name="Petal.Length" usageType="active"/>
##   <MiningField name="Petal.Width" usageType="active"/>
##   <MiningField name="UpdatedPredictedValue11" optype="continuous"/>
## </MiningSchema>
## <Output>
##   <OutputField name="TreePredictedValue12" optype="continuous" dataType="double" feature="predictedValue"/>
##   <OutputField name="UpdatedPredictedValue12" optype="continuous" dataType="double" feature="transformedValue">
##     <Apply function="+">
##       <FieldRef field="UpdatedPredictedValue11"/>
##       <FieldRef field="TreePredictedValue12"/>
##     </Apply>
##   </OutputField>
## </Output>

The input now includes the previously calculated probability, and the output contains the running sum of the probability of category 1 from the trees. Next, let us look at the input and output of the 3rd tree.
## <MiningSchema>
##   <MiningField name="Species" usageType="predicted"/>
##   <MiningField name="Sepal.Length" usageType="active"/>
##   <MiningField name="Sepal.Width" usageType="active"/>
##   <MiningField name="Petal.Length" usageType="active"/>
##   <MiningField name="Petal.Width" usageType="active"/>
## </MiningSchema>
## <Output>
##   <OutputField name="UpdatedPredictedValue21" optype="continuous" dataType="double" feature="predictedValue"/>
## </Output>

Since each category was specified to have 2 trees, the third tree turns its attention to the second category. Its inputs are the original inputs, and it outputs the predicted probability of the second category from the 1st tree. In this way, after 2 trees each for 3 categories, that is, 6 trees, the 7th segment is the regression model which normalizes the accumulated values into the final probabilities. One can see this from the inputs and outputs defined in the last model.

## <MiningSchema>
##   <MiningField name="UpdatedPredictedValue12"/>
##   <MiningField name="UpdatedPredictedValue22"/>
##   <MiningField name="UpdatedPredictedValue32"/>
## </MiningSchema>
## <Output>
##   <OutputField name="Predicted_Species" feature="predictedValue"/>
##   <OutputField name="Probability_setosa" optype="continuous" dataType="double" feature="probability" value="setosa"/>
##   <OutputField name="Probability_versicolor" optype="continuous" dataType="double" feature="probability" value="versicolor"/>
##   <OutputField name="Probability_virginica" optype="continuous" dataType="double" feature="probability" value="virginica"/>
## </Output>

We see that although the actual PMML representation of a gbm object is not straightforward, the conversion itself is the usual, simple application of the pmml function, a one-line command.
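The arithmetic the chain performs can also be sketched outside of PMML. The following Python snippet is an illustration only: the per-tree scores are hypothetical values, not taken from the model above. It accumulates per-category tree outputs the way the chained segments do, then normalizes the sums with a softmax, as the final multinomial regression segment does.

```python
import math

# Hypothetical raw scores contributed by each tree; in the PMML above these
# correspond to the TreePredictedValue/UpdatedPredictedValue fields
# (2 trees per category).
tree_scores = {
    "setosa":     [1.2, 0.9],
    "versicolor": [0.3, 0.4],
    "virginica":  [0.1, 0.2],
}

# Each tree segment adds its output to the running sum for its category,
# so the last segment receives one accumulated value per category.
totals = {cat: sum(scores) for cat, scores in tree_scores.items()}

# The final multinomial regression segment normalizes the accumulated
# values into probabilities (a softmax) and predicts the largest one.
z = sum(math.exp(v) for v in totals.values())
probs = {cat: math.exp(v) / z for cat, v in totals.items()}
predicted = max(probs, key=probs.get)

print(predicted)                       # setosa
print(round(sum(probs.values()), 10))  # 1.0
```

This mirrors why the last segment can be so simple: all the summing has already happened inside the tree segments, so it only has to normalize three numbers.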
Polymorphic Function Types

A polymorphic function type is a function type which accepts type parameters. For example:

// A polymorphic method:
def foo[A](xs: List[A]): List[A] = xs.reverse

// A polymorphic function value:
val bar: [A] => List[A] => List[A]
//       ^^^^^^^^^^^^^^^^^^^^^^^^^
//       a polymorphic function type
       = [A] => (xs: List[A]) => foo[A](xs)

Scala already has polymorphic methods, i.e. methods which accept type parameters. Method foo above is an example, accepting a type parameter A. So far, it was not possible to turn such methods into polymorphic function values like bar above, which can be passed as parameters to other functions, or returned as results. In Scala 3 this is now possible. The type of the bar value above is

[A] => List[A] => List[A]

This type describes function values which take a type A as a parameter, then take a list of type List[A], and return a list of the same type List[A].

Example Usage

Polymorphic function types are particularly useful when callers of a method are required to provide a function which has to be polymorphic, meaning that it should accept arbitrary types as part of its inputs.

For instance, consider the situation where we have a data type to represent the expressions of a simple language (consisting only of variables and function applications) in a strongly-typed way:

enum Expr[A]:
  case Var(name: String)
  case Apply[A, B](fun: Expr[B => A], arg: Expr[B]) extends Expr[A]

We would like to provide a way for users to map a function over all immediate subexpressions of a given Expr. This requires the given function to be polymorphic, since each subexpression may have a different type.
Here is how to implement this using polymorphic function types:

def mapSubexpressions[A](e: Expr[A])(f: [B] => Expr[B] => Expr[B]): Expr[A] =
  e match
    case Apply(fun, arg) => Apply(f(fun), f(arg))
    case Var(n) => Var(n)

And here is how to use this function to wrap each subexpression in a given expression with a call to some wrap function, defined as a variable:

val e0 = Apply(Var("f"), Var("a"))
val e1 = mapSubexpressions(e0)(
  [B] => (se: Expr[B]) => Apply(Var[B => B]("wrap"), se))
println(e1) // Apply(Apply(Var(wrap),Var(f)),Apply(Var(wrap),Var(a)))

Relationship With Type Lambdas

Polymorphic function types are not to be confused with type lambdas. While the former describes the type of a polymorphic value, the latter is an actual function value at the type level. A good way of understanding the difference is to notice that type lambdas are applied in types, whereas polymorphic functions are applied in terms: one would call the function bar above by passing it a type argument, bar[Int], within a method body. On the other hand, given a type lambda such as type F = [A] =>> List[A], one would call F within a type expression, as in type Bar = F[Int].
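To make the contrast concrete, here is a short sketch (not part of the original page) that uses both side by side; the names F, IntList, and id are illustrative only:

```scala
// A type lambda is a function at the type level: it is applied in types.
type F = [A] =>> List[A]
type IntList = F[Int]            // type-level application; IntList is List[Int]

// A polymorphic function is a value: it is applied in terms.
val id: [A] => A => A = [A] => (a: A) => a

val xs: IntList = List(1, 2, 3)  // checked as List[Int] by the compiler
val n: Int = id[Int](42)         // term-level application
val s: String = id[String]("hi")
```

Note that erasing the type parameter in id's body leaves an ordinary identity lambda; the type parameter exists only to relate the argument and result types at each call site.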