content
stringlengths
86
994k
meta
stringlengths
288
619
8 ft in metres Disclaimer: Whilst every effort has been made in building our calculator tools, 8 ft in metres, we are not to be held liable for any damages or monetary losses arising out of or in connection with their use. Full disclaimer. To convert feet to meters, multiply your figure by 0. Welcome to 8 feet in meters , the conversion from the US customary unit foot ft to the fundamental unit of length in the metric system, the meter m. Thus, to obtain 8 ft in meters we have to multiply 8 by 0. In the next part of eight feet in meters, we will provide you with the equivalent of 8 feet in other metric units, and then give you instructions regarding the search form in the sidebar. Perhaps you are also interested in learning what 8 feet in other common metric units of length, in millimeter mm , centimeter cm and decimeter dm , is. Now, we would like to take the opportunity to tell you how to use the search box in the sidebar to locate a conversion like 8 foot meters. Else, let us us know by means of filling in the comment form at the bottom, or get in touch by sending an email with the subject line convert eight feet to meters for example. 8 ft in metres To contact us, please click HERE. Therefore, the capacitance of the capacitor is 2. Note that the resulting value may have to be rounded to a practical or standard value, depending on the application. Note: Values are rounded to 4 significant figures. Fractions are rounded to the nearest 8th fraction. Conversely, to convert from these other units of length to metres, you would use the appropriate conversion factor, either by multiplying or dividing the original quantity by the factor. In summary, the meter is a unit of length in the SI system and is commonly used to measure distance and length in a variety of contexts. Its basis in units of 10 makes it easy to convert to other units of length. A typical human arm span A meter stick or yardstick A bicycle frame size A large pizza A three-foot 1 meter long fish A standard kitchen countertop A medium-sized dog A basketball hoop height A typical pool cue length A standard walking cane A small ladder or step stool Microwaves of GHz have a wavelength of 1 mm. Despite efforts to provide accurate information on this website, no guarantee of its accuracy is made. Therefore, the content should not be used for decisions regarding health, finances, or property. Volume Resizer for Recipes New! Note: Values are rounded to 4 significant figures. One foot is exactly equal to 12 inches. It is also exactly equal to 0. One yard is comprised of three feet. The foot is just behind the metre in terms of widespread use due to its previous popularity. The US is the only developed country that still uses the foot in preference to the metre. Originally defined in as one ten-millionth of the distance from the equator to the North Pole but was redefined in in terms of a prototyped metre bar this bar was changed again in Please provide values below to convert foot [ft] to meter [m], or vice versa. Definition: A foot symbol: ft is a unit of length in the imperial and US customary systems of measurement. A foot was defined as exactly 0. One foot contains 12 inches, and one yard is comprised of three feet. The various lengths were due to parts of the human body historically being used as a basis for units of length such as the cubit, hand, span, digit, and many others, sometimes referred to as anthropic units. This resulted in the measurement of a foot varying between mm and mm in the past compared to the current definition of While the United States is one of the few, if not only, countries in which the foot is still widely used, many countries used their own version of the foot prior to metrication, as evidenced by a fairly large list of obsolete feet measurements. 8 ft in metres Welcome to the Omni ft to m converter, a convenient tool to help you convert feet to meters. You can also use the converter backward to perform m to ft conversion! Have you ever had to think twice about the length of an object because you were unsure how to convert ft to m? You do not need to second-guess anymore; wait no longer: scroll down and read on to find out! To convert ft to m, you need to multiply your length value by 0. To convert m to ft, you need to multiply your length value by 3. Meaningful behind ear tattoos How many meters in 8 ft? Round decimal places : 1 2 3 4 5 Convert. Email me when someone replies to my comment. It is equal to 12 inches and approximately 0. In the next part of eight feet in meters, we will provide you with the equivalent of 8 feet in other metric units, and then give you instructions regarding the search form in the sidebar. From abacus to iPhones, learn how calculators developed over time. Use this conversion tool and chart to convert between feet and meters and feet, inches and meters. Fractions are rounded to the nearest 8th fraction. Contact Us! Additional details related to the units involved in the 8 feet meters conversion can be found on our home page, along with references to further readings, if you have such a need. Even though the server responded OK, it is possible the submission was not processed. Note: Values are rounded to 4 significant figures. Definition of Foot A foot is a commonly used unit of length in the United States and other countries. Please share. Note: You can increase or decrease the accuracy of this answer by selecting the number of significant figures required from the options above the result. To convert feet to meters, you can use a simple conversion factor. Perhaps you are also interested in learning what 8 feet in other common metric units of length, in millimeter mm , centimeter cm and decimeter dm , is. Feet to Meters Meters to feet Feet to meters Feet to meters feet in. To contact us, please click HERE. Bar Chart height chart. Note that you can convert between feet and inches here. Quick Rough Maths. Its basis in units of 10 makes it easy to convert to other units of length. From abacus to iPhones, learn how calculators developed over time. Full disclaimer. The US is the only developed country that still uses the foot in preference to the metre. 2 thoughts on “8 ft in metres” 1. What words... 2. Casual concurrence
{"url":"https://aleatha.pl/8-ft-in-metres.php","timestamp":"2024-11-03T03:16:41Z","content_type":"text/html","content_length":"27546","record_id":"<urn:uuid:fdd01985-0e95-479c-9ade-bb5526219d28>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00163.warc.gz"}
Day 7 of 31-Day May LeetCode Challenge In a binary tree, the root node is at depth 0, and children of each depth k node are at depth k+1. Two nodes of a binary tree are cousins if they have the same depth, but have different parents. We are given the root of a binary tree with unique values, and the values x and y of two different nodes in the tree. Return true if and only if the nodes corresponding to the values x and y are cousins.
{"url":"https://aanchalpatial.medium.com/day-7-of-31-day-may-leetcode-challenge-ce293bedabd9","timestamp":"2024-11-13T07:42:53Z","content_type":"text/html","content_length":"88920","record_id":"<urn:uuid:f078515e-de2d-41d5-aaff-ed4b4a1bd873>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00670.warc.gz"}
Interpreting your ALEKS online math placement score Last revised May 15, 2024 • ALEKS scores should be interpreted with the assistance of your academic advisor. You will be meeting with an advisor during orientation. • Course selection depends not just on your ALEKS score but on many other factors. These include: prior mathematics preparation, SAT/ACT scores (if available), extracurricular commitments, requirements of the school (Business, the College, etc.) you want to enter, the requirements of the major and/or minor (Finance, History, etc.) you think you might pursue, your personal interests, and your long-term career goals. • The ALEKS score is a number between 1 and 100 and is interpreted as a percentage correctly. • A higher ALEKS score is evidence that you have mastered more math concepts. • The topics covered by ALEKS include precalculus, but not calculus itself. • If you have taken a calculus class, but do not yet have credit for Math M211, Calculus 1, then you should also take the “Calculus placement test” during orientation. Example: you took the AP AB test the summer before entering IU. Your AP score is not yet known when you arrive for orientation. You should take the calculus placement test. • The table below shows both minimum required and recommended ALEKS scores needed for you to be qualified to enroll in the indicated course. • These placement guidelines are the result of analysis of the performance of thousands of students over several years. • With the exception of M106, D116/117, and B 110, you are not prevented from registering for a course even though your ALEKS score falls below the minimum required. • You are allowed up to 3 attempts at the ALEKS test in the first year. Between attempts, you will be required to complete “practice modules” to review skills. There is no additional fee for additional attempts during the initial year. After one year, email bestexam@indiana.edu in order to retake the ALEKS. Any retakes after the initial year incur the online placement exam fee. • Math courses numbered “M XX” (example: M 18) are designed specifically to prepare students for 100 and 200 level courses. Work in ALEKS practice modules does not give the same amount of practice as work in courses. • Your advisor will play a key role in explaining which of the many options is best for you. ALEKS scores should be interpreted with the assistance of your academic advisor. You will be meeting with an advisor during orientation.
{"url":"https://math.indiana.edu/undergraduate/aleks-online-math-placement-score.html","timestamp":"2024-11-03T03:45:48Z","content_type":"text/html","content_length":"46219","record_id":"<urn:uuid:d778c765-e88d-4a03-95f8-2ec4bc8cea0e>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00512.warc.gz"}
(PDF) One Hammer, Different Nails – A Note on the Confusing Sociologists’ Debate on Comparing Coefficients in Logistic Regression ... 34 35 Therefore, we will not report on the actual coefficients (although these can be found in online supplementary appendix B), but instead use the gologit2 output to compute non-linear probability models. 30 While marginal effects and predicted probabilities are not immune to unobserved heterogeneity, 36 they are considered less sensitive to changes in the model specification than ORs. 34 All predicted probabilities are derived following marginal standardisation, that is, as the average effect of sector on quality, as opposed to the effect of sector on quality on average (ie, prediction at the means). ...
{"url":"https://www.researchgate.net/publication/304567615_One_Hammer_Different_Nails_-_A_Note_on_the_Confusing_Sociologists'_Debate_on_Comparing_Coefficients_in_Logistic_Regression","timestamp":"2024-11-08T05:28:32Z","content_type":"text/html","content_length":"573098","record_id":"<urn:uuid:4b329617-215f-460a-88d8-2adf112373cc>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00730.warc.gz"}
area of a rectangle % of people told us that this article helped them. However, not all rectangles are squares. Learn more... A rectangle is a quadrilateral[1] Example 4 : If the length of each diagonal of a rectangle is 13 cm and its width is 12 cm, then find the area of the rectangle. Perimeter: The perimeter of a rectangle is … But we will separate the Java Area of a rectangle logic and place it in a method. There is no formula for finding the area of an irregular shape. How do I find the area of a rectangle when each side is a different length? Area of a parallelogram given sides and angle. Search for: Contact us. Primary Study Cards. Then, multiply the base by the height of the rectangle to get the area. Previous Area of a Parallelogram Practice Questions. The area of a rectangle depends on its sides. Q1 squares given to count, Q2 dimensions given to use formula. Click here for Answers . There is no such thing as an irregular rectangle. Solution for The area of a rectangle is 64 cm squared and the length is four times it’s width . Rectangle To find the area of a rectangle, multiply the length by the width. Find the area of the smaller rectangle The formula for the area of a rectangle is width x height, as seen in the figure below: All you need are two measurements and you can calculate its perimeter by hand, or by using our perimeter of a rectangle calculator above. Don't forget to express the result in the squared unit, and you are good to go. Use this calculator if you know 2 values for the rectangle, including 1 side length, along with area, perimeter or diagonals and you can calculate the other 3 rectangle variables. Store it in two different variables say length and width. Area of a square. Area of Rectangle - The area is usually measured in units like square meters, square feet, or square inches. OUTPUT. In other words, the area of a rectangle … Learn and revise how to work out the area of two dimensional shapes such as squares, rectangles and triangles with BBC Bitesize KS3 Maths. By using our site, you agree to our. Using those values we will calculate the Area of a rectangle and perimeter of a rectangle. To calculate the area, simply multiply the length of the rectangle by the height. The answer here is 12. Area of a rectangle. Solve either equation for one of the unknowns. Since a rectangle is comprised of four right angles, the diagonal that cuts through the shape will create a right triangle, so you can apply the Pythagorean theorem. Each of these squares has an area of 1 square inch, and in the rectangle, there are 18 such squares. You cannot find an area in meters, because area is expressed in square meters. Example 1. What is the area of the rectangle 1 Comment. If it's not a square, you can't, unless there's more information available than you're letting on (like an equation to calculate the length of the other side). Basically, the formula for area is equal to the product of length and breadth of the rectangle. Include your email address to get a message when this question is answered. A rectangle’s diagonal is the diameter of its circumcircle. Hence, we can say, the region enclosed by the perimeter of the rectangle is its area. Last Updated: October 13, 2020 Finding Area: Type 1 - Integers. For example, if the area is 60 and the width is 5, your equation will look like this: 60 = x*5. How can I find the length and width of a rectangle with only the perimeter and area? https://www.gigacalculator.com/calculators/area-of-rectangle-calculator.php. Using those values, this python program will calculate the Area of a rectangle and perimeter of a rectangle. To find the area of a rectangle, multiply the length by the width. In the next lines, values are assigned to these variables. Area of a rhombus. There are 10 references cited in this article, which can be found at the bottom of the page. Using those two values, it finds the area of a rectangle. Area of a cyclic quadrilateral. Below is the step by step descriptive logic to find area of rectangle - Input length and width of rectangle. Using this calculator, we will understand the algorithm of how to find the perimeter, area and diagonal length of a rectangle. Name: Super Teacher Worksheets - www.superteacherworksheets.com Area of a Rectangle Find the area of each rectangle. A rectangle is a quadrilateral with four right angles. If you'd like to cite this online calculator resource and information as provided on the page, you can use the following citation: Georgiev G.Z., "Area of a Rectangle Calculator", [online] Available at: https://www.gigacalculator.com/calculators/area-of-rectangle-calculator.php URL [Accessed Date: 24 Jan, 2021]. 3 years ago. Previous Area of a Parallelogram Practice Questions. A = area √ = square root Calculator Use. Find the area of a rectangular field 20 m long and 10 m wide. The area of the rectangle is the length multiplied by the width. rectangles. This is how the human body surface area was calculated as well: subject's bodies were covered in stripes of paraffin, which were then removed, sliced and measured. You can write your final answer in one of two ways: either 20 cm.sq. To quickly calculate the area of a rectangle, find the length of the base. This article has been viewed 486,131 times. Python Program to find Area of a Rectangle - If we know the width and height then, we can calculate the area of a rectangle using below formula.Area = Width * Height.Perimeter is the distance around the edges. Please consider making a contribution to wikiHow today. wikiHow is where trusted research and expert knowledge come together. Apply formula to calculate rectangle area i.e. Our online calculators, converters, randomizers, and content are provided "as is", free of charge, and without any warranty or guarantee. The area of a rectangle can be calculated by multiplying the lengths of two adjacent sides. In the last line we print the area of Rectangle to the screen. In this non-linear system, users are free to take whatever path through the material best serves their needs. Area of a quadrilateral. Width is given in the question, that is 12 cm. Area of a rectangle: The area of a rectangle is the product of its two adjacent sides . If you really can’t stand to see another ad again, then please consider supporting our work with a contribution to wikiHow. So, find its length. To find the area of a rectangle, multiply the length by the width, different types of polygons, some of them being triangles, quadrilaterals, pentagons hexagons etc. Practice Questions; Post navigation. This is a "two equations in two unknowns" situation. Area of a rectangle = Length × Width . How can I find the area of a rectangle if I only know the diagonal? Name: Super Teacher Worksheets - www.superteacherworksheets.com Area of a Rectangle Find the area of each rectangle. Area of a rectangle. This calculator will calculate the square footage of a rectangle given the length of its sides. Finding Area: Type 1 - Integers. Each of these squares has an area of 1 square inch, and in the rectangle, there are 18 such squares. This crossword clue ___ times width (area of a rectangle) was discovered last seen in the December 29 2020 at the Crosswords With Friends Crossword. Convert among square inch, square foot, square yard and square meter You could, for example, perform all of your measurements in inches or centimeters, calculate area in square inches or square centimeters then convert your final answer to the unit you need such as square feet or square meters. Write two equations, one for the area and one for the perimeter, both in terms of length and width. Thus, a square of side length L L will have an area of A = L \times L = L^2. If you need to find the area if you only know the area or the length of 1 side and a diagonal, keep reading the article! The area A of the rectangle is given by A = L × W = 30 × 25 = 750 square feet (or ft 2) Example 3 The width of a rectangle is 20 meters and its length is five fourths of its width. The formula of the Area of Rectangle When you know the width and height. The area of a rectangle is equal to its length multiplied by its width. If you're finding area, your answer will always be squared. If the problem says length is 105 breadth is 81 and other said of length is 103 and breadth is 53, how do I find the area of the rectangle? To find the area of a rectangle, multiply the length by the width. The formula is: Area = w × h w = width h = height. Area Of Rectangle Formula The area of a rectangle is written as, A=l*b where, A=Area l=length b=breadth Area Of Rectangle Example Program The area is calculated by multiplying the length of the rectangle by its width. Area of a rhombus. Substitute the values of length and width in the formula (Area = length * width) to compute the area of each rectangle. It may help some children to picture this concept by using a grid and counting each of the squares inside. Side of polygon given area. It is one of the simplest shapes, and … When you have a cube, finding the area of one face allows you to find the total surface area of the solid very quickly, since it will be six times the area of one face. Find Area of Rectangle using Function. Area of a parallelogram given sides and angle. Whereas when we speak about the perimeter of a rectangle, it is equal to the sum of all its four sides. Since a rectangular box or tank has opposite sides which are equal, we calculate each unique side's area, then add them up together, and finally multiply by two to find the total surface area. Find the length and the width wikiHow is a “wiki,” similar to Wikipedia, which means that many of our articles are co-written by multiple authors. Irregularly shaped areas are often divided into several rectangles when one needs to calculate their area, but can't to a precise calculation. Calculating the Area of a Rectangle. These unique features make Virtual Nerd a viable alternative to private tutoring. That is, A = l x w where l is the length and w is the width of the rectangle. See our full terms of service. X Each tool is carefully developed and rigorously tested, and our content is well-sourced, but despite our best effort it is possible they contain errors. References. Use this calculator if you know 2 values for the rectangle, including 1 side length, along with area, perimeter or diagonals and you can calculate the other 3 rectangle variables. Going back to our example, the area of the bottom face is 4 inches x 3 inches = 12 square inches. To find the area of a rectangle, multiply the length by the width, different types of polygons, some of them being triangles, quadrilaterals, pentagons hexagons etc. To create this article, 31 people, some anonymous, worked to edit and improve it over time. Every day at wikiHow, we work hard to give you access to instructions and information that will help you live a better life, whether it's keeping you safer, healthier, or improving your well-being. The object you have is an irregular quadrilateral. https:/ /www.mathsisfun.com/quadrilaterals.html, https://www.khanacademy.org/math/basic-geo/basic-geo-area-and-perimeter/area-formula-intuition/e/finding-area-by-multiplying, https://www.khanacademy.org/math /basic-geo/basic-geo-area-and-perimeter/area-formula-intuition/v/transitioning-from-counting-to-multiplying-to-find-area-3rd-grade-khan-academy, https://www.mathsisfun.com/pythagoras.html, https:// www.mathopenref.com/rectanglediagonals.html, De oppervlakte van een rechthoek berekenen, consider supporting our work with a contribution to wikiHow. Area of a rectangle calculations have a vast array of practical applications: construction, landscaping, internal decoration, architecture, engineering, physics, and so on and so forth. We are not to be held responsible for any resulting damages from proper or improper use of the service. by Marc. The midpoints of the sides of any quadrilateral with perpendicular diagonals form a rectangle. Primary Study Cards. Next Area of a Semi-Circle Practice Questions. Example 4 : If the length of each diagonal of a rectangle is 13 cm and its width is 12 cm, then find the area of the rectangle. Draw a sketch. 04, Oct 18. Where l is length and w is width of the rectangle. The lengths of one pair of adjacent sides: This choice is false, as is directly stated above. Your support helps wikiHow to create more in-depth illustrated articles and videos and to share our trusted brand of instructional content with millions of people all over the world. Area of a trapezoid. Next Area of a Semi-Circle Practice Questions. The surface area of a rectangular prism is the total area of all six faces. The theorem is: a^2 + b^2 = c^2, where a and b are sides of the triangle and c is the hypotenuse, or longest side. It has 4 sides (edges) It has 4 corners (vertices) It has 4 right angles . It means, User will enter the width and height of a rectangle. By definition, the object in which you are describing is not a rectangle. Amid the current public health and economic crises, when the world is shifting dramatically and we are all learning and adapting to changes in daily life, people need wikiHow more than ever. This article has been viewed 486,131 times. 2,374 views This simple program calculates the area of a rectangle. Therefore, the area of a rectangle formula states that: Area of rectangle = Length x Width A = L * W, where A is the area, L is the length, W is the width or breadth. How do I calculate the area of an irregular rectangle? It is either an irregular shape or a trapezoid. A rectangle in the plane can be defined by five independent degrees of freedom consisting, for example, of three for position (comprising two of translation and one of rotation), one for shape (aspect ratio), and one for overall size (area). A rectangle has 2 sides of equal length and 2 sides of equal width (breadth). All squares are rectangles. How do I find the area of a quadrilateral? All tip submissions are carefully reviewed before being published. Next, We are Calculating the perimeter and Area of a rectangle as we described in our first example. In other words, the area of a rectangle is the product of its length and width. The area of a trapezoid is found by multiplying its height by the average of its bases. Area of a circle. The area of any rectangular place is or surface is its length multiplied by its width. We examine each of the choices. Rectangle. There are different geometrical closed shapes that exist namely square, rectangle, triangle, circle, etc. Solution : To find the area of a rectangle, we have to know its length and width. Since in multiplication the order in which the numbers are multiplied does not matter, you need not worry about switching the places of the two measurements. Area of a parallelogram given base and height. Virtual Nerd's patent-pending tutorial system provides in-context information, hints, and links to supporting tutorials, synchronized with videos, each 3 to 7 minutes long. There are commonly available area formulas for a square, rectangle, parallelogram, rhombus, and trapezoid. Where l is length and w is width of the rectangle. Maximum area of a Rectangle that can be circumscribed about a given Rectangle of size LxW. Examples of Surface area of a Rectangle. Plug that value back into either equation, which will give you the value of the other unknown. We know w = 5 and h = 3, so: Area = 5 × 3 = 15 area = length * width. This Python program allows user to enter the length and width of a rectangle. To find the area of a rectangle you need to multiply the length and the width of a rectangle. Area & Perimeter of a Rectangle calculator uses length and width of a rectangle, and calculates the perimeter, area and diagonal length of the rectangle. Introduce the area of rectangles to young learners by printing this set of practice worksheets. Area of a quadrilateral. Area of a regular polygon. Note that the single hash marks on the wide sides of the rectangle mean that the two widths have equal length. For other area shapes, see formulas below to calculate Area (ft 2) = Square Footage. The crossword clue possible answer is available in 6 letters.This answers first letter of which starts with L and can be found at the end of H. 5-a-day Workbooks. GCSE Revision Cards. The length of each diagonal, in case a and b are the sides of a rectangle, is d = √a2 + b2, and the diagonals bisect each other at different angles. Also, explore the surface area or volume calculators, as well as hundreds of other math, finance, fitness, and health calculators. Code: Radius of circle given area. But this program uses a user-defined function areaOfRect() to find the area value of rectangle.. Be sure to include the units of the measurements in your answer. Answer: The area of the rectangle is W X L = 7 x 5 cm 2 = 35 cm 2. The surface area formula for a rectangular box is 2 x (height x width + width x length + height x length), as seen in the figure below:. All of the choices given lists sufficient information, with one exception. area = length * width. Within this area of rectangle program, First, We defined the function with two arguments using def keyword. In most cases, you will be given the … Store it in two different variables say length and width. wikiHow is a “wiki,” similar to Wikipedia, which means that many of our articles are co-written by multiple authors. The area of any rectangular place is or surface is its length multiplied by its width. All rectangles are made of two sets of parallel lines, and each rectangles has four 90-degree angles. By signing up you are agreeing to receive emails according to our privacy policy. The area of a rectangle is the amount of space inside it. Area (bottom edge) = length times width = lw. To find the area of a square, multiply the length of one side by itself. We know ads can be annoying, but they’re what allow us to make all of wikiHow available for free. Research source The length is 7 m and the width is 3 m. The area of the rectangle is 3 mm × 7 mm = 21 mm 2.; This means that 21 individual 1 mm 2 squares can fit inside the rectangle. means "right angle" are equal sides: For example, a rectangle with a base of 6 and a height of 9 has an area of 54. C# Calculate Rectangle Area. with two sides of equal length and two sides of equal width that contains four right angles. TIP: If we know the length & width of a rectangle. In this article, we will mainly be focusing on the understanding of the area of rectangle formula with some practical examples. The rectangle that depicts the probability that a randomly selected child will be between 6.5 and 8.6 years old has base from 6.5 to 8.6 and the same height as the larger rectangle. This free area calculator determines the area of a number of common shapes using both metric units and US customary units of length, including rectangle, triangle, trapezoid, circle, sector, ellipse, and parallelogram. The rectangle can be shown on top of a 10x10 grid with shaded inside the rectangle. {"smallUrl":"https:\/\/www.wikihow.com\/images\/thumb\/d\/d1
{"url":"http://bil-service.nu/monty-python-vebah/8c4f02-area-of-a-rectangle","timestamp":"2024-11-04T21:55:25Z","content_type":"text/html","content_length":"50328","record_id":"<urn:uuid:b1dbc7e3-4043-43d3-a9b0-ed7e3a0cd383>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00883.warc.gz"}
In-Depth Performance Analysis and Comparison of Monolithic and Particulate Zwitterionic Hydrophilic Interaction Liquid Chromatography Polymer Columns Department for Pharmaceutical and Pharmacological Sciences, Pharmaceutical Analysis, University of Leuven (KU Leuven), Herestraat 49, 3000 Leuven, Belgium Institute of Pharmaceutical Analysis, College of Pharmacy, Jinan University, Guangzhou 510632, China Department of Chemical Engineering, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussel, Belgium Author to whom correspondence should be addressed. Submission received: 27 February 2023 / Revised: 20 March 2023 / Accepted: 20 March 2023 / Published: 23 March 2023 The kinetic performance of different zwitterionic hydrophilic interaction liquid chromatography polymer columns is evaluated and compared in-depth. For this purpose, two lab-made monolithic columns, synthesized with different crosslinkers, and a commercial particle packed column are considered. It is found that performance evaluation techniques, such as comparing plate height curves or fitted A-, B- and C-terms, obtained by fitting experimental plate height data to a plate height model, are complicated by the determination of a reliable characteristic length. This is due to the very different morphology of these column types, and the heterogeneity of the monolithic columns. The occurrence of a convective flow through the packed particle column further complicates the interpretation of the obtained fitting parameters, as part of the C-term is wrongfully attributed to the A-term. Therefore, the use of the kinetic plot method is suggested for the comparative evaluation of these columns, as kinetic plots do not require the determination of a characteristic length, nor rely on any fitting parameters. With the kinetic plot method, it is demonstrated that the lab-made monolithic columns outperform the packed particle column for plate counts between 10,000 and 800,000. This is attributed to the higher column efficiency of these columns, due to their small domain and skeleton sizes, and their high permeability, resulting from their high external porosity and the occasional occurrence of preferential flow paths. 1. Introduction Modern liquid chromatography techniques have recently gained a lot of popularity due to their extended separation capabilities for complex samples [ ]. Among these, hydrophilic interaction liquid chromatography (HILIC) has been used for the analysis of polar compounds, such as metabolites and degradation products [ ], carbohydrates and aminoglycosides [ ], amino acids, peptides and proteins [ ], for which reversed-phase liquid chromatography (RPLC) is largely inadequate. This popularity of HILIC is in part related to the introduction of commercially available stationary phases with a broad variety of chemistries that allow an adequate tuning of the separation selectivity. In HILIC, the mobile phase typically consists of a large amount of organic solvent, such as acetonitrile (ACN), to which a small amount of water or aqueous buffer is added. The polar stationary phase will absorb water from the mobile phase to form a water-rich layer at its surface. Analytes can then partition from the mobile phase into this water-rich layer for retention [ ]. Other retention mechanisms such as ionic, adsorption and hydrophobic interactions are also possible, depending on the analyte, the stationary phase type and the mobile phase [ Examples of typical HILIC stationary phases are bare silica, amide, amino, cyano, diol and zwitterionic phases [ ]. Zwitterionic stationary phases carry both positively and negatively charged functional groups, such as sulfoalkylbetaine or amino-phosphate functional groups, attached to a suitable backbone. In accordance with recent developments in RPLC, HILIC stationary phase carriers have undergone a similar evolution, with the recent introduction of small (sub-2 µm) particle packed columns, core–shell particles and monolithic structures [ ]. HILIC stationary phases can, moreover, be silica or polymer-bonded, where polymer-bonded columns can be used over a larger pH-range with respect to silica-bonded columns. An often-cited downside of polymer columns is their low mass transfer for small molecules, due to their low proportion of mesopores, severely restricting diffusion. Whereas several studies have been devoted to the fundamental comparison of the kinetic performance of particulate and monolithic silica columns, much fewer studies have compared the kinetic performance of their polymer counterparts. Customarily, band broadening in a packed (monolithic or particulate) bed can be described by the general plate height model [ $H = H i n h o m + 2 D e f f u i 1 + k ″ + 2 α k ″ 2 1 + k ″ 2 ε e 1 − ε e d 2 u i S h m D m + 2 α k ″ 1 + k ″ 2 d 2 u i S h p z D p z$ With H the plate height; u the interstitial velocity; D , D and D the effective, porous zone, and bulk molecular diffusion coefficients, respectively; k″ the zone retention factor; ε the external porosity; α a geometrical constant; Sh and Sh the Sherwood numbers relating to the mobile zone and the porous zone, respectively, and d a characteristic length. In Equation (1), the first term ( $H i n h o m$ ) relates to band broadening originating from flow heterogeneities in the bed (traditionally referred to as eddy dispersion). The second term ( -term) represents the effective longitudinal diffusion. The third and fourth terms ( -, and -terms) are the resistance to mass transfer in the mobile and stationary zones, respectively. The zone retention factor is calculated as: With t the analyte retention time, L the column length and u the interstitial velocity: With F the flow rate and r the radius of the column. For particle packed columns, the characteristic length d is typically equal to the particle size d , while for monolithic columns, several measures can be used, such as the domain size (d ), the throughpore size (d ) and the skeleton size (d ). Note that: $d d o m = d t p + d s k e l$ In the present study, the performance of two monolithic zwitterionic polymer HILIC columns (poly(SPE-co-EDMA) and poly(SPE-co-MBA) monolithic column, respectively) is compared with that of a packed particle zwitterionic polymer HILIC column (ZIC-pHILIC). Previous work demonstrated that these columns display very low B-term coefficients (or, equivalently, very low effective diffusion coefficients D ) and intra-particle diffusion coefficients (D , where “pz” stands for “mesoPorous Zone”), which was attributed to very low surface diffusion rates, a strongly hindered diffusion in the polymer backbone, slow localized adsorption events, or a combination thereof [ The present study aims to further look into the effect of these very low diffusion coefficients on the overall performance of the two column formats and compare and analyze their kinetic performance in more detail. 2. Results and Discussion 2.1. Geometrical Characterization SEM images obtained for the different columns are shown in Figure 1 . Both monolithic columns exhibited uniform spherical microglobules agglomerated into larger structures. The microglobules of the poly(SPE-co-EDMA) monolithic column ( Figure 1 a) were smaller and seemingly had more voids between them compared to the poly(SPE-co-MBA) monolithic column ( Figure 1 b). The latter exhibited a more crowded, cauliflower-type morphology, that visually seemed more homogeneous compared to the poly(SPE-co-EDMA) column. Additionally, both monolith types displayed a large number of oversized voids, that were randomly positioned. It is not entirely clear whether these voids were inherently present in the monolithic structures, or whether they were a consequence of their preparation for taking the SEM pictures, in which case a piece of 0.5 cm was cut off each monolith. The domain size (d ) of the monoliths was determined from the skeleton size (d ) and the throughpore size (d ), according to Equation (4), by analyzing at least 100 of each for each monolith, as indicated in Figure 1 . Note that for d , intermediate-sized, repeating units were considered, while for the determination of d , the largest voids were omitted (as can for example be observed on the left side of Figure 1 a). The average d and d of the poly(SPE-co-EDMA) monolithic column were in this way estimated to be 0.7 µm and 0.8 µm, respectively, resulting in a domain size of 1.5 µm, while the skeleton size and throughpore size of the poly(SPE-co-MBA) monolithic column were determined to be 1.7 µm and 1.6 µm, respectively, resulting in a domain size of 3.3 µm, reflecting the smaller features of the poly(SPE-co-EDMA) monolithic column. These values are shown in Table 1 , together with the standard deviations (SD) for at least 100 measurements. It should be noted that the SD are generally high, especially for the throughpore sizes (coefficient of variation ~50%), demonstrating the high heterogeneity of the throughpores. Figure 1 c shows a representative SEM picture obtained for the ZIC-pHILIC particles, indicating the spherical shape of the particles. From Figure 1 c it can, however, also be observed that the size of the particles varies quite significantly. The analysis of 200 particles resulted in a mean number-based particle size of 4.7 ± 0.8 µm, in line with the particle size stated by the manufacturer (5 µm). The mean volume-based particle size was 5.3 ± 1.2 µm. Figure 1 d shows an enhanced magnification of the surface of one particle. From this picture, the polymer structure of the particle is clearly visible, revealing an internal monolith-like structure with skeletons and throughpores. Although no information about the mesopore size of these particles could be retrieved from the manufacturer, visual analysis of the SEM pictures suggests a skeleton size of 0.13 ± 0.05 µm and a mesopore size of 0.10 ± 0.04 µm. External porosity values ε for each column were determined in [ ] via ISEC experiments, while total porosities ε were determined from the elution time of toluene using tetrahydrofuran as the mobile phase. The porosity of the porous zone ε was calculated as: $ε p z = ε T − ε e 1 − ε e$ The resulting values are also shown in Table 1 Both the poly(SPE-co-EDMA) and poly(SPE-co-MBA) monolithic columns displayed very similar ε[e]-values of around 69%, in line with typical ε[e]-values observed for monolithic columns. The poly (SPE-co-MBA) monolithic column, however, had a smaller total porosity $ε T$, implying a significantly smaller internal porosity of ε[pz] =12.5% compared to the ε[pz] = 29.5% for the poly(SPE-co-EDMA) For the ZIC-pHILIC column, a total porosity of around 60% was obtained, in line with typical ε -values for particle packed columns. The external porosity as measured via ISEC, however, was rather large (around 44%), especially considering the relatively large particle size distribution that was deduced from the SEM pictures. Whereas most random packings of spherical particles have external porosities of 36–40%, packings consisting of particles with a higher particle size distribution are known to yield smaller ε -values since the smaller particles can position themselves between the larger particles and in this way fill up some of the ‘gaps’ in the interstitial volume. In fact, in [ ] it was demonstrated that dense random packings of particles with a mean sphericity of 0.86 and a dimensionless standard deviation between 0.1 and 0.6, have external porosities ranging between 35% and 30%, respectively. For a dimensionless standard deviation of 0.23, as is the case for the ZIC-pHILIC particles considered here, ε would be 34%. Assuming the ISEC measurements are inaccurate because of the large intra-particle voids visible in Figure 1 d, we therefore considered a value of 34% for the external porosity ε and a value of 40% for ε for the ZIC-pHILIC column in what follows. 2.2. Evaluation of Column Performance 2.2.1. Plate Height Curves The efficiency of the poly(SPE-co-EDMA) and poly(SPE-co-MBA) monolithic columns, and the ZIC-pHILIC column was subsequently evaluated using a series of polar compounds with similar characteristics (nucleobases and nucleotides). To ensure similar retention factors on all columns, the composition of the mobile phase was adapted for each compound and column individually, as shown in Table S1 in the Supporting Information Figure 2 shows representative chromatograms for each column, with zone retention factors varying between k″~2 and k″~10. As can be observed from Figure 2 , good peak shapes were obtained on all columns (with tailing factors <1.3), showing that the performance of the lab-made monolithic columns was not disturbed by the occurrence of, for example, excessive preferential flow paths or undesirable dual retention mechanisms. To evaluate the efficiency of the columns, Figure 3 shows the obtained curves of plate height ( ) versus interstitial velocity ( $u i$ ) for the three columns. Note that the interstitial velocity was preferred over the linear velocity for the construction of the plate height curves, as the former is fundamentally more sound, since band broadening essentially occurs because part of the molecules are stagnant in the mesoporous zone, while others are moving in the mobile phase. For band broadening to occur, it does not matter whether the molecules in the mesoporous zone are residing in the stagnant mobile phase or retained on the stationary phase. Since the expression for the interstitial velocity (see Equation (3)) does not distinguish between these two, it is hence more suitable to describe the evolution of band broadening with velocity. Figure 3 , it is evident that the plate heights obtained on the poly(SPE-co-EDMA) column ( Figure 3 a) are slightly higher than those obtained on the poly(SPE-co-MBA) column ( Figure 3 b), despite the smaller characteristic lengths of the former ( Table 1 ). This could be related to the seemingly more homogeneous structure of the poly(SPE-co-MBA) column ( Figure 1 b). The plate height curves obtained on the particulate ZIC-pHILIC column ( Figure 3 c) clearly display both higher plate heights and a steeper C-term compared to the monolithic columns. The particulate column also displays a certain degree of flattening or curvature of the plate height curve in the high flow velocity range. Given the monolithic-like structure of the particles (cf. Figure 1 d) and the high intra-particle porosity (ε = 40%), this flattening of the plate height curve at high velocities could be due to a convective (perfusion) flow through the particles. To verify this, the ratio of the pore velocity u versus the superficial velocity u , with $u S = u i ε e$ , was calculated using the correlation developed by Afeyan et al. [ $u p o r e u S = K p K τ d p o r e d p 2 1 − ε e ε p z$ With K[p] the particle permeability, K the column permeability, d[p] the particle size and d[pore] the mesopore size, here taken as equal to 0.10 µm, as deduced from the SEM pictures. Note that the permeability K can be estimated as follows: With ε = ε for K and ε = ε for K . The constant τ is related to the tortuosity of the packing, and has a typical value of 2. In this way, it can be calculated that the ratio u is 0.003, or that 0.3% of the average velocity through the column passes through the mesopores of the particles. Although this seemingly only represents a small percentage of the average velocity, it is also important to consider the ‘time of transport’ through the particles via diffusion versus convection. This can be calculated as: $t i m e c o n v e c t i o n = d p u p o r e = 0.67 s$ $t i m e d i f f u s i o n = d 2 p D e f f = d 2 p · τ 2 ε p z · D m = 0.17 s$ Note that u[pore] in Equation (8) corresponds to an interstitial velocity of u[i]= 2.4 mm/s in the packed bed, the maximum u[i] measured during the plate height experiments. This is because the higher the velocity is, the more convection will become predominant. The calculations in Equations (8) and (9) show that the transport through the particles via diffusion is only four times more rapid than via convection at this highest velocity, indicating that transport via convection is significant in the ZIC-pHILIC particles. It is, however, not entirely clear whether the observed amount of convection is large enough to explain the observed flattening at the high-velocity end of the plate height curve. A possible explanation for this lower-than-expected contribution of the intra-particle convection could be due to the fact that a number of parameters in these calculations are based on estimations (ε[e], ε[pz], d[pore]), that could have resulted in a lower accuracy of the obtained results. The experimentally obtained plate height data were subsequently fitted to the following plate height equation [ $H = A u i 1 2 + B u i + C u i$ For this purpose, all three coefficients (A, B and C) were either fitted freely or, alternatively, the B-coefficients were fixed to the values obtained via peak parking in [ ], and the A- and C-coefficients were subsequently fitted. The resulting values are shown in Table 2 . Note that a relatively good agreement is obtained between the B-term values obtained via fitting and peak parking (average deviation of 10%). Table 2 , it is moreover clear that the A-term values obtained for the packed bed column are higher than those obtained for the monolithic columns, which are much more similar to each other. The C-term values, on the other hand, are more in line for all columns, which is somewhat surprising considering the fact that the high velocity range of the plate height curves in Figure 3 is much steeper for the packed bed column, than for the monolithic columns. The larger A-terms seem to suggest a lower degree of homogeneity for the particle packed column. It must, however, be kept in mind that the fitted A-term values of the ZIC-pHILIC column are influenced by the assumed perfusion and its concomitant flattening of the plate height curve in the high-velocity range. This perfusion helps to suppress the C-term band broadening but part of the latter is inevitably wrongfully attributed to the A-term during the fitting, since the A-term incorporates a certain amount of flattening via the $u i 0.5$ -term in Equation (10). This most probably leads to an overestimation of the A-term, and consequently an underestimation of the C-term of the ZIC-pHILIC column, making the fitted A- and C-term values reported in Table 2 unreliable for this column. For the monolithic columns, this is not the case, as the plate height curves are perfectly linear with velocity in the heigh velocity range. Since the only term that is linearly proportional with u in Equation (1) is the C -term (the fourth term), considering the Sh -factor in the third term is velocity-dependent [ ], it can be assumed that the plate heights obtained on the monoliths in the high-velocity range are C -term dominated. 2.2.2. Permeability Measurements During the plate height measurements, column pressures were carefully monitored as a function of the applied flow rate to determine the column permeability values. Figure S1 in the Supporting Information shows the obtained curves of pressure as a function of the linear velocity u for the three evaluated columns. Note that for the construction of these curves, preference was given to the linear velocity u over the interstitial velocity, as the total velocity through the column (including the zero-velocity inside the pores) determines the column permeability under actual separation conditions. The experimental pressure values were fitted to a linear equation, the obtained equations and goodness of fit (represented by R²-values) are also shown in Figure S1 . In general, the pressures measured on all columns displayed a linear behavior with respect to the applied velocity, since all R² > 0.998. Looking closer at the values, it was, however, observed that the two monolithic columns displayed higher R²-values of 0.9997, whereas the R²-value obtained on the particulate column was slightly lower (0.998). When fitting the experimental pressure values obtained below 20 bar to a linear equation, a similar excellent R²-value of 0.9996 was obtained for the particle packed column. The values obtained above 20 bar deviated from this linearity in an upward manner. This could indicate that at pressures above 20 bar, the particles in the ZIC-pHILIC column become somewhat compressed and deformed, resulting in higher-than-expected pressures. The monolithic columns do not display this behavior, suggesting the monolithic structures are more mechanically stable at higher pressures, and hence inherently more suited as a chromatographic backbone in the case of polymeric stationary phases. Note that the maximum column pressure applied to all columns was below 100 bar. The maximum allowable backpressure specified by the manufacturer is 200 bar for the ZIC-pHILIC column. Based on these observations, permeability (K ) values were subsequently calculated via Darcy’s law [ Wherein η is the mobile phase viscosity (Pa·s), L the column length (m) and ΔP is the column pressure (Pa). The resulting K -values are shown as a function of u Figure 4 . Interestingly, the K -values obtained for the two monolithic columns seem to stabilize above u = 0.5 mm/s, whereas the values obtained for the poly(SPE-co-EDMA) column are higher for smaller u -values, and those obtained on the poly(SPE-co-MBA) column are actually lower. It should be mentioned that the lowest K -values were obtained below the recommended operational range of the nanoLC flow selector (recommended range 50 nL–1000 nL/min, values measured here starting at 20 nL/min), which could have resulted in these deviating values. Note also that the standard deviations (denoted by the vertical error bars) obtained for the K -values at these low velocities are clearly larger. However, it is somewhat surprising that both monolithic columns show opposing trends in this low velocity range. For the ZIC-pHILIC column, a generally decreasing trend of K with increasing flow rate was observed. This reflects the higher-than-expected observed pressure values and could hence indicate a compression of the packed bed at higher velocities. 2.2.3. Reduced Plate Height Curves Since band broadening is not only influenced by differences in uniformity of the evaluated structure (particle packed or monolithic), but also by differences in characteristic length and diffusion properties, reduced plate height curves of h versus ν were subsequently constructed to further investigate the band broadening behavior observed in the different columns. Note that for the calculation of the reduced plate height h and the reduced interstitial velocity ν a characteristic length d needs to be specified: As mentioned in the introduction, this characteristic length is typically taken equal to the particle size for particle packed columns, while for monolithic columns, the domain size (d ) can for example be used. Figure S2 in the Supporting Information shows the reduced plate height curves that were obtained using the particle size for the particle packed column, and the domain size for the monolithic columns, as specified in Table 1 Note that the curves in Figure S2 have all been constructed using the same scale on the x- and y-axis. This representation now shows a completely different picture compared to the curves shown in Figure 3 . The reduced plate heights obtained on the ZIC-pHILIC column (h = 6–9) are now much closer to those obtained on the poly(SPE-co-MBA) column (h = 4–6), while those obtained on the poly(SPE-co-EDMA) column (h = 9–18) are much higher. Note also the much steeper slope of the plate height curve in the high velocity range of the latter. This is entirely due to the much smaller characteristic lengths obtained for the poly(SPE-co-EDMA) column ( Table 1 ), impacting the calculation of both the reduced plate height h and the reduced velocity ν , as shown in Equations (12) and (13). However, as was already mentioned earlier, the standard deviations observed for these characteristic lengths were quite large, raising suspicions about the validity of using the domain size as the characteristic length for the monolithic columns. As an alternative, the permeability-based characteristic length proposed by Halasz (d ) was therefore investigated next [ $d H a l a s z = 10 3 K v 0$ Using the square root of the permeability of a column, the characteristic length is defined in terms of the “price” (pressure drop) that has to be paid for this characteristic length. To calculate d for the different columns, the permeability values K obtained at the highest measured pressure were used. This resulted in values of d = 5.2, 5.7 and 5.5 µm for the poly(SPE-co-EDMA), the poly(SPE-co-MBA) column and the ZIC-pHILIC column, respectively ( Table 1 ). Note that these values are in very close agreement with each other despite the completely different structure of the packing. Figure 5 shows the reduced plate height curves obtained using d as the characteristic length. Unsurprisingly, given the close proximity of the d -values, the curves show similar trends as observed for their non-reduced counterparts in Figure 3 . Minimum reduced plate heights observed for the monolithic columns are h = 3–5 for the poly(SPE-co-EDMA) column, and h = 2–4 for the poly(SPE-co-MBA) column, and hence slightly lower for the latter, in line with the more homogeneous structure of the poly(SPE-co-MBA) column ( Figure 1 b). Minimum plate heights for the ZIC-pHILIC column are h = 5–7, and hence larger than for the monolithic column, while also the c-term is steeper and shows the same curvature/flattening as in Figure 3 c. Despite the fact that these curves seem to present a more realistic view on the performance of the columns, and are more in line with one another, it should be mentioned that the square root of K in fact has no structural meaning, and can hence not be linked to the morphology, disorder or heterogeneity of the columns. Another interesting observation that can be made from the curves shown in Figure 5 , is the seemingly random order of the plate height curves in the high velocity range. In fact, similar random variations of the curve order can also be observed for the plate height curves in Figure 3 Figure S2 , with no obvious link between the steepness of the curves and the corresponding k″–values. Given the high dependency of the plate heights on the c -term (at least for the monolithic columns), which decreases with increasing k″, as can be deduced from Equation (1), this is somewhat surprising as a systematic decrease in the c-term region with k″ would be expected. However, surprisingly, there does seem to be a correlation between the molecular weight (MW) of the compounds ( Table S1 in the Supporting Information ) and the order of the plate height curves, where the highest MW compounds generally have the steepest c-terms, and the low MW compounds the flattest c-terms. One exception to this behavior is the compound with a k″ = 9.69 on the poly(SPE-co-MBA) column. To explore these observations further, and assuming the high velocity part of the plate height curve is c -term dominated, at least for the monolithic columns, Equation (1) demonstrates the importance of intra-particle diffusion (D ) in the c -term. D can be written as [ $D p z = k 0 ″ γ m p D m + k ″ − k 0 ″ γ s D s k ″$ In this equation, γ represents the stationary phase diffusion, γ the mesopore diffusion, and k ″ the zone retention factor k″ for k′= 0. Note that the phase retention factor k’ and the zone retention factor k″ are related as follows: $k ″ = 1 + k ′ ε T ε e − 1$ Substituting k’ = 0 in Equation (16) yields: From Equation (17) it can be understood that k[0]″ is a structural feature of the column packing, depending only on the interstitial porosity ε[e] and the total porosity ε[T]. In [ ], it was demonstrated that D and hence D are very low for the columns evaluated in this work, and this was attributed to a very low amount of surface diffusion, a very strong and localized adsorption mechanism and/or strongly hindered diffusion in the polymer matrix. Under these circumstances, it can be assumed that the contribution of the stationary phase diffusion to the overall intra-particle diffusion is very low, or in other Substituting Equation (18) in Equation (15), yields: $D p z = k 0 ″ γ m p D m k ″$ From Equations (1) and (19) it then follows that: $H C s = C s · u i = 2 α k ″ 1 + k ″ 2 d 2 u i S h p z D p z = 1 5 · α k ″ 2 1 + k ″ 2 d 2 u i k 0 ″ γ m p D m$ With α being a geometrical constant (6 for a packed bed column and 4 for a TSM) and Sh = 10 [ Rearranging this leads to an expression for γ $γ m p = 1 C s d 2 5 · α k ″ 2 1 + k ″ 2 1 ε T ε e − 1 1 D m$ From Equation (21), it can be deduced that γ essentially depends on structural column characteristics (d, ε and ε ), C , D and k″. Even though the actual value of d is not entirely clear for the monolithic columns, this value is constant per column. In other words, when evaluating a single column, the exact value of d does not matter that much, as long as the same value is consistently used for that column. Therefore, d was taken to calculate γ for the monolithic columns. C -values were taken as equal to the C-term values obtained by fitting the plate height curves to Equation (10), as shown in Table 2 , considering the C-term region was C Figure 6 shows the obtained calculated values of γ plotted as a function of the MW of the compounds for the two monolithic columns. Interestingly, a rough trend can be observed where γ seems to decrease as the MW increases, with the exception of the compound with a MW = 268 g/mol (k″ = 9.69) on the poly(SPE-co-MBA) column. Although this is highly speculative, and more data are required to confirm this trend, these observations seem to suggest that compounds experience more obstruction against free movement in the mesoporous space of the monolithic polymer matrix, as their MW increases. Since it was impossible to obtain the C -term for the ZIC-pHILIC column, as its fitted C-term was also impacted by the observed flattening of the curve, the same calculations were not made for the ZIC-pHILIC column. 2.2.4. Kinetic Plot Analysis As amply demonstrated in the previous sections, comparing the performance of structurally diverse columns, such as the particle packed column and monoliths investigated in this work, presents a number of difficulties when a suitable characteristic length needs to be selected. For particle packed columns, a good choice seems to be the particle size specified by the manufacturer, although a detailed assessment of the true particle size typically reveals deviating values that can show a certain degree of polydispersity. For monolithic columns, this is even more difficult, as multiple measures can be used, such as the throughpore size, the skeleton size, or the domain size. However, varying degrees of homogeneity throughout the monolithic structure can result in large variations in the obtained values, making it difficult to determine a single representative value. Additionally, it has also been demonstrated that comparing fitted A-, B- and C-term values, as is often done in the literature [ ], can be severely complicated when unexpected phenomena occur, such as perfusion flow through the particles, as this can lead to an incorrect interpretation of the obtained A-, B- and C-term values. To remove any uncertainty about the characteristic length when comparing the kinetic performance of structurally diverse columns, and to avoid any complications when interpreting obtained fitting values, an elegant solution to compare column performance is the kinetic plot (KP) method [ ]. Kinetic plots express and compare the performance of columns as the time required to obtain a certain plate count, and are obtained by combining all relevant information of a column in the following equations: $t 0 = Δ P m a x η K v 0 u 0 2$ $N = Δ P m a x η K v 0 u 0 H$ With ΔP[max] the maximum pressure the column can withstand and η the mobile phase viscosity. Note that using Equations (22) and (23), every experimentally obtained datapoint of u[0] versus H is converted into a measure of the time that is required to obtain a certain plate count when operating the column at the maximum pressure, and at the corresponding velocity u[0]. In this way, each datapoint of t[0] versus N is in fact obtained in a different column length, where low values of u[0] will typically be obtained in long column lengths and hence result in high N-values, while high u[0]-values will typically be obtained in short column lengths and hence result in lower Kinetic plots for the columns evaluated in this work were constructed for the compound with k″~2 and for a maximum pressure ΔP = 200 bar and are shown in Figure 7 . The compound with k″~2 was the same for all columns (uracil) and was purposely chosen to avoid any bias on the column performance that might be due to MW effects. The plots in Figure 7 show that the two monolithic columns outperform the ZIC-pHILIC column over the entire relevant range of plate counts between 10 and 8 × 10 plates. This is entirely attributed to the higher efficiency of the monolithic columns, as the permeability values of the monoliths and the particle-packed column are relatively similar. This higher efficiency of the monoliths is somewhat surprising, given their higher degree of heterogeneity, but seems to be due to the smaller domain and skeleton sizes, that keep their plate heights within check. The relatively high permeability of the monoliths, similar to that of a 5 µm particle-packed column, can in turn be explained by their much higher external porosity and the apparent occurrence of preferential flow paths within the monolithic structure. The two monolithic columns, moreover, perform similarly well in the range of roughly 5000–120,000 plates. Only for higher plate counts, does the poly(SPE-co-MBA) column outperform the poly (SPE-co-EDMA) column, due to the slightly higher K -values and slightly better efficiency of the former. The ZIC-pHILIC becomes more performant than the two monolithic columns for very high plate counts (≥10 plates) only. This is due to the slightly lower B-term values that were obtained for the ZIC-pHILIC column, as can be deduced from Table 2 . As the high N-range of the kinetic plot is typically obtained at very low velocities, where the B-term is dominant, this lower B-term results in the better performance of the ZIC-pHILIC column in this region. 3. Materials and Methods 3.1. Reagents and Materials Adenosine and uracil were obtained from Janssen Chimica (Geel, Belgium). Uridine, thiourea, hypoxanthine and inosine were purchased from Sigma-Aldrich (Steinheim, Germany). Toluene was obtained from Acros Organics (Geel, Belgium). Glacial acetic acid (99.9% purity) was obtained from Merck (Darmstadt, Germany), ammonium acetate from Sigma-Aldrich. Acetonitrile (ACN) and methanol (MeOH), both HPLC grade, were from Fisher Chemicals (Erembodegem, Belgium). Milli-Q water was prepared in the lab using a Milli-Q system (Millipore, Bedford, MA, USA). The following columns were evaluated in this work: a ZIC-pHILIC particle packed column from Merck (2.1 mm I.D. × 150 mm, 5 μm) and two in-house made capillary monolithic columns: a poly(SPE-co-MBA) (0.1 mm I.D. × 226 mm) and a poly(SPE-co-EDMA) column (0.1 mm I.D. × 234 mm). More details on their preparation mode can be found in [ ]. Briefly, designated amounts of the hydrophilic monomer (SPE), crosslinker (N,N′-methylenebisacrylamide (MBA) or ethylene dimethacrylate (EDMA)), initiator (AIBN) and porogens (methanol for MBA, water and propanol for EDMA) were accurately weighed and mixed into a 1.5-mL vial. After ultrasonication and degassing for 10 min, the polymerization mixtures were introduced into pre-treated capillaries. Both ends of the capillaries were sealed with GC septa. The capillaries were then submerged into a water bath at 60 °C for 12 h. Finally, the resulting monolithic columns were flushed with methanol overnight in order to remove any residual reagents inside the capillaries. All columns possessed sulfoalkylbetaine zwitterionic functional groups, covalently bonded to the porous polymer beads in the case of the ZIC-pHILIC column. 3.2. Instrumentation Measurements on the ZIC-pHILIC column were done on an Agilent 1290 UHPLC system (Agilent Technologies, Santa Clara, CA, USA) consisting of a quaternary pump, an autosampler, and a diode array detector with a flow cell of 1 μL. Measurements on the capillary monolithic columns were executed on an Ultimate 3000 RSLC nano system (Dionex, Amsterdam, the Netherlands), with a Binary Rapid Separation Nano Flow pump with nano flow selector, an autosampler, a four-port injection valve with a 20 nL internal loop (VICI, Houston, TX, USA) and a variable wavelength detector (VWD) with a 3 nL flow cell. Experiments were executed at room temperature (21.5 ± 0.5 °C), using an injection volume of 20 nL for the monolithic columns, and 1 μL for the ZIC-pHILIC column. The detection wavelength was set to 254 nm, and the data acquisition rate was 40 Hz for all experiments. Data acquisition and processing were done with Chromeleon software (version 6.8, Dionex) or OpenLab Chemstation software (edition C.01.07, Agilent Technologies). pH values were measured using a Metrohm 691 pH meter (Antwerp, Belgium). Scanning electron microscopy (SEM) experiments were performed using a TESCAN MIRA4 system (Brno, Czech Republic), using an energy between 5 and 15 keV. Magnifications were between 700× and 50.000×. 3.3. Samples and Mobile Phases Stock solutions of uracil, thiourea, hypoxanthine, adenosine, uridine and inosine were prepared in a concentration of 2000 ppm in ACN:H O (50:50, ). These stock solutions were subsequently diluted in pure ACN to a final concentration of 50 ppm for each compound. Mobile phases were prepared by mixing ACN in different ratios with an ammonium acetate solution (adjusted to pH = 6.0 with glacial acetic acid) as shown in Table S1 in the Supporting Information ]. The ammonium acetate concentration in Table S1 represents the total concentration in the mobile phase. Molecular diffusion coefficients (D ) for each compound in their respective mobile phases were determined via Taylor-Aris experiments, as detailed in [ ]. The obtained D -values are also given in Table S1 3.4. Plate Height Measurements Plate heights were measured over a broad range of velocities using the mobile phases shown in Table S1 . The composition of the mobile phase was adjusted for each column and compound individually, to obtain similar zone retention factors (k″) at the optimum velocity on each column, ranging between k″ = 2 and k″ = 10. All plate heights were measured in triplicate for at least 16 different velocities on each column. Analyte retention times t were obtained from the first moments of the peaks, while peak widths were determined at 4.4% of the peak height. All measured data were corrected for the extra-column contribution (ECC). For the Agilent 1290 UHPLC system, the ECC was experimentally determined using a zero-dead volume union instead of the column. For the Dionex Ultimate 3000 RSLC nano system, the ECC was calculated from the geometrical volume of the tubing, the injection volume and the volume of the flow cell. 4. Conclusions A detailed kinetic performance analysis of different zwitterionic hydrophilic interaction liquid chromatography polymer columns was performed, wherein two different monolithic materials were compared with a particulate material. These materials were already studied in previous work, where it was demonstrated that they all display very low diffusion coefficients in the mesoporous zone. To further unravel the consequences of this low mesoporous zone diffusion, several typical column performance analysis approaches were considered. This consisted of first assessing the structure and porosity of the different packings via SEM and ISEC. The obtained SEM pictures revealed a relatively large heterogeneity for the monolithic columns, especially for the throughpores. The particle packed column on the other hand, consisted of spherical particles with a rather large particle size distribution. The ISEC measurements resulted in external porosity values that were in line with those expected for monolithic columns, but were relatively high for the particle packed column, especially considering their large particle size distribution. Therefore, a value of ε[e] = 0.34 was considered for further calculations, based on literature data for packings with similar particle size distributions. Next, plate heights were measured over a broad range of velocities on all columns for compounds with similar retention factors. The obtained plate heights were fitted to a plate height equation, and the obtained fitting parameters (A, B and C-terms) compared. The obtained high A-term values for the ZIC-pHILIC column suggested a high heterogeneity of the packed particle column, while the C-term values were rather similar for all columns. Further inspection of the plate height curves of the ZIC-pHILIC column, however, revealed an amount of perfusion through the particles that impacted the fitted A- and C-term values and made them useless for further comparison. In an attempt to compare the performance of the different column formats via reduced plate height curves, the requirement for a single, representative characteristic length for each packing was complicated by their largely differing structure and the heterogeneity observed for their structural elements. Interestingly, the (reduced) plate height curves revealed a rather random pattern in the high-velocity region of the curves, where no clear correlation was observed with k″, suggesting other parameters in the general plate height model could be responsible for this observation. Further investigation of the plate height curves of the monolithic columns, where the high-velocity zone of the plate height curve was considered c[s]-term dominated, revealed a correlation between the obstruction factor in the mesoporous zone and the molecular weight of the compounds. Although speculative at this instance, this suggests that a compound experiences more obstruction against free movement in the mesopores of these polymer monoliths as its MW increases. To overcome the problems encountered with the traditional column performance comparison methods (reduced plate height plots; A-, B-, C-constant analysis), kinetic plots were constructed to compare the different column formats in a geometry-independent way. This was done for a compound (uracil) that had a similar k″ on all materials. These revealed that the monoliths—despite their observed heterogeneity—performed better than the ZIC-pHILIC column evaluated in this study, over the entire relevant range of plate counts. This was attributed on the one hand to their smaller skeleton and domain sizes that led to higher efficiencies, and on the other hand, to their high permeability, resulting from to their high external porosity and the occurrence of occasional preferential flow paths due to the presence of some very large macropores. This study clearly demonstrates the utility of the kinetic plot method to compare the kinetic performance of different column formats, that can be difficult to characterize in terms of geometric dimensions. As a follow-up, analysts could also compare column supports in terms of other parameters, such as the ‘greenness’ of their corresponding analytical procedures, using for example the Analytical GREEnness Metric Approach [ Supplementary Materials The following supporting information can be downloaded at: , Table S1: Column dimensions and chromatographic conditions used to evaluate the performance of the columns; Figure S1: Column pressure as a function of the linear velocity for the three columns evaluated in this work; Figure S2: Reduced plate height curves of h versus ν obtained by using the domain size (d ) as the characteristic length for the monolithic columns, and the particle size (d ) for the particle packed column. Author Contributions H.L.: Data curation, Methodology, Investigation, Writing—review and editing, Z.J.: Resources, Project administration, Funding acquisition, Supervision, Writing—review and editing, G.D.: Conceptualization, Data curation, Investigation, Methodology, Writing—original draft, D.C.: Conceptualization, Data curation, Investigation, Methodology, Funding acquisition, Supervision, Writing—original draft. All authors have read and agreed to the published version of the manuscript. This research was funded by the National Natural Science Foundation of China (Grant numbers: 81872830,82073806), Natural Science Foundation of Guangdong Province, China (Grant number: Institutional Review Board Statement Not applicable. Informed Consent Statement Not applicable. Data Availability Statement Data available upon written request to the corresponding author. Haibin Li gratefully acknowledges the Lin Jian Biomedicine Development Foundation and the financial support from Jinan University. Conflicts of Interest The authors declare no conflict of interest. Figure 1. Scanning electron microscopy (SEM) pictures of the evaluated column materials. (a) poly(SPE-co-EDMA) monolithic stationary phase, (b) poly(SPE-co-MBA) monolithic stationary phase, (c) ZIC-pHILIC particles and (d) close-up of particle surface. Red lines in (a,b) indicate the size of d[glob] and yellow lines the size of d[tp]. Red lines in (d) indicate the ‘skeleton’ sizes of the polymer structure, yellow lines indicate the mesopore sizes. Figure 2. Chromatograms obtained on the ( ) poly(SPE-co-EDMA) monolithic column, flow rate = 0.0006 mL/min; ( ) poly(SPE-co-MBA) monolithic column, flow rate = 0.0006 mL/min; ( ) ZIC-pHILIC column, flow rate = 0.1 mL/min. Mobile phase compositions are shown in Table S1 ; column temperature: room temperature; peak annotation: (1) uracil, (2) adenosine, (3) thiourea, (4) uridine, (5) inosine, (6) hypoxanthine. The t -marker was toluene. Figure 3. Plate height curves of H versus u obtained on the ( ) poly(SPE-co-EDMA) monolithic column, ( ) poly(SPE-co-MBA) monolithic column and ( ) ZIC-pHILIC column. k″ = 1.9–2.2 (), k″ = 2.9–3.7 (), k″ = 4.7–6.4 (), k″ = 6.4–7.3 (), k″ = 9.7–10.5 (). Mobile-phase conditions are given in Table S1 Figure 4. Curves of permeability (K[v0]) as a function of the linear velocity (u[0]) for the columns evaluated in this work: () poly(SPE-co-EDMA) monolithic column, () poly(SPE-co-MBA) monolithic column, () ZIC-pHILIC column. The error bars are the standard deviations obtained from three replicate measurements. Figure 5. Reduced plate height curves of h versus ν obtained by using d (Equation (14)) as the characteristic length for the ( ) poly(SPE-co-EDMA) monolithic stationary phase, ( ) poly(SPE-co-MBA) monolithic stationary phase and ( ) ZIC-pHILIC column. k″= 1.9–2.2 (), k″= 2.9–3.7 (), k″= 4.7–6.4 (), k″= 6.4–7.3 (), k″= 9.7–10.5 (). Mobile-phase conditions as in Table S1 Figure 6. Plots of γ[mp] versus MW for the (a) poly(SPE-co-EDMA) monolithic column and (b) poly(SPE-co-MBA) monolithic column. Figure 7. Kinetic plots of time (t[0]) versus plate count (N) for the materials evaluated in this work: () poly(SPE-co-EDMA) monolithic column, () poly(SPE-co-MBA) monolithic column, () ZIC-pHILIC column. Component = uracil, having k″ ≅ 2 on all three materials. Table 1. Structural characteristics (d[glob], d[tp], d[dom] and d[Halasz]) and column porosities of the columns evaluated in this work. The reported sizes show the average values of at least 100 independent measurements and their standard deviations. Column $d s k e l$ (µm) $d t p$ (µm) $d p$ or $d d o m$ (µm) d[Halasz] (µm) $ε e$ $ε T$ $ε i$ poly(SPE-co-EDMA) 0.7 ± 0.1 0.8 ± 0.4 1.5 ± 0.4 5.2 0.6923 0.7830 0.2948 poly(SPE-co-MBA) 1.7 ± 0.3 1.6 ± 0.7 3.3 ± 0.8 5.7 0.6995 0.7370 0.1248 ZIC-pHILIC / / 4.7 ± 1.1 5.5 0.4398 0.6040 0.2931 Table 2. A-, B- and C-term values obtained by fitting the experimental plate height data to Equation (10), either fitting all terms freely (BFIT) or fixing the B-term value to the value obtained by peak parking in [ ], and subsequently fitting A and B (BPP). Column Compound k″ H[min] (mm) Equation (10) B[FIT] Equation (10) B[PP] A (mm^1/2/s^1/2) B (mm²/s) C (s) A (mm^1/2/s^1/2) B (mm²/s) C (s) Uracil 1.92 2.17 × 10^−2 1.38 × 10^−2 3.35 × 10^−3 1.22 × 10^−2 1.25 × 10^−2 3.48 × 10^−3 1.30 × 10^−2 Uracil 3.08 2.76 × 10^−2 2.29 × 10^−2 3.80 × 10^−3 1.31 × 10^−2 2.89 × 10^−2 3.27 × 10^−3 8.74 × 10^−3 SPE-co-EDMA Thiourea 4.96 1.37 × 10^−2 6.90 × 10^−3 3.83 × 10^−3 3.71 × 10^−3 7.59 × 10^−3 3.70 × 10^−3 3.21 × 10^−3 Thiourea 7.00 1.53 × 10^−2 8.43 × 10^−3 4.11 × 10^−3 4.11 × 10^−3 9.53 × 10^−3 3.91 × 10^−3 3.42 × 10^−3 Hypoxanthine 10.29 2.47 × 10^−2 1.46 × 10^−2 3.53 × 10^−3 2.18 × 10^−2 1.39 × 10^−2 3.64 × 10^−3 2.23 × 10^−2 Uracil 1.86 1.70 × 10^−2 1.78 × 10^−2 2.08 × 10^−3 1.38 × 10^−3 1.56 × 10^−2 2.31 × 10^−3 2.80 × 10^−3 Adenosine 2.93 2.12 × 10^−2 2.22 × 10^−2 1.53 × 10^−3 1.96 × 10^−2 1.96 × 10^−2 1.80 × 10^−3 2.11 × 10^−2 SPE-co-MBA Thiourea 4.73 1.22 × 10^−2 8.80 × 10^−3 2.66 × 10^−3 1.63 × 10^−3 6.25 × 10^−3 3.05 × 10^−3 3.20 × 10^−3 Uridine 6.37 1.48 × 10^−2 1.25 × 10^−2 1.47 × 10^−3 1.21 × 10^−2 1.21 × 10^−2 1.72 × 10^−3 1.20 × 10^−2 Inosine 9.69 1.36 × 10^−2 1.42 × 10^−2 1.30 × 10^−3 5.98 × 10^−3 1.41 × 10^−2 1.25 × 10^−3 6.04 × 10^−3 Uracil 2.18 2.78 × 10^−2 3.84 × 10^−2 2.07 × 10^−3 0.00 3.92 × 10^−2 1.84 × 10^−3 0.00 Adenosine 3.70 4.11 × 10^−2 8.70 × 10^−2 1.37 × 10^−3 9.12 × 10^−3 8.31 × 10^−2 1.50 × 10^−3 1.26 × 10^−2 ZIC-pHILIC Uridine 6.39 2.87 × 10^−2 4.98 × 10^−2 1.29 × 10^−3 5.71 × 10^−3 4.42 × 10^−2 1.61 × 10^−3 1.05 × 10^−2 Uridine 7.31 3.26 × 10^−2 5.45 × 10^−2 1.50 × 10^−3 9.31 × 10^−3 5.84 × 10^−2 1.25 × 10^−3 5.97 × 10^−3 Uridine 10.54 3.14 × 10^−2 4.71 × 10^−2 1.62 × 10^−3 1.31 × 10^−2 5.33 × 10^−2 1.33 × 10^−3 7.41 × 10^−3 Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https: Share and Cite MDPI and ACS Style Li, H.; Jiang, Z.; Desmet, G.; Cabooter, D. In-Depth Performance Analysis and Comparison of Monolithic and Particulate Zwitterionic Hydrophilic Interaction Liquid Chromatography Polymer Columns. Molecules 2023, 28, 2902. https://doi.org/10.3390/molecules28072902 AMA Style Li H, Jiang Z, Desmet G, Cabooter D. In-Depth Performance Analysis and Comparison of Monolithic and Particulate Zwitterionic Hydrophilic Interaction Liquid Chromatography Polymer Columns. Molecules. 2023; 28(7):2902. https://doi.org/10.3390/molecules28072902 Chicago/Turabian Style Li, Haibin, Zhengjin Jiang, Gert Desmet, and Deirdre Cabooter. 2023. "In-Depth Performance Analysis and Comparison of Monolithic and Particulate Zwitterionic Hydrophilic Interaction Liquid Chromatography Polymer Columns" Molecules 28, no. 7: 2902. https://doi.org/10.3390/molecules28072902 Article Metrics
{"url":"https://www.mdpi.com/1420-3049/28/7/2902","timestamp":"2024-11-09T20:33:33Z","content_type":"text/html","content_length":"506050","record_id":"<urn:uuid:a8dce7f0-52d3-41fb-b99b-908a107e6941>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00712.warc.gz"}
Rules for identity Chapter 39 Rules for identity In chapter 30, we mentioned the philosophically contentious thesis of the identity of indiscernibles. This is the claim that objects which are indiscernible in every way are, in fact, identical to each other. It was also mentioned that we will not subscribe to this thesis. It follows that, no matter how much you learn about two objects, we cannot prove that they are identical. That is unless, of course, you learn that the two objects are, in fact, identical, but then the proof will hardly be very illuminating. The general point, though, is that no sentences which do not already contain the identity predicate could justify an inference to ‘$a=b$’. So our identity introduction rule cannot allow us to infer to an identity claim containing two different names. However, every object is identical to itself. No premises, then, are required in order to conclude that something is identical to itself. So this will be the identity introduction rule: 0 $\mathscr{c}=\mathscr{c}$ =I Notice that this rule does not require referring to any prior lines of the proof. For any name $\mathscr{c}$, you can write $\mathscr{c}=\mathscr{c}$ on any line, with only the =I rule as Our elimination rule is more fun. If you have established ‘$a=b$’, then anything that is true of the object named by ‘$a$’ must also be true of the object named by ‘$b$’. For any sentence with ‘$a$’ in it, you can replace some or all of the occurrences of ‘$a$’ with ‘$b$’ and produce an equivalent sentence. For example, from ‘$R(a,a)$’ and ‘$a=b$’, you are justified in inferring ‘$R(a,b)$’, ‘$R (b,a)$’ or ‘$R(b,b)$’. More generally: $m$ 0 $\mathscr{a}=\mathscr{b}$ $n$ 0 $\mathscr{A}(\ldots\mathscr{a}\ldots\mathscr{a}\ldots)$ 0 $\mathscr{A}(\ldots\mathscr{b}\ldots\mathscr{a}\ldots)$ =E $m$, $n$ The notation here is as for $\exists$I. So $\mathscr{A}(\ldots\mathscr{a}\ldots\mathscr{a}\ldots)$ is a formula containing the name $\mathscr{a}$, and $\mathscr{A}(\ldots\mathscr{b}\ldots\mathscr{a}\ ldots)$ is a formula obtained by replacing one or more instances of the name $\mathscr{a}$ with the name $\mathscr{b}$. Lines $m$ and $n$ can occur in either order, and do not need to be adjacent, but we always cite the statement of identity first. Symmetrically, we allow: $m$ 0 $\mathscr{a}=\mathscr{b}$ $n$ 0 $\mathscr{A}(\ldots\mathscr{b}\ldots\mathscr{b}\ldots)$ 0 $\mathscr{A}(\ldots\mathscr{a}\ldots\mathscr{b}\ldots)$ =E $m$, $n$ This rule is sometimes called Leibniz’s Law, after Gottfried Leibniz. To see the rules in action, we will prove some quick results. First, we will prove that identity is symmetric: $1$ open subproof, 1 $a=b$ AS $2$ 1 $a=a$ =I $3$ 1 $b=a$ =E $1$, $2$ $4$ close subproof, 0 $a=b\rightarrow b=a$ $\rightarrow$I $1$–$3$ $5$ 0 $\forall y(a=y\rightarrow y=a)$ $\forall$I $4$ $6$ 0 $\forall x\,\forall y(x=y\rightarrow y=x)$ $\forall$I $5$ We obtain line $3$ by replacing one instance of ‘$a$’ in line $2$ with an instance of ‘$b$’; this is justified given ‘$a=b$’. Second, we will prove that identity is transitive: $1$ open subproof, 1 $a=b\wedge b=c$ AS $2$ 1 $a=b$ $\wedge$E $1$ $3$ 1 $b=c$ $\wedge$E $1$ $4$ 1 $a=c$ =E $2$, $3$ $5$ close subproof, 0 $(a=b\wedge b=c)\rightarrow a=c$ $\rightarrow$I $1$–$4$ $6$ 0 $\forall z((a=b\wedge b=z)\rightarrow a=z)$ $\forall$I $5$ $7$ 0 $\forall y\,\forall z((a=y\wedge y=z)\rightarrow a=z)$ $\forall$I $6$ $8$ 0 $\forall x\,\forall y\forall z((x=y\wedge y=z)\rightarrow x=z)$ $\forall$I $7$ We obtain line $4$ by replacing ‘$b$’ in line $3$ with ‘$a$’; this is justified given ‘$a=b$’. Practice exercises A. For each of the following claims, provide an FOL proof that shows it is true. 1. 1. $P(a)\vee Q(b),Q(b)\rightarrow b=c,eg P(a)\vdash Q(c)$ 2. 2. $m=n\vee n=o,A(n)\vdash A(m)\vee A(o)$ 3. 3. $\forall x\ x=m,R(m,a)\vdash\exists x\,R(x,x)$ 4. 4. $\forall x\,\forall y(R(x,y)\rightarrow x=y)\vdash R(a,b)\rightarrow R(b,a)$ 5. 5. $eg\exists xeg x=m\vdash\forall x\,\forall y(P(x)\rightarrow P(y))$ 6. 6. $\exists x\,J(x),\exists x\,eg J(x)\vdash\exists x\,\exists y\,eg x=y$ 7. 7. $\forall x(x=n\leftrightarrow M(x)),\forall x(O(x)\veeeg M(x))\vdash O(n)$ 8. 8. $\exists x\,D(x),\forall x(x=p\leftrightarrow D(x))\vdash D(p)$ 9. 9. $\exists x\bigl{[}(K(x)\wedge\forall y(K(y)\rightarrow x=y))\wedge B(x)\bigr{]}% ,Kd\vdash B(d)$ 10. 10. $\vdash P(a)\rightarrow\forall x(P(x)\veeeg x=a)$ B. Show that the following are provably equivalent: • ‣ $\exists x\bigl{(}[F(x)\wedge\forall y(F(y)\rightarrow x=y)]\wedge x=n\bigr{)}$ • ‣ $F(n)\wedge\forall y(F(y)\rightarrow n=y)$ And hence that both have a decent claim to symbolize the English sentence ‘Nick is the $F$’. C. In chapter 26, we claimed that the following are logically equivalent symbolizations of the English sentence ‘there is exactly one $F$’: • ‣ $\exists x\,F(x)\wedge\forall x\,\forall y\bigl{[}(F(x)\wedge F(y))\rightarrow x% =y\bigr{]}$ • ‣ $\exists x\bigl{[}F(x)\wedge\forall y(F(y)\rightarrow x=y)\bigr{]}$ • ‣ $\exists x\,\forall y(F(y)\leftrightarrow x=y)$ Show that they are all provably equivalent. (Hint: to show that three claims are provably equivalent, it suffices to show that the first proves the second, the second proves the third and the third proves the first; think about why.) D. Symbolize the following argument There is exactly one $F$. There is exactly one $G$. Nothing is both $F$ and $G$. • ∴ There are exactly two things that are either $F$ or $G$. And offer a proof of it.
{"url":"https://forallx.openlogicproject.org/html/Ch39.html","timestamp":"2024-11-02T09:32:10Z","content_type":"text/html","content_length":"74700","record_id":"<urn:uuid:2601d0b6-847b-4e3f-bd73-34d9e2982e5e>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00320.warc.gz"}
Integration - Algebraic Substitution, 3 Category: Integral Calculus, Algebra "Published in Suisun City, California, USA" Evaluate Solution: Consider the given equation above In this type of equation, we cannot integrate it by a simple integration because both the numerator and the denominator have radical equations. If you will rationalize the denominator in order to eliminate the radical sign, then the numerator still has radical equations and the above equation will be more complicated. To eliminate the radical signs at the numerator and the denominator, we have to use the Algebraic Substitution as follows If then Substitute the values of √x , √x + 1, and dx to the above equation, we have but Hence, the above equation becomes Therefore, the final answer is
{"url":"https://www.math-principles.com/2013/07/integration-algebraic-substitution-3.html","timestamp":"2024-11-08T11:14:12Z","content_type":"application/xhtml+xml","content_length":"104626","record_id":"<urn:uuid:91772959-8d2d-4572-902f-fbd8a46d5753>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00474.warc.gz"}
drag coefficient of cylinder April 23, 2017, #13 Member Flow past cylinder is a challenging case for CFD. You should understand the physics if you want to simulate it reasonably. For instance, at Re=1000, the attached boundary layer on the cylinder is laminar; the boundary layer separates at about 82deg and the wake is turbulent. However, you cannot simply employ a turbulence model like k-epsilon to compute the flow; using a yehanyu turbulence model makes the boundary layer separation point moves to the rear side (>90deg), and hence the wake is much narrower than reality, and therefore the drag is much smaller than reality. In reality, the separation point does not move to the rear side until much higher Reynolds number (about 2.5E5; "the drag crisis":at this Reynolds number, boundary layer Join transitions occur and the boundary layer becomes turbulent). Date: Mar Posts: 48
{"url":"https://www.cfd-online.com/Forums/fluent/33290-drag-coefficient-cylinder.html","timestamp":"2024-11-14T22:15:57Z","content_type":"application/xhtml+xml","content_length":"110647","record_id":"<urn:uuid:c8f67f0a-3262-41df-b48f-b5a50f89193a>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00534.warc.gz"}
RPointCloud: Visualizing Topological Loops and Voids Visualizations to explain the results of a topological data analysis. The goal of topological data analysis is to identify persistent topological structures, such as loops (topological circles) and voids (topological spheres), in data sets. The output of an analysis using the 'TDA' package is a Rips diagram (named after the mathematician Eliyahu Rips). The goal of 'RPointCloud' is to fill in these holes in the data by providing tools to visualize the features that help explain the structures found in the Rips diagram. See McGee and colleagues (2024) <doi:10.1101/2024.05.16.593927>. Version: 0.8.0 Depends: R (≥ 3.5) Imports: methods, graphics, stats, TDA, ClassDiscovery, ClassComparison (≥ 3.3), Mercator, rgl, splines, circlize Suggests: knitr, rmarkdown, Polychrome, igraph, ape, PCDimension Published: 2024-08-19 DOI: 10.32614/CRAN.package.RPointCloud Author: Kevin R. Coombes [aut, cre], Jake Reed [aut], RB McGee [aut] Maintainer: Kevin R. Coombes <krc at silicovore.com> License: Artistic-2.0 URL: http://oompa.r-forge.r-project.org/ NeedsCompilation: no CRAN checks: RPointCloud results Reference manual: RPointCloud.pdf RPointCloud: CLL Clinical Data (source, R code) Vignettes: RPointCloud: A Mass Cytometry Example (source, R code) RPointCloud: Regulatory T Cells (source, R code) Package source: RPointCloud_0.8.0.tar.gz Windows binaries: r-devel: RPointCloud_0.8.0.zip, r-release: RPointCloud_0.8.0.zip, r-oldrel: RPointCloud_0.8.0.zip macOS binaries: r-release (arm64): RPointCloud_0.8.0.tgz, r-oldrel (arm64): RPointCloud_0.8.0.tgz, r-release (x86_64): RPointCloud_0.8.0.tgz, r-oldrel (x86_64): RPointCloud_0.8.0.tgz Old sources: RPointCloud archive Please use the canonical form https://CRAN.R-project.org/package=RPointCloud to link to this page.
{"url":"http://www.stats.bris.ac.uk/R/web/packages/RPointCloud/index.html","timestamp":"2024-11-08T11:30:15Z","content_type":"text/html","content_length":"8113","record_id":"<urn:uuid:e78b1893-86e3-4e1a-a1c4-f3cda925e174>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00863.warc.gz"}
Can decimals such as 0.23 and 0.9 be rational numbers? | Socratic Can decimals such as 0.23 and 0.9 be rational numbers? 1 Answer Yes, $0.23$ and $0.9$ are rational numbers. $0.23 = \frac{23}{100}$ $0.9 = \frac{9}{10}$ Since both $0.23$ and $0.9$ fulfills: "In mathematics, a rational number is any number that can be expressed as the quotient or fraction $\frac{p}{q}$ of two integers, a numerator p and a non-zero denominator q." From source: Rosen, Kenneth (2007). Discrete Mathematics and its Applications (6th ed.). New York, NY: McGraw-Hill. pp. 105, 158–160. ISBN 978-0-07-288008-3 Impact of this question 13901 views around the world
{"url":"https://api-project-1022638073839.appspot.com/questions/can-decimals-such-as-0-23-and-0-9-be-rational-numbers#606207","timestamp":"2024-11-02T21:16:44Z","content_type":"text/html","content_length":"33217","record_id":"<urn:uuid:adb5722e-9fc8-41ad-8bd5-a0c8c2fc67b2>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00376.warc.gz"}
Revision Cards AS Level Chemistry Cards and Calculation Booklets to Help Improve Your Maths for Chemistry The following text books are a good aid to your learning. They have been recommended by students and teachers. The books are separated into the relevant exam boards. All the books cover the new specification (from 2015). Click on the books to find out more Click on the books to find out more Click on the books to find out more
{"url":"https://www.a-levelchemistry.co.uk/books.html","timestamp":"2024-11-05T15:21:39Z","content_type":"text/html","content_length":"50061","record_id":"<urn:uuid:833e067c-26ab-4dde-99a6-83df4704f8d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00217.warc.gz"}
Greedy Algorithms: Grab What's Best and Move On Sometimes in life, the best plan is to have no plan at all - just seize the moment and hope for the best. In the world of algorithms, this impulsive strategy has a name: greedy algorithms. They're the quick thinkers, the ones who make the best immediate choice without worrying about future consequences. It's like grabbing the last slice of pizza without pondering if someone else wanted it—you see it, you want it, you take it. So What Are Greedy Algorithms Anyway? Imagine you're hiking up a mountain with no map, and at every fork, you choose the path that goes upward the steepest. You're making the best immediate choice to reach the top, right? That's a greedy algorithm in action—making the optimal local choice at each step with the hope that these choices lead to the global optimum. Key Traits of Greedy Algorithms: • Local Optimality: At each step, you choose what's best right now, not worrying about the long-term implications. • Irrevocability: Once a choice is made, there's no going back. No regrets, no second-guessing. • Simplicity and Efficiency: Greedy algorithms are usually straightforward and fast because they don't consider every possible option—just the best one at the moment. How to Identify Greedy Problems Spotting a greedy problem is like recognizing a familiar face in a crowd—it just clicks once you know what to look for. 1. Optimal Substructure: The problem can be broken down into smaller subproblems, and the optimal solution to the main problem includes optimal solutions to the subproblems. 2. Greedy Choice Property: Making the best local choice leads to the best global solution. If grabbing the best option now doesn't ruin your chances later, you've got a greedy situation. 3. Problem Hints: Look out for problems involving minimization or maximization—like finding the least number of coins or the shortest path. 4. Sorting Helps: If arranging data in a particular order (like ascending or descending) seems beneficial, a greedy approach might be your go-to. Classic Examples: • Activity Selection Problem: Choosing the maximum number of activities that don't overlap in time. • Huffman Coding: Creating an optimal binary tree for data compression based on character frequencies. • Prim's and Kruskal's Algorithms: Finding the minimum spanning tree in a connected, weighted graph. • Dijkstra's Algorithm: Finding the shortest path from one node to all other nodes in a graph with non-negative edge weights. How to Approach Greedy Problems So, you've got a hunch that your problem is ripe for a greedy solution. Here's how to tackle it: 1. Understand the Problem Inside Out • Dive Deep: Don't just skim the problem statement. Understand every nuance. • Identify the Goal: What exactly are you optimizing for? Time? Cost? The number of steps? 2. Check for Greedy Properties • Optimal Substructure: Can the problem be broken down into smaller, similar problems? • Greedy Choice Property: Does making the best immediate choice at each step lead to the optimal overall solution? 3. Define Your Greedy Strategy • Determine the Best Local Choice: What decision can you make right now that seems the most advantageous? • Stay Consistent: Ensure that your method for making choices is applied uniformly throughout. 4. Prove It Works • Mathematical Proof: If possible, prove that your greedy strategy always leads to an optimal solution. • Counterexamples: Try to find a scenario where your greedy approach might fail. If you can't, that's a good sign. 5. Implement Efficiently • Keep It Simple: Greedy algorithms are about simplicity. Don't overcomplicate the code. • Optimize: Use data structures that make your algorithm run faster, like heaps or priority queues if needed. 6. Test Thoroughly • Edge Cases: Test your algorithm with unusual or extreme inputs. • Compare Results: If possible, compare your greedy solution's output with a known optimal solution. When Greedy Algorithms Don't Cut It Greedy algorithms are not the silver bullet for every problem. • Counterexamples Exist: Some problems might seem to fit the greedy mold but have specific cases where greedy choices don't lead to an optimal solution. • No Greedy Choice Property: If making the best immediate choice can mess up your overall goal, you need a different approach—like dynamic programming. Additional Resources Here are some resources I personally find invaluable, and I think you'll benefit from them too. freeCodeCamp’s YouTube video First up, if you're more of a visual learner, check out this fantastic video by freeCodeCamp: Leetcode problems Theory is just the appetizer; practice is the main course: This resource on LeetCode is a goldmine for anyone looking to sharpen their skills. It breaks down problems into categories, making it easier for you to focus on specific types of greedy problems. Final Thoughts Algorithms are like life choices—sometimes the straightforward path is the best one. Greedy algorithms embrace that philosophy, focusing on the here and now to build a better future, one optimal choice at a time. So next time you're faced with a problem, consider getting a little greedy. It might just lead you to the optimal solution faster than you think. Happy coding! 🥂
{"url":"https://vmhatre.com/greedy-algorithms-grab-whats-best-and-move-on","timestamp":"2024-11-06T01:46:07Z","content_type":"text/html","content_length":"169948","record_id":"<urn:uuid:d195df4e-c57a-4e49-b7d1-77252e129340>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00042.warc.gz"}
lag_: Abstract base class for inflation swaps. - Linux Manuals (3) lag_ (3) - Linux Manuals lag_: Abstract base class for inflation swaps. QuantLib::InflationSwap - Abstract base class for inflation swaps. #include <ql/instruments/inflationswap.hpp> Inherits QuantLib::Instrument. Inherited by YearOnYearInflationSwap, and ZeroCouponInflationSwap. Public Member Functions InflationSwap (const Date &start, const Date &maturity, const Period &lag, const Calendar &calendar, BusinessDayConvention convention, const DayCounter &dayCounter, const Handle< YieldTermStructure > the constructor sets common data members virtual Rate fairRate () const =0 Date baseDate () const Period lag () const Date startDate () const Date maturityDate () const Calendar calendar () const BusinessDayConvention businessDayConvention () const DayCounter dayCounter () const Protected Attributes Date start_ Date maturity_ Period lag_ Calendar calendar_ BusinessDayConvention bdc_ DayCounter dayCounter_ Handle< YieldTermStructure > yieldTS_ Date baseDate_ Detailed Description Abstract base class for inflation swaps. Inflation swaps need two term structures: a yield curve, and an inflation term structure (either zero-based, i.e., the rate $ r(t) $ equals $ I(t)/I(t_0) - 1 $ where $ I $ if the index and $ t_0 $ is the base time, or year-on-year, i.e., $ r(t) = I(t)/I(t_p) - 1 $ where the previous time $ t_p $ is defined as $ t $ minus one year.) Member Function Documentation Date baseDate () const The inflation rate is taken relative to the base date, which is a lag period before the start date of the swap. Generated automatically by Doxygen for QuantLib from the source code.
{"url":"https://www.systutorials.com/docs/linux/man/3-lag_/","timestamp":"2024-11-08T08:14:40Z","content_type":"text/html","content_length":"8955","record_id":"<urn:uuid:88500374-55bf-484f-8ab2-87d2c7106bd3>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00853.warc.gz"}
Closed Form Solution: Simple Definition, Examples Calculus Definitions > Closed Form Solution A closed form solution is an expression that gives an exact result with a finite amount of data. An example would be a definite integral, which gives the area under a curve. The exact definition depends on the context, but it’s generally agreed that these solutions must have commonplace quantities: 1. A finite number of symbols (e.g. x, y, z), 2. Only the following operators: + – * ÷ 3. Basic functions like: Most (but not all) mathematicians also include special functions like the gamma function in this list. The precise definition for closed form solution is what mathematicians Jonathan Borwein (“Dr. Pi“) and Richard Crandall call ” … a community-varying and epoch-dependent notion” where no one has the “right” answer. In other words, the definition depends on who you’re talking to or where you’re reading it. Check with the text you are using (or with your instructor) to clarify how they are shaping their particular definition. Examples of Closed Form Solutions One simple example of a function with a closed form solution is the quadratic equation, which uses only three coefficients (a, b, c) and one variable (x): ax^2 + bx + c = 0. Its solutions can be expressed by simple terms like addition, subtraction, division, multiplication, and square roots. The solutions can be expressed as: Cubic functions and quartic functions are also functions with closed form solutions. Some polynomial functions don’t have a closed form solution. In general, equations with degree of more than 4 either don’t have a solution, or can’t be solved with simple operations. For example, Hermite polynomials have solutions involving modular elliptic functions (which Wikipedia calls “a scarcely recognizable form”). Alternative Definitions of Closed Form Solutions Alternate definitions of closed form solutions have been proposed to deal with the problem of the ‘fuzzy definition’ usually given for closed form solution. For example, in a 1999 article in The American Mathematical Monthly, Timothy Chow suggests that the criteria of importance is whether functions are closed under exp and log; and defines the field EL as the intersection of all subfields which are closed under those two functions. This is a useful criteria, but has not been widely accepted as a definition for closed form. Borwein, J. & Crandall, R. (2010). Closed Forms: What they are and why we care. Retrieved from https://www.carma.newcastle.edu.au/jon/closed-form.pdf on January 14, 2017. Chow, Timothy Y. What is a Closed-Form Number? The American Mathematical Monthly Vol. 106, No. 5 (May, 1999), pp. 440-448. Retrieved from http://www.jstor.org/stable/2589148 January 14, 2017. Comments? Need to post a correction? Please Contact Us.
{"url":"https://www.statisticshowto.com/closed-form-solution/","timestamp":"2024-11-13T15:05:39Z","content_type":"text/html","content_length":"65788","record_id":"<urn:uuid:78042103-6da1-40d7-bc12-7d9e15033afb>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00395.warc.gz"}
Maths Revision • Created by: raaaaw • Created on: 21-09-18 19:55 IS A PART OF THE WHOLE TWO QUANTITIES ARE IN THIS IF ONE IS ALWAYS THE SAME MULTIPLE OF THE OTHER 1 of 10 2 of 10 SIMPLY {RATIO} 3 of 10 THE RATIO OF THE LENGTH OF AN OBJECT IN A SCALE DRAWING TO THE LENGTH OF THE REAL OBJECT 4 of 10 5 of 10 6 of 10 7 of 10 8 of 10 9 of 10 10 of 10 Other cards in this set Card 2 View more cards No comments have yet been made
{"url":"https://www.prod.gr.cuttlefish.com/revision-tests/maths-revision-29?game_type=flashcards","timestamp":"2024-11-02T08:27:37Z","content_type":"text/html","content_length":"42293","record_id":"<urn:uuid:6a0e4734-d8d2-4da4-93f1-54f996de60c5>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00615.warc.gz"}
3 Ways to Make Your Life Easier as a Geometry Teacher 3 Ways to Make Your Life Easier as a Geometry Teacher Without More Geometry Worksheets Teaching Geometry is one of the most difficult subjects to teach. Not because of the reasons you’d think though. It’s not because most kids are told how hard it is before they ever take it, it’s not because you have to teach kids proofs, it’s not because its different than all of the other math classes due to it’s more visual concepts. It’s because it’s one of the most time consuming core subject areas there is to prepare for. Sure, you could print work sheets from a book coast by and cover the content, but if you are teaching the most engaging real world math class in your school shouldn’t it be taught that way? With that said here are 3 ways to be that amazing teacher and not lose your mind because of how much of your free time you have to give up in preparation! 1.) Join Groups Join communities of other geometry teachers online that love to share great ideas for teaching each lesson instead of scouring or Google. You don’t have to reinvent the wheel with every lesson. Together across the country there have been geometry teachers that have created mind blowing lesson for every topic. Your time doesn’t need to be spent finding them. You should be a part of a Geometry Teacher Forum to get these great ideas. If you are one of those teachers who created a mind blowing lesson don’t be greedy share it with everyone! 2.) Stop Wasting Time Creating Geometry Worksheets Stop creating your own geometry worksheets, tests, quizzes, lesson plans, bell work assignments, PowerPoint slides, and anything else you can think of. Think about it like this. How long does it take you to create one test or geometry worksheet that is common core aligned? We all know the answer and are afraid to admit it. TIME IS MONEY! You can purchase full lesson plans with every piece of content you could possibly ever need. What dollar amount is worth getting to spend two extra hours per night with your children, getting to go home right after school, being able to go on a date with your spouse, being able to focus on your son or daughter’s game rather than stressing the entire time about when you are going to prepare your lessons for tomorrow? I think you get the point. Spend the money to save the time. It will be the best decision you’ve ever made for your stress as a teacher and you will wish you would have done it long ago! 3.) NO SNOW BALLS When you feel you are starting to get behind do not let things snowball. Like we stated before. There have been many teachers before you that have created great mind blowing lessons. Guess what? They also have video taped them and made them available for the world to see. I know what you are thinking, “then I have to find them, more research.” No, you just make sure the plans you purchase include all of that. Here is what we recommend! Do not be afraid to put on a video of someone else leading the lesson and get caught up while they are engaged. Use self teaching technologies such as Khan Academy. Your students love these technologies anyways. If you are stressed about something else or not fully engaged with your students are you really providing the best lesson to them anyway? There are literally millions of places you can find materials on-line. Do not get sucked into that time trap either! Below are some Free Geometry Worksheets, Guided Teacher Notes, Lesson Plans, On-line Activities, Tests, Quizzes, and more.
{"url":"https://geometrycoach.com/3-ways-to-make-your-life-easier-as-a-geometry-teacher-without-more-geometry-worksheets/","timestamp":"2024-11-08T08:34:32Z","content_type":"text/html","content_length":"142845","record_id":"<urn:uuid:7f2a8d9d-19fd-4ec0-8ff6-e38d6f01f01c>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00752.warc.gz"}
HESI A2 HESI A2 Quizlet Math 1. What is the result of adding 7 2/3 and 5 5/12? • A. 11 1/12 • B. 12 1/4 • C. 13 1/12 • D. 14 1/6 Correct answer: C Rationale: To add mixed numbers, convert them to improper fractions. 7 2/3 = 23/3 and 5 5/12 = 65/12. Adding them results in 233/12. Converting this improper fraction back to a mixed number gives 19 5/12, which simplifies to 13 1/12. Therefore, the correct answer is 13 1/12.\nChoice A (11 1/12) is incorrect as it does not represent the correct sum of the given mixed numbers. Choice B (12 1/4) is incorrect as it is not the result of adding 7 2/3 and 5 5/12. Choice D (14 1/6) is incorrect as it is not the correct sum of the provided mixed numbers. 2. If the elephant weighed 3 tons, how many pounds did it weigh? • A. 6000 pounds • B. 5000 pounds • C. 3000 pounds • D. 7000 pounds Correct answer: A Rationale: To convert tons to pounds, you multiply by 2000 because 1 ton is equal to 2000 pounds. Therefore, 3 tons equal 3 × 2000 = 6000 pounds. The correct answer is 6000 pounds. Choice B (5000 pounds) is incorrect because it miscalculates the conversion. Choice C (3000 pounds) is incorrect as it represents the weight of 1 ton, not 3 tons. Choice D (7000 pounds) is incorrect as it overestimates the weight of 3 tons. 3. What substance makes up the pads that provide support between the vertebrae? • A. bone • B. cartilage • C. tendon • D. fat Correct answer: B Rationale: Cartilage is the correct answer as it is the substance that makes up the pads between the vertebrae. Cartilage provides cushioning and support between the bones of the spine, allowing for flexibility and preventing friction between the vertebrae. Bone (choice A) is incorrect as it forms the structure of the vertebrae, not the intervertebral discs. Tendon (choice C) is incorrect as it connects muscle to bone and is not found between the vertebrae. Fat (choice D) is incorrect as it is not the substance that makes up the pads between the vertebrae. 4. Multiply 0.05 by 22 and express the result as a decimal: • A. 1.1 • B. 0.11 • C. 0.011 • D. 0.0011 Correct answer: C Rationale: When multiplying 0.05 by 22, you get 1.10. To express this result as a decimal, you move the decimal point two places to the left since there are two total decimal places in the question (one in 0.05 and none in 22), resulting in 0.011. Choice A (1.1) incorrectly adds a decimal place, choice B (0.11) incorrectly moves the decimal point only one place, and choice D (0.0011) adds an extra zero. 5. What does 'Parameter' mean? • A. A constant variable • B. A characteristic or constant factor • C. A measurable limit • D. A calculated risk Correct answer: B Rationale: The correct answer is B: 'A characteristic or constant factor.' In the context of systems or experiments, a parameter is a fixed element that influences the behavior or outcome. It is not a variable like in choice A, which can change. Choice C, 'A measurable limit,' is incorrect as a parameter is not necessarily a physical limit but a defining factor. Choice D, 'A calculated risk,' is unrelated to the definition of a parameter. Similar Questions Access More Features HESI A2 Basic $49/ 30 days • 3,000 Questions with answers • 30 days access HESI A2 Premium $99/ 90 days • Actual HESI A2 Questions • 3,000 questions with answers • 90 days access
{"url":"https://nursingelites.com/questions/7-23--5-512--","timestamp":"2024-11-01T22:32:19Z","content_type":"text/html","content_length":"60715","record_id":"<urn:uuid:2383e33b-3f1f-4f7b-8657-58d6817127a0>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00559.warc.gz"}
Quests and Questions I’ll start off with some comments on why quadratic forms are a natural type of object to want to classify. Then I’ll review well-known classifications of the quadratic forms over I’ll actually just focus on nondegenerate quadratic forms. A quadratic form An important tool will be the fact that quadratic forms are essentially the same thing as symmetric bilinear forms. An In coordinates, quadratic forms and symmetric bilinear forms can both represented by symmetric matrices. A symmetric matrix Two quadratic forms are equivalent if you can turn one into the other with an invertible linear transformation. That is, Why quadratic forms? Quadratic forms are a particularly natural type of object to try to classify. When I say “equivalence” of some sort of linear algebraic object defined on a vector space For example, for large The space of (not necessarily symmetric) bilinear forms is Classifying objects is also not too interesting if there’s very few of them for very straightforward reasons, and this tends to happen if there’s far fewer than To get to the happy medium, where there’s a nontrivial zero-dimensional space of things up to equivalence, you want to start with some space of objects that’s just a bit less than Classification of quadratic forms over certain infinite fields Let’s start with Theorem (Sylvester’s law of inertia): For an Proof: First we’ll show that every quadratic form can be diagonalized. We’ll use induction to build a basis Now that we’ve diagonalized The ordering of the diagonal entries doesn’t matter because we can permute the basis vectors so that all the Theorem: All Proof: As in the real case, we can diagonalize any quadratic form; the proof of this did not depend on the field. But in Note that this proof works just as well over any quadratically closed field not of characteristic 2. Over other fields, things get a bit complicated. One class of fields you might want to consider is the p-adic numbers, and there is a known classification (called the Hasse invariant) of quadratic forms over Over the rationals, things are even worse. The best result about classifying rational quadratic forms we have is the Hasse-Minkowski theorem: Over any finite extension of But over finite fields, things go back to being relatively straightforward. Classification of nondegenerate quadratic forms over finite fields of odd characteristic The part of the proof of Sylvester’s law of inertia that shows that every nondegenerate real symmetric form can be diagonalized with diagonal elements Lemma 1: Let Proof: As in the proof of Sylvester’s law of inertia, with Lemma 2: Let Proof: If invertible symmetric matrices What’s going on here is that there’s a notion of the determinant of a bilinear form that doesn’t depend on a matrix representing it in some basis. For every Theorem: Up to equivalence, there are exactly two Proof: Lemma 2 implies that invertible symmetric matrices of square determinant and invertible symmetric matices of nonsquare determinant represent inequivalent quadratic forms. It remains to show that any two invertible symmetric matrices with either both square determinant or both nonsquare determinant represent equivalent quadratic forms. The squares in Note, as a consequence, if we count the degenerate quadratic forms as well, there are Given an First, note that we don’t have to check every quadratic form individually, because if two quadratic forms are equivalent, then for any given scalar And we don’t have to check all And it turns out that in odd dimensions, for any nonzero quadratic form We can compute these offsets It turns out that it matters whether If you state and solve the recurrence, you should find that: It turns out that counting how many vectors a quadratic form assigns each value is useful for another counting problem: Now we can determine the order of the automorphism group of a quadratic form. If you evaluate these products, you should get: You may notice that, in odd dimensions, the size of the automorphism group of a quadratic form does not depend on its determinant. In fact, the groups are the same. This is because multiplication by a nonsquare scalar sends a quadratic form of square determinant to a quadratic form of nonsquare determinant, and does not change its automorphism group. As a sanity check, By the orbit-stabilizer theorem, these count the number of quadratic forms of each equivalence class. The sum of the sizes of the two equivalence classes is the total number of (individual, not up to equivalence) nondegenerate quadratic forms on a vector space, or equivalently, the number of invertible symmetric matrices. This sum is The leading terms of these are Fields of well-defined Euler characteristic This might seem silly, but it’s deeper than it sounds. This notion of “size” is called Euler characteristic (this is actually different from what algebraic topologists call Euler characteristic, though it’s the same for compact manifolds). A topological space Euler characteristic has a lot in common with sizes of finite sets, and is, in some ways, a natural generalization of it. For starters, the Euler characteristic of any finite set is its size. Some familiar properties of sizes of sets generalize to Euler characteristics, such as In fact, all of the counting problems we just did over finite fields of odd size applies just as well to fields of well-defined odd Euler characteristic. There are only two infinite fields that have an Euler characteristic: Recall that in the counting problems in the previous section, we needed to know whether or not Anyway, let’s start with positive-definite real quadratic forms (i.e. those only taking positive values). If For negative-definite quadratic forms, of course, the positive and negative cases switch. Now let’s try the indefinite quadratic forms. Recall that for every nondegenerate quadratic form If you remove one point from In comparison, if you take our earlier formula for the sizes of these level sets over Now let’s look at complex quadratic forms. Since every complex number is square and In the odd-dimensional case, I’ve left out the nonsquare case, and relabeled the case where It’s easy to verify that the zero-sets of complex quadratic forms have Euler characteristic The Euler characteristics of the other level sets of complex quadratic forms can be checked by induction. A 3. For some Thus, summing these up, we get There are some apparent disanalogies between finite fields and infinite fields with Euler characteristics. For instance, it is not true that exactly half of nonzero complex numbers are squares, at least in the sense that And, while finite fields of odd characteristic have two nondegenerate The oversupply of equivalence classes of real quadratic forms is a little subtler. Our analysis of nondegenerate quadratic forms over finite fields of odd characteristic predicts that any two nondegenerate real quadratic forms whose determinants have the same sign should be equivalent, and this is not the case. To address this, let’s look at the heap of isomorphisms between two nondegenerate quadratic forms whose determinants have the same sign. A heap is like a group that has forgotten its identity element. The heap of isomorphisms between two isomorphic objects is the underlying heap of the automorphism group of one of them. In particular, if the automorphism group of some object is a Lie group, then the heap of isomorphisms between it and another object it is isomorphic to is homeomorphic to the automorphism group, and thus they have the same Euler characteristic. In the finite case, this is just saying that the heap of isomorphisms between two isomorphic objects has the same size as the automorphism group of one of them. So when we computed the sizes of automorphism groups of nondegenerate quadratic forms over High Ground One day the ground shook. It was scary, but it was over soon, and none of us were injured. Two days later, we saw water in the distance. At first we thought it was a mirage, but later in the day, not only was it still there, it had grown. Our first instinct, once we realized it was real, was to migrate towards it; it would be a godsend if it’s fresh, so we figured we’d better check. But it kept getting closer, and not just because we were going closer to it. We soon realized the implications, and turned around and fled for the mountains. The mountains were far away; we weren’t sure exactly how far away, as neither my husband nor I had ever been to the mountains. We walked just about as far as we could each day, but we could see that the water was catching up. And then, one day, we woke up before dawn, because we were wet. We all got up and started walking again, but first I tried a taste of the water. It was salty. We reached dry ground just after dawn, but the water caught up to us again around noon. By dusk, it was up to our ankles. We’d moved especially quickly that day, and were exhausted, but we couldn’t rest, because we’d drown, so we pressed on through the night. Our son was too exhausted to keep walking, so my husband carried him on his back while he rested through the night. The water kept rising through the night and the next day. By the end of the day, I felt like I could barely go on. It wasn’t just the sleep deprivation and overexertion, but also the pain in my feet from them being immersed in cold water the whole time, and worst of all, the dehydration. It was impossible to get any fresh water, because any source of fresh water we could reach got drenched in saltwater before we could reach it. We couldn’t go on much further, and could only hope we wouldn’t have to; the mountains were noticeably closer now, and perhaps we’d reach steeper ground soon, and be able to outrun the water. But for now, we had to keep walking, so we did. Our son’s sleep cycle got desynched from the sun, and my husband started carrying him on his back again in broad daylight. The water reached our knees, and I figured our son would likely have trouble walking once he woke up anyway. My husband collapsed, and our son fell into the water, woke up, and shrieked. I grabbed our son, helped him get on his feet, and then tugged on my husband’s arm to help him up. My husband stirred, but didn’t get up. I slapped him, in case that helped wake up, but he didn’t move. I abandoned him, helped our son onto my back, and kept moving, hoping my husband would get up and start moving again on his own. He didn’t. Just before dusk, I started to wonder if it was my imagination, or if the water line ahead of me was a bit closer than it was earlier. I looked down, and saw that the water wasn’t reaching my knees anymore. I hadn’t felt it because my legs were too numb. I was beating the water, now, and the mountains were so much closer. I kept walking through the night. Was I still beating the water? I thought I probably was, but I couldn’t tell for sure, because my legs were too numb, and the moon was almost new, so I couldn’t see well. I didn’t want to waste any time reaching down to check the water level with my hands. In any case, I wasn’t beating the water fast enough. I could feel that I would collapse soon, just like my husband did. My son had had a fair amount of rest, and the water might be shallow enough for him now, so I thought about letting him off and making him walk. It was tempting, but I knew it wouldn’t save me; if I stopped to put him down, I wouldn’t be able to start walking again. So I kept going, hoping that by the time I drop, we’d be close enough for my son to make it to the mountains. Cardinality and the axiom of choice If the axiom of choice is false, then there are two sets which are not the same size, but neither one of them is larger than the other. This, and similar seemingly absurd results, are sometimes used as motivation for the axiom of choice. But this is not so absurd when you unpack what it means: A set The usual reason to think this shouldn’t happen for sets is that cardinalities of sets are supposed to correspond to an intuitive notion of size. But this intuitive interpretation is not God-given; it is an interpretation that people came up with because it seemed to fit. If the axiom of choice is false, this interpretation fits less well. The axiom of choice can be viewed as saying that sets are so flexible that the possibility of fitting one inside another is limited only by their relative sizes, whereas the negation of the axiom of choice says that sets have some essential structure that can’t be adequately interpreted as size. But it gets worse. Without the axiom of choice, it’s possible to have an equivalence relation Surely this is still absurd? Again, I think it is not. It only sounds absurd because of the inappropriate language we used to describe the situation in which there’s an injection from I suspect that, to many people, “ And it shouldn’t be surprising that there could be an equivalence relation The sense in which cardinality can be about structure more general than size can become even more apparent in more austere foundations. Consider a mathematical universe in which everything that exists can be coded for somehow with natural numbers, and every function is computable. There’s a set of real numbers in this universe, which we know of as the computable real numbers: they’re coded by numbers representing programs computing Cauchy sequences that converge to them. It doesn’t really make sense to think of this universe as containing anything “bigger” than I think that it can be helpful to think of cardinalities as potentially being about inherent structure of sets rather than simply “size” even if you’re assuming the axiom of choice the whole time. Fun fact: if there’s any model of ZFC at all, then there’s a countable model. This often strikes people as absurd; ZFC asserts the existence of uncountable sets, like To give a couple concrete examples of how I visualize cardinality as being about structure: When we encounter mathematical objects of cardinality It’s interesting to note that you can forge something long ( The idea behind this post was a specific instance of the general principle that, when a result seems absurd, this doesn’t necessarily refute the foundational assumptions used to prove it, but rather means that your way of thinking isn’t well adapted to a mathematical universe in which those assumptions are true. Another example of this is that the Banach-Tarski theorem and similar results often strike people as patently absurd, but people get used to it, and one could try explaining why such results shouldn’t be seen as as absurd as they first seem, as a way of conveying intuition about what a mathematical universe in which the axiom of choice holds looks like. While I don’t find the allegedly counterintuitive things that are likely to happen without the axiom of choice compelling, this doesn’t undercut other arguments for the axiom of choice. I think the strongest is that every Terrestrial Astrogation “Signal from Bumblebee. Bogey spotted, bearing .40-.13, estimated range 300 miles,” said Sam, Killjoy’s signalman. Amelia, the pilot, quickly turned so that they’d have a good view in that general direction. While the ship was turning, Nate, the navigator, worked out exactly which direction to expect to see the bogey. Once the turn was complete, he pointed the telescope in that direction, and hunted around for it. It didn’t take long to find it, and he reported the bearing. Having bearings to the bogey from two ships rather than one gave them a more precise idea where it was; Bumblebee’s range estimate had only been a wild guess, though it had turned out not to be far off. Sam relayed the bearing back to Bumblebee with the light gun. Contrary to popular belief, the light gun, rather than Killjoy’s real gun, was the second-most energy-intensive machine on the ship, behind only the engine. Space ships are fragile, so it doesn’t take much punch to kill them, and they have no reason for their guns to pack more punch than that. Being able to yell for help is just as essential for survival in space warfare as being able to shoot. Since as far as anyone can tell, sound doesn’t transmit through space at all, that means you need others to be able to see you yelling for help, from a vast distance away. Hence the extremely powerful light gun. Bumblebee and Killjoy each took another set of bearings on the bogey, and passed them over to each other, which meant they now had enough information to determine the bogey’s velocity. Nate ran the numbers on his slide rule. “It’s on an intercept course with Bumblebee,” he said, “They’ll make contact in about 12 minutes, and I don’t think Bumblebee has any chance of running away from them.” Bumblebee was carrying some extremely valuable cargo from Habitat 6 back to Habitat 1. Habitat 6 had just been captured right after Bumblebee had departed it, so if Bumblebee and its cargo were lost, there was no going back for more. If anyone was on an intercept course for Bumblebee, there’s a good chance they knew all of this, and were targeting Bumblebee because of it. Nate continued, “There’s no way anyone else can make it there to rescue them. We can probably do it, though. Turning, uh… .83-.25 or so, and accelerating at about an eighth of a g would be a good way to start.” Amelia immediately followed those directions. Those directions were similar to her intuition, which was famously good regarding maneuvering in orbit (she had set a record for how far a trip she could take and return from alive, back in the early days of space travel when people first discovered how to get out of Habitat 1, before the math behind orbital mechanics was known), but it still generally works better to wait for input from the guy with the charts and a slide rule when possible. The course change was Nate’s signal to calculate a more precise intercept course. “Do we want to engage them simultaneously with Bumblebee, or before?” he asked. “Before,” said Amelia. The rest of the crew simultaneously cursed under their breaths at that. Normally, you want to coordinate with your squadronmates to fight enemies simultaneously, rather than one at a time. The only way it could possibly make sense to try to fight a one-on-one dual with the bogey before Bumblebee was close enough to help was if Bumblebee getting out of the fight in one piece was more than four times as important as Killjoy doing the same, a value judgement that no one in Killjoy’s crew could endorse with much enthusiasm. “Maybe push the acceleration to a sixth of a g, then,” said Nate, “and tell Bumblebee to run away.” “Bumblebee is already doing that,” said Sam. Nate got to work plotting the intercept course, and when he was done, he gave Amelia a course correction and an even higher acceleration. When they got close to the bogey, Joan, Killjoy’s gunner and engineer, fired a tracer at it, and, predictably, missed, but seeing the path of the tracer was helpful for lining up the next shot. The tracer rounds were a recent invention, having been invented during the current war, and caused an enormous improvement in gunnery accuracy outside of habitats. Their side had been on the verge of victory when the other side developed the tracers, and they would have lost if they hadn’t managed to reverse engineer the tracers and produce some themselves as soon as they did. Nevertheless, Joan missed the next couple shots as well, though the last one was close. There was a loud cracking sound. “We’re hit!” said Joan, unnecessarily. Usually, a ship falls apart instantly when it gets hit; the projectile must have just barely grazed them. Joan more usefully continued, “We’ve lost one oxygen tank and we’re losing fuel. I’ll patch up the fuel tank.” “You’ll what?” Sam asked incredulously, as Joan grabbed a bag of leak repair equipment and jumped towards the door. She unlocked the door (which was designed to seal against a habitat airlock, and was very much not designed to be opened in a vacuum), yanked it open, quickly propelled herself through it, grabbed the handle on the other side, and slammed it shut. The air inside the ship was noticeably thin by the time she’d finished that step, but not dangerously so. The oxygen tanks were automatically releasing more air, with an audible hissing sound. The exterior of the ship was not designed to be crawled across, and as such, did not have handholds. Joan had no way to get traction without being pressed against the ship. Luckily, Amelia caught on to what she was trying to do, and used maneuvering thrusters to push the whole ship towards Joan, so Joan could climb across it, and across from Joan so it would be like Joan was climbing downhill. “What’s she doing? There’s no way she’ll make it back before losing consciousness!” said Sam. “She knows,” said Nate. “But then we’ll have no way to recover her before she dies!” said Sam. “She knows,” Nate repeated. Joan reached the fuel leak, and the rest of the crew watched her work through the window, while Amelia continued to push the ship towards her with maneuvering thrusters so she wouldn’t float away. Joan finished patching the leak just before passing out. “That was lucky,” Nate commented, “I counted 16 seconds since she opened the door, though I might have been off slightly because I was distracted by Sam talking. That’s longer than people usually last, and it was a very efficient patch job.” Meanwhile Killjoy was tumbling a bit as a result of Amelia’s hasty maneuvering, and Amelia was busy straightening the ship out. “Where can we get to now?” Amelia asked after stopping the tumbling, in a grim tone of voice that suggested she had a hunch about the answer. Nate looked around with the telescope to get a good sense of their position and velocity. “While I’m figuring that out, you’ll want to boost us .45-.40 by 50 feet per second,” said Nate. That would prevent them from getting too far away from where the habitats tended to orbit. Nate checked the time by pointing his telescope at Earth’s terminator line, checked the fuel gauge, checked the charts for where all the known habitats should be, and started running the numbers for all the habitats it was even remotely plausible they could make it to, including enemy-controlled habitats where they could surrender. This took several minutes. “Nowhere,” he finally announced. “Okay, let’s look around for a new habitat. Sam, ask Bumblebee, if they’re still alive, and whoever else is in signal range, Queen of the Angels, maybe, to help us out,” said Amelia. “Will do,” said Sam, “Bumblebee is alive, by the way. They just messaged that they destroyed the bogey.” Nate and Amelia started looking around for undiscovered habitats, and Sam joined them once he was done sending the message. Searching for habitats was tricky, because from a distance, a habitat looks like a dim star when it’s in sunlight (and is nearly invisible when shadowed), so even when there’s a habitat in plain view, it doesn’t stand out from its surroundings very well. But they all knew their constellations decently well, so they’d know when they saw something out of place. “Over there! I see something!” Sam said excitedly, pointing, after they’d been looking for several minutes. “You’re pointing at Mercury,” said Nate, “Is there something else next to it?” “No. Uh, I guess it’s just Mercury,” said Sam, dejected. It was almost an hour later that Sam again saw something. “Please don’t tell me that’s Venus over there,” he said. Nate took a look through Sam’s telescope. “Hm, no, that’s not anything I know about,” he said, “Looks promising. Send the bearing over to Bumblebee and Queen of the Angels; see if they can find it.” From looking at how the unidentified object was moving, Nate became convinced it must be a habitat, even before Queen of the Angels replied with a bearing on an unidentified object that was a potential match for the one Killjoy had reported. After a couple more bearing reports on the object from Queen of the Angels, Nate had a pretty precise picture of the presumed-habitat’s orbit. “We can definitely make it, but it will take a while. Bring us to .75-.25 at a twenty-fifth of a g.” “Okay. Now, when you say ‘we can definitely make it’, do you mean we have enough fuel and oxygen to make it there, or just that we have enough fuel?” asked Amelia, while following Nate’s directions. “Oh shit, uh, hold on. Let’s see if I can make this work,” said Nate. After recalculating, he said, “Okay, we can’t get there before the oxygen tanks nominally run out, but the time the oxygen tanks are rated for is less than the amount of time before we lose consciousness, and I’m not sure exactly how much less, so I’ll just give you the fastest route we can do with the fuel we have and hope for the best.” Amelia returned to the controls and brought Killjoy to the new course Nate had planned out, while Nate jotted down notes about what the rest of the course should look like. When they were done, the three of them joined hands and Amelia led them in a slow-breathing meditation exercise. After a few minutes of this, Amelia stopped, and said, “Sleep.” Sam and Nate strapped themselves in and tried to sleep, and Amelia dimmed the lights for them. It wasn’t easy falling asleep when you didn’t know if you were going to wake up again, even when knowing that falling asleep sooner would make that more likely. But they both eventually managed; it helped that it had been a long day. Amelia continued meditating, while being careful not to fall asleep so that she could keep the ship on course. It was unclear to her whether they were going to make it. The possibility occurred to her of throwing someone out the door to conserve oxygen, though she quickly rejected this idea. There were many drawbacks: She and Nate needed to stay, because they needed to take shifts piloting, since the journey would take too long for one of them to do it, and Sam didn’t know how to fly. Sam needed to stay, because he was the only one of them with any wilderness survival skills worth speaking of, so Amelia and Nate might not fare very well on their own even if they did make it to the habitat. Air escaping through the door as someone gets tossed through it would limit how effective an oxygen-conserving measure it would be. And, of course, she greatly preferred all three of them making it. About a third of the way through the trip, Amelia woke Nate up for a shift, and went to sleep herself. About three-fourths of the way through, He woke her up again to take the last shift, and went to sleep again himself. She made it most of the rest of the way. The oxygen tanks were long since empty, and she was very, very tired. She entered a confused, dreamlike state, and then jolted herself back into reality. Had she fallen asleep? She didn’t know. Was Killjoy in the same place it had been when she’d last been paying attention? That one was supposed to be easy, but she didn’t know. A wave of panic washed over her as she realized how disoriented she was, and then she remembered where the habitat was supposed to be, she looked in that direction, and there it was. She could even make out its toroidal shape through the telescope. So close. How much longer? It took her a while to think of a way to answer that question. She looked for Earth’s terminator line, and then looked at Nate’s notes on their course. Then she looked back at Earth’s terminator line again and wrote down the time so she wouldn’t immediately forget it again. She hadn’t missed any maneuvers, and there was just over 20 minutes to go. She wasn’t going to make it. Maybe Nate could make it, if he had a higher tolerance for oxygen deprivation than she did, she thought. Of course, maybe not, and maybe even if Nate could stay awake long enough he wouldn’t be able to dock at the habitat, since they were cutting it down to the wire on fuel and he wasn’t an experienced pilot. But it was a risk she’d have to take, because she knew she wasn’t going to make it Amelia pushed herself over to where Nate was strapped in, grabbed him, and shook. He didn’t wake up. She punched him, and he stirred. She punched him harder. Nate awoke, they made eye contact, and then Amelia passed out. Nate felt groggy as hell, and part of him wanted to just go back to sleep. But he understood what was happening. He slowly unstrapped himself, and made his way to the controls. It took him longer than it should have to get himself oriented, but soon enough he worked out where in space Killjoy was, and when. They were a tiny bit off course. He worked out how to correct for this, reached for the controls, and hesitated. He was forgetting something; what was it? He looked behind him. Aha! Amelia was unconscious and not strapped in. She could get thrown across the ship and get injured if he adjusted the thrusters. He went over and strapped her in before returning to the controls, orienting himself again (he’d already forgotten which direction they were off course in), and correcting their course. The habitat drew closer, to the point where Nate could see its toroidal shape with naked eye, and then to the point where he could see that it was large. He needed to make several more course corrections to make sure that Killjoy would come to a stop just in front of the habitat’s airlock. He was getting nauseous, despite usually being pretty impervious to spacesickness. His terminal maneuvering was slightly sloppy, and by the end, the engine was sputtering and not producing as much thrust as it was supposed to, but he made it. He fired the docking harpoons and reeled them in, successfully sealing Killjoy’s door against the habitat’s airlock. He went to the door, opened it, opened the airlock behind it, and took in a deep breath of fresh air. Uniqueness of mathematical structures This post is an introduction to model theory, of sorts. Occasionally I get asked what model theory is, and I generally find it quite difficult to give someone who doesn’t already know any model theory a good answer to this question, that actually says anything useful about what model theory is really about without leaving them hopelessly lost. This is my attempt to provide a real taste of model theory in a way that should be accessible to a math grad student without a background in logic. Warm-up exercise Let’s say I make a graph with the following procedure: I start with a countably infinite set of vertices. For each pair of vertices, I flip a fair coin. If the coin lands heads, I put an edge between those two vertices; if the coin lands tails, no edge. Now you make another graph in a very similar manner. You also start with a countably infinite set of vertices. But instead of flipping a coin, you roll a fair standard six-sided die for each pair of vertices. If the die comes up 6, you put an edge between those two vertices; if it comes up anything from 1 through 5, no edge. What is the probability that these two graphs are isomorphic? For the numerical answer, paste “Gur zhygvcyvpngvir vqragvgl” into https://rot13.com/. An explanation will appear later in this post. There are several cases in which we can identify a mathematical object up to isomorphism with a list of first-order properties it satisfies (I’ll tell you what that means in a sec) and some data about cardinality. Here’s a couple examples: All countable dense linear orders without endpoints are isomorphic. Any two algebraically closed fields of the same characteristic, which have transcendence bases of the same cardinality, are isomorphic. It turns out that the possibility of uniquely specifying a mathematical structure in this way corresponds to interesting structural properties of that structure. First, the basic definitions: A first-order language consists of a set of relation symbols, each of which is labeled with a number representing its arity (number of inputs it takes), a set of function symbols, each of which is also labeled with a number representing its arity, and a set of constant symbols (which could also just be thought of as 0-ary function symbols). For example, the language of linear orders has one binary relation A first-order structure in a given language is a set We can compose function symbols, constant symbols, and variables into ways of pointing to elements of a structure, called terms. We have as many variables as we want, and they are terms. Constant symbols are terms. And for each n-ary function symbol A first-order formula is a way of actually saying things about a first-order structure and elements of it represented by variables. If A first-order formula with no free variables is called a sentence. These are true or false statements about a first-order structure. Many types of mathematical objects are defined by listing first-order sentences that are true of them. For instance, a linear order is a structure with a First-order sentences can tell us a lot about a structure, but not everything, unless the structure is finite. Löwenheim–Skolem theorem: Given a countable set of first-order sentences (in particular, any set of sentences if the language is countable), if there is any infinite structure in which they are all true, then there are first-order structures of every infinite cardinality in which they are all true. This is why the uniqueness results all have to say something about cardinality. You might also think of some examples of ways to identify an infinite mathematical object up to isomorphism with a list of axioms without saying directly anything about cardinality, but in all such cases, you’ll be using an axiom that isn’t first-order. For instance, all Dedekind-complete ordered fields are isomorphic to the reals, but Dedekind-completeness isn’t a first-order sentence. Same goes for any way of characterizing the natural numbers up to isomorphism that says something like “every set of natural numbers that contains 0 and is closed under successor contains all of the natural numbers”. Countable structures Let’s go back to the example of countable dense linear orders. If you don’t know the proof that all countable dense linear orders are isomorphic, here it goes: suppose we have two countable dense linear orders, Now let’s get back to the warm-up exercise. A graph can be viewed as a first-order structure whose elements are the vertices, with a single binary relation (the big conjunction before “ This is enough for us to construct an isomorphism, using essentially the same proof as for countable dense linear orders. Since we each started with countably many vertices, we can label each of our vertices with natural numbers, and then iteratively match the next available unmatched vertex in one graph to a vertex on the other, alternating between which graph we take the next available unmatched vertex from on each step, just like before. On each step, only finitely many vertices have been matched. The new vertex shares edges with some of the already matched vertices and doesn’t share edges with some others. We need to match it with a vertex in the other graph that shares exactly the same pattern of edges with previously matched vertices. And we know that somewhere in that graph, there must be such a vertex. So we can match the new vertex and keep going, and the bijection we get at the end preserves the edge relation, and is thus an isomorphism. For the general argument that these are both special cases of, we’ll need the concept of a type (not to be confused with the identically-named concept from type theory). Given a first-order structure In both of these cases, for any finite set of elements Theorem: Let That is, our ability to specify a countable structure (up to isomorphism) by its first-order properties corresponds exactly to the condition that there are only finitely many different behaviors that elements of the structure can have in relation to any given finite subset. The proofs that all countable dense linear orders without endpoints are isomorphic and that all countable random graphs are isomorphic look the same because they both follow the proof of this theorem, which goes like so: Suppose there are only finitely many types over Lemma: If Proof: Let Armed with this lemma, we can prove the theorem. Let As it turns out, the converse of the theorem is also true. Given a set of first-order sentences for which there is, up to isomorphism, only one countable model, all models have only finitely many types over any finite list of elements. Whenever there’s infinitely many types, there will be some types (which cannot be specified by a single formula) that appear in some models but not in others. Uncountable structures Let’s turn to the other example I introduced at the beginning: any two algebraically closed fields of the same characteristic with transcendence bases of the same cardinality are isomorphic. Every field has a transcendence basis, so a corollary of this is that any two uncountable algebraically closed fields of the same characteristic and cardinality are isomorphic. A sketch of the proof: Given algebraically closed fields Here’s another example: Any two vector spaces over the same vector space, with bases of the same cardinality, are isomorphic. Since every vector space has a basis, a corollary of this is that, over a countable field, any two uncountable vector spaces of the same cardinality are isomorphic. Citing Zorn’s lemma is overkill, since there’s only one way to extend a bijection between bases to an isomorphism. But the basic idea is the same in each case: We have an appropriate notion of basis, and we extend a bijection between bases to an isomorphism. And vector spaces are also first-order structures; the language has a binary operation The thing that unites both these cases is called strong minimality. A first-order structure is called minimal if every set defined by a first-order formula is either finite, or the complement of a finite set. More formally: Let’s go over the general notion of “basis” we’ll be using: Say that Now let’s look at type spaces in minimal structures. Let So for a strongly minimal structure, the structures satisfying the same sentences are classified by the cardinality of a basis. This isn’t quite the end of the story; in some cases, a structure with too small a basis would be finite, and we could thus distinguish it from the rest with a first-order sentence saying that there are And if the vector space is over a finite field, then its basis must be infinite. Another case where where the basis must be infinite is an infinite set. A set is a first-order structure in the language with no relations, no functions, and no constants. Every subset of a set is independent, so a basis for the set is just the entire set. In these cases where a basis must be infinite; there’s only one (up to isomorphism) countable model: the model with a countably infinite basis. You can check that both of these examples satisfy the finitely-many-types condition from the previous section for having a unique countable model. So the general story, for a strongly minimal structure In the previous section, we had a converse, so you may ask, if an uncountable structure is isomorphic to all structures of the same cardinality satisfying the same sentences, is it strongly minimal? This is not quite true. For example, consider a vector space Supply and Demand Until recently, I thought I understood the concept of supply and demand functions pretty well: for each possible price of a good, we look at how much of the good consumers would collectively purchase if it was offered at that price, and how much of it producers would sell if offered that price for it. Sounds simple enough. Problem is, the amount of something that a consumer will buy or that a producer will sell depends on more factors than just its price. So in order to determine how much of a good would be demanded or supplied in a counterfactual where its price changes, we also need to know about these other factors. You might want to let everything else stay the same, but this cannot be done. Changing the price of a good must come with some other changes as well. For instance, if the good has a substitute, then the price and quantity of the substitute cannot both stay the same when the price of the good So for supply and demand functions to be well-defined, we need a counterfactual model, which tells us what exactly is happening in the counterfactual where we set the price of a good and ask how much is demanded or supplied. Given multiple goods (for simplicity, two goods, Possible answers I’m going to make a couple simplifying assumptions for this discussion, not because they are true, but because they are convenient. First, producers and consumers are two entirely separate groups of people. Consumers don’t get the money they spend on consumption by producing something, and producers don’t spend the money they get for production on consuming anything else. Consumers just magically have money, and producers just hoard money so they can swim in it like Scrooge McDuck. Second, a system of equations has exactly one solution whenever I want it to. Wikipedia suggests that in the counterfactuals considered for demand functions, prices of substitutes and complements should stay the same. This gives an answer to the question of how to extract individual demand functions from a joint demand function. Let End of story? No; there are other answers that could be given to these questions. You may have noticed that the counterfactual models I suggested were not especially realistic, which could be considered a drawback, so let’s look at some other possibilities. Suppose the government plans to tax or subsidize a good, and we want to know what effect this will have on the quantity of the good, the price paid by consumers, and the price received by the producers. The usual story for how to figure this out from supply and demand functions Another possible counterfactual model is that a new agent enters the market and offers to buy or sell unlimited amounts of a good for a fixed price Cases in which the appropriate notion of supply and demand functions is unclear There are other situations that economists use supply and demand functions to describe, and each situation may need a different notion of what supply and demand functions mean, and the right way to define them for a given purpose might not always be obvious. For instance, suppose we want to predict the effects of price controls. This may only give meaning to the demand function on prices greater than or equal to the market price, and the supply function on prices less than or equal to the market price, since if supply and demand aren’t equal due to price controls, then whichever one is smaller will determine the quantity traded, so it isn’t clear that there’s an objective way to say what the larger one should be. Supply and demand functions are supposed to help us understand the effects of changes to supply or demand on price and quantity. The demand curve describes the set of (price, quantity) pairs that can be obtained by changes to how easy the good is to produce, and the supply curve describes the set of (price, quantity) pairs that can be obtained by changes to how desirable the good is. This is not completely well-specified, so what the supply and demand functions should be depends on which sorts of shocks should be considered to only affect demand for the good, or to only affect supply of the good. This isn’t as straightforward as it might sound, since, for instance, two supply shocks might have different effects on demand because of having different effects on the market for a complement or substitute for the good; the supply shock can’t have no effect on the market for the complement or substitute, because it affects the market for the original good, and the markets are linked, so some decision must be made about what assumptions to make about how a supply shock is supposed to affect the markets for other goods, which will depend on, for example, the extent to which the supply shock is due to producers leaving the market for a substitute or complement in order to produce the good in question instead, or vice-versa. The area of the region bounded by the supply and demand curves and the price axis is used as an estimate of how much wealth is created by the market for the good (neglecting externalities). It isn’t clear to me what way of defining supply and demand functions makes this the best metric, and it’s not going to be perfect in any case, as it measures wealth in units of money, and the value of a certain amount of money to someone can vary depending on circumstances (for instance, on how much money they already have). Another problem is that areas of smaller regions in a supply and demand graph are used as measures of deadweight loss caused by market distorsions, but the nature of the market distorsion in question would require us to use one particular meaning of supply and demand functions in order to correctly describe the effects of the distorsion; if this meaning is not one for which area in the supply and demand graph accurately represents value, then accurately representing deadweight loss as the area of a region in a graph isn’t possible. A possible explanation for why these subtleties are not emphasized is that they don’t make much difference in practice, as supply and demand functions tend to not depend too much on how exactly they’re defined. But we can construct examples in which they are quite different. Consider the supply function in the market for left shoes. Assume no one actually buys one shoe at a time, so no matter what the prices of left and right shoes are, the demand for each will be the same. Also assume that making left shoes and making right shoes is always equally easy for the producers, so they would shift entirely to producing just the more expensive chirality of shoe if the prices differed. That is, there is a demand function for pairs of shoes, The market prices for left shoes and for right shoes must be the same, since otherwise the producers would only make shoes of one chirality. Let If we are to assume that the prices of right shoes are constant, then If we assume that the government taxes or subsidizes left shoes such that the price received by the producer for left shoes is If we assume a new agent offers to buy or sell unlimited numbers of left shoes for Of these three purported supply functions, the first is radically different from the others, in that the latter two are continuous (if One could argue that this example doesn’t count, because a pair of perfect complements can just be considered a single good sold as a bundle (as left and right shoes typically are in practice). But complements that are not perfect complements also exist, and producers do sometimes have some ability to shift between markets, so approximate versions of this example should exist in real life. Joint supply and demand functions for infinitely many types of goods We can, of course, consider joint supply and demand functions of arbitrarily many goods, instead of just two. For So, where Green Bus [Inspired by this map] “Everyone up! Company meeting in thirty minutes! Breakfast available in the mess hall until then,” yelled Sergeant Tucker. Even in my groggy state, I realized instantly that something unusual was happening. Getting woken up after no where near enough sleep wasn’t unusual; in fact, it was becoming the norm. I suspected that the Californian Peoples’ State’s attacks had been carefully timed to fuck up our sleep as much as possible. They might not even have been seriously trying to capture territory, just wearing us down. But the lack of urgency in Sergeant Tucker’s voice was unprecedented, and he was giving us time to eat breakfast before doing anything else. There was no sound of gunfire, too. I dressed and followed the rest of the platoon out of the barracks and into the mess hall. It was bright out already, but was in the middle of my sleep anyway because of a grueling firefight we’d had in the middle of the night, in which we’d repelled an attack that had threatened to cut off our last remaining access to the canal. After a breakfast that was no more appetizing than what we’d been eating, though oddly generous in quantity compared to the carefully rationed meals we’d been getting, we were hurried out of the mess hall and into the briefing room. Captain Smith began the briefing, “We have surrendered to the Californian Peoples’ State.” Hearing that was a relief, to be honest. We should have surrendered months ago, but Colonel Fitch was such a hardass I’d thought it was never going to happen. Captain Smith continued, “We’ve reached a deal where, in exchange for surrendering the Sutter pocket, we will be given transportation to Reno instead of being taken prisoner. The first of the buses will be arriving in Sutter in about an hour. Bring your gun on the bus with you, and for God’s sake don’t ruin the deal by firing at any Californian soldiers. UN peacekeeping forces will be present to ensure the Peoples’ State doesn’t break the deal. The Californians will try to convince you to surrender individually to be taken prisoner instead of coming with the rest of us to Reno. Do not listen to them. Conditions in Reno are much better than here, so don’t think you’re better off in a prison camp than in Reno or anything foolish like that. If you have a family here in the Sutter pocket, they’ll be coming with you; you’re dismissed now, so you can go get them. Everyone else, pack up and reassemble here in twenty minutes. We’ll be marching to the bus stop together. Dismissed!” I was in shock. Why would the Peoples’ State offer us this deal? If they’d just kept up the attacks for another couple weeks, we would have collapsed, and I would’ve thought the Californians would’ve caught onto this. Maybe the Californians were afraid that Free States of America forces in the Tahoe area would break the siege on the Sutter pocket soon? But at the rate the war’s been going, that didn’t seem likely, and besides, if Colonel Fitch suspected the same thing, there’d be no way he would have accepted the deal. Maybe the Californians were just bending over backwards to avoid incurring a few casualties (some of which would inevitably be civilians) in the process of taking the pocket by force. Or maybe it was a trick and we were all going to get taken prisoner anyway. The promise of UN peacekeepers made that last possibility seem somewhat unlikely; we all figured the UN was a bit biased in favor of the Peoples’ State, but it was unlikely they’d let them get away with breaking an evacuation agreement. Like most of us, I didn’t have family with me in the Sutter pocket (or family at all, for that matter). I packed my things and regrouped with the others, and we marched into town. Another group of soldiers were already waiting at the bus stop when we arrived, and more groups joined us shortly thereafter. Soon, six green buses pulled up and stopped in front of us. It looked like the buses collectively just about fit all of us gathered in front of them, which was a small fraction of the total number of Free States of America soldiers in the pocket, even accounting for the fact that those with families in the pocket weren’t present. Probably others were getting picked up in the small portion of Yuba City we still controlled instead of consolidating us all in Sutter first. Some Californian soldiers and UN peacekeepers got out of the buses. There weren’t very many UN peacekeepers; not enough to make much of a difference in a fight if anyone broke the agreement, anyway. But their presence was still useful, since no one wanted to piss off the UN. A Californian officer held up a megaphone, and spoke, “To encourage you all to give up your arms instead of going to Reno, we’re sweetening the deal. If you stay, instead of becoming a prisoner of war, you will gain the rights of California citizens, able to live and work freely in California, and exempt from the draft, with the only additional restrictions being that, until the war ends, you will not be able to own weapons, and someone’ll check in on you occasionally to make sure you’re not up to anything fishy. If you’d like to go to the war zone in Reno anyway, you may now board the buses. If you’d like to stay, you can just walk right past the buses and hand over your weapons to any Californian officer on the other side of the buses.” The bus doors opened, and we were ordered onto the bus, officers and senior NCOs standing to the side and glaring at us lest anyone think of not boarding. For the most part, this seemed to work. Everyone near the front of the lines boarded the buses, though once someone walked past the buses, a few more people down the line followed them. I got on the bus. Looking out the window, I saw that the proportion of people choosing to stay was increasing towards the end of the line, but still most people got on the buses. After the rank and file boarded, the officers and NCOs followed. Most of them anyway; many stayed behind, not to give up their arms, but because they had civilian family in the Sutter pocket and would take a later bus with them (a higher fraction of officers and NCOs than rank-and-file soldiers were married, so in particular, more of them had family in the pocket, though still most didn’t). Though I did see one sergeant in another company approach a bus, hesitate, and then run past it, to a visibly negative reaction from his company’s officers. The bus’s engine started and the doors closed. “All right, let’s go,” said the bus driver, “We’ll be taking a slightly roundabout route so that we can stay within California-controlled territory until we reach the front line in Reno, but we will get you there in a few hours.” That sounded slightly suspicious, but the officers didn’t seem worried, and there were a couple UN peacekeepers on the bus, so I was pretty sure we weren’t getting kidnapped. The bus pulled away, and we were on our way. It was a slow, very bumpy ride across the Sutter pocket and through Yuba City; the war had not been kind to highway 20. But once we left Yuba City and turned South, the ride was pretty smooth. Many of the other soldiers fell asleep. I wished I could do that, but I’ve never been able to sleep on the road, apparently not even in my current exhausted Just past Placerville, the bus pulled over and stopped. “We’re gonna have to stop for about 45 minutes to recharge the bus,” announced the bus driver, “Meanwhile, you can get out, stretch your legs, and have some lunch.” Most people had woken up, and I could tell by looking around at everyone’s faces that I wasn’t the only one to be surprised by that announcement. It wasn’t surprising that these silly electric buses would have to regularly stop to recharge for an extended period of time, but no one had mentioned that they’d feed us, and I wouldn’t’ve expected the Californians to give enemy soldiers free food if the deal didn’t require them to. I don’t think even the officers on board had seen this coming. The doors opened, and the food aroma was overpowering. Whatever it was, it smelled delicious. I was ravenous despite having had a larger than usual breakfast about three hours previously, and I think everyone else was too. Some soldiers at the front didn’t feel the need to wait for the officers to confirm to us that it was okay to leave the bus, and once they got up, the rest of us followed. We were parked near what appeared to be an impromptu outdoor kitchen staffed by beautiful young women. We approached them, and were each handed generous servings of food, and immediately started scarfing them down. I stood in silence while I ate, next to Jones and Johnson, who were making small chat while they ate, though I wasn’t listening to what they were saying. When I was about three-quarters of my way through the meal and starting to slow down noticeably, a lady with a dazzling smile approached the three of us. “Hey, I’m Trisha,” she introduced herself, extending her hand. We each shook her hand and introduced ourselves. “How y’all doing?” she asked. There was a bit of a pause as we all processed how to answer that. I came up with an answer first. “Relieved, but also exhausted,” I said. “Yup, that,” echoed Johnson, and Jones nodded. “We’ve got some cots nearby if you want to lie down for a bit,” she said. “Sure, that’d be great,” I said. She glanced at Jones and Johnson, but they both declined, and Jones mumbled something about not wanting to miss the bus. Trisha gestured for me to follow her, and I did. “Do you want to be woken up before the bus leaves?” she asked. I thought about it for a while without answering. “If I say no, is it the same deal we were offered back in Sutter?” I asked. “Mhm,” she said. “What about all my stuff that’s still on the bus?” “We’ll get it for you.” I didn’t say anything after that, and she didn’t press for a real answer until we reached the door of a building that I gathered was where the promised cots were, and she gave me a questioning “I’m not getting back on that bus,” I said. Trisha smiled, said “welcome to the Californian Peoples’ State,” and left. I entered the building, fell onto a cot, and fell asleep almost instantly. I awoke, feeling only somewhat refreshed, but desperately needing to pee. There were several other Free States soldiers on other cots now. I got up, I found a bathroom, and relieved myself. I saw there were showers, with a sign saying “10 MINS MAX” by them. I used to take showers twice that long all the time, but now being able to take a shower for 10 whole minutes sounded like unbelievable luxury. I wasn’t sure whether the showers were for me or not, but I decided to just go for it instead of trying to find someone to ask. A clock started counting down from 10 minutes when I turned the water on. I used up almost the whole 10 minutes, and when I got dressed again, it struck me how much my uniform stank. I’d already known we’d been filthy, but I guess I’d adjusted to it and it was only apparent again now that it contrasted with my clean body. I left the building and immediately ran into a Californian official who asked my name, told me to fill out some paperwork (which fortunately wasn’t too long), took a picture of me, and printed out and handed me an ID hard. “Your duffle bag’s right over there,” he said, pointing to my belongings (sans gun) in a pile of luggage, “You can take it now or leave it and come back for it whenever. You can stay in this building again tonight. Tomorrow morning, some of those buses’ll be heading back to Yuba City, and others’ll be going to Sacramento and the bay. There will be job and housing fairs in all those locations, and also one in Placerville in case you decide to just stay here. We’ve got some pamphlets here summarizing what the available options will be in case that helps you decide where you want to go. And if you want to go somewhere else in the Peoples’ State, let me know, and it is likely we will be able to help you out. Any questions?” “Not right now. Thanks,” I said. I took a pamphlet, folded it up and put it in my pocket without reading it, and walked back to the bus charging station. The outdoor kitchen was still in operation, or perhaps in operation again. But the people there were different. Most of them were older than the women who had been there when I’d first arrived, and there were also some children present. Their genders were much more balanced, though still majority female. There were also some recently former Free States soldiers like me hanging around. A woman waved at me as I approached, and it took me a second to realize it was Trisha; apparently not all the women who had been here earlier were gone. She was dressed much more conservatively than she had been earlier. “This is how you dress when you’re not trying to manipulate enemy soldiers?” I guessed. “Um, not really. There’s actually another bus coming soon with the families from your company,” she said. “Oh, so you’re dressing to manipulate a different demographic of enemy soldier.” “Yes, exactly.” “Hm, I didn’t exactly sign up to stand here so my fellow soldiers on their way to Reno could stare at me on their way by and judge me for abandoning them.” “Well, you better scram quick, then. The bus’ll be here any moment now. Some of your comrades who joined us are hanging out over that hill,” she said, pointing. I thought about it. “Actually, you know what, my buddy Kyle and his wife Ashley would probably be on that bus, and as awful as telling them I defected to their faces sounds, letting them find out later in Reno without the chance to say goodbye sounds worse. I’ll stick around.” “I bet you don’t eat like this every day,” I said, gesturing at the kitchen. “Not quite,” she said, “Though there haven’t really been food shortages, so we’ve been eating pretty well. The main limitation is that anything that takes a lot of water to grow is a bit expensive, since we can’t grow it in California, and the war hasn’t been great for trade. If you’re wondering about shortages causing problems, the main thing is water. There’s enough to drink, of course, but the water rationing is tight enough that I usually don’t get to shower and wash my clothes as much as I’d like. We got a water bonus for participating in these greeting parties, so everyone around here is a bit cleaner than usual. Though we always get to shower and do laundry more than it looks like you guys have been, no offense, so I’m not sure if we really needed to bother cleaning up more than usual. Anyway, I usually shower for about 5 minutes every 3 days and use up my water ration.” “So that 9-minute show I just took…?” I asked. “Was part of your defection bonus. You didn’t just use up all your water for the week; don’t worry. I’m just warning you about what things will be like once you settle in. Though not for too long, hopefully. We expect the water crisis to end this year, so it’s really only a short term problem.” If they expected the water crisis to end this year, that meant that they expected to capture sources of water from Free States of America. Holding sources of water and denying them to California had been a deliberate strategy by the Free States of America to try to weaken California, which hadn’t yet been terribly effective. We were all starting to figure California would recapture most of those water sources from us instead of collapsing, but if the Californians thought that was going to happen this year, then they were feeling even more optimistic than we thought they should. A green bus pulled up. Trisha excused herself and ran off to take care of something. I watched people file out of the bus, looking for Kyle and Ashley. Instead, I saw Sergeant Tucker, Lieutenant Dan, their wives, and Sergeant Tucker’s five-year-old daughter get off the bus and walk vaguely in my direction. I looked away and pretended I didn’t see them. They kept walking closer. “Private Carlson!” said Lieutenant Dan, “What the hell are y’all doing still in Placerville?” Dammit! “Goddammit, Carlson! You defected, didn’t you?!” said Sergeant Tucker, with a stern look. I nodded sheepishly. This was even more uncomfortable than telling that to Kyle and Ashley would have been. Fortunately, the tension was interrupted by a Californian boy, maybe about eleven or so, carrying boxes of food and handing them out to us. I took one even though I wasn’t sure if it was only intended for the newcomers. The kid stuck around and introduced himself as Ben, and we took turns introducing ourselves to him in between mouthfuls of food. “They’re making you do this?” Linda Tucker asked Ben. “No, my teacher told us about it as a volunteer activity. No one had to be here,” said Ben. “Wait, the schools are still running here?” asked Mrs. Tucker. “Of course. Summer break doesn’t start until May,” said Ben. Sergeant and Mrs. Tucker exchanged glances. “Have things changed around here in the last few years, you know, with the war and all?” asked Mrs. Tucker. “Yeah, my Dad’s away on the front line near Redding. I haven’t seen him in almost a year.” Ben looked sad. “That’s it?” asked Mrs. Tucker. “Uh, I guess so. Mom and my siblings and I are doing fine,” said Ben. Sergeant and Mrs. Tucker exchanged some more glances. No one said anything, but it looked like they were having a whole private conversation with their eyes. Sergeant Tucker looked away and made some awkward eye contact with Lieutenant Dan just as Mrs. Tucker said, “We’re not going to Reno.” Lieutenant Dan looked exasperated. “Linda, you goddamn hippy! You know everything’s gonna be fine when we get back to the Free States, right?” “We’re not going to Reno,” Mrs. Tucker repeated. Sergeant Tucker nodded. Lieutenant Dan rolled his eyes and let out a disgusted grunt. While this was going on, I overheard a conversation behind me between a Californian and a Free States soldier’s wife who had apparently overheard Mrs. Tucker’s exchange with Ben, and asked, “Is school mandatory? I wouldn’t want to send my child to a public school.” “No Ma’am, lots of people homeschool their kids,” said the Californian. “But people who homeschool their kids still have to pay taxes for other people to go to public schools, right?” “Well yes, they do pay taxes, Ma’am. Although, actually a fairly small amount of that has been going to schools lately.” The lady seemed reassured by this somehow, even though it was really just a diplomatic way of saying that funds had been diverted from schools to the war effort. I wasn’t a big fan of taxpayer-funded public education myself, but I had trouble imagining why anyone would think that was any better. I heard Ashley’s voice, “Hey, there’s Cole!” I turned my head and saw Kyle and Ashley running towards me. I left the gaggle I was in, ran towards them, and hugged both. “So, you defected?” asked Kyle, sounding surprisingly not that disappointed. “We were thinking of doing the same, honestly,” said Ashley, “Just don’t tell the brass over there.” “Actually the Tuckers are also defecting,” I said. “You’re shitting me!” said Kyle. “No, they’re really doing it.” “I guess that would explain the tension you can see between the Tuckers and the Dans right now,” said Ashley. Word about the Tuckers defecting went around pretty fast, and seemed to start a sort of domino effect. When the bus continued on its way to Reno, there were maybe a handful of people still on it. Not including the Dans, amazingly enough. Soon after, another bus pulled up from the other direction. Private Jones and a few other soldiers disembarked. I saw Jones look through the crowd until he found Private Johnson, who was turned away and hadn’t seen. Jones ran towards him and called out to him. They hugged each other. Kyle, Ashley, and I wandered over to see what was going on. “I thought you were going to Reno,” said Johnson. “The bus pulled over in South Lake Tahoe next to another green bus, and they told us that since they were both only half full, they’d be consolidating into just one bus for the rest of the trip. After I got out of the bus to board the other one, I asked if it was too late to change my mind, and ended up getting back on the same bus, turning around, and heading back here,” said Jones. “Wait, I thought they said the bus would be staying in California-controlled territory all the way until Reno. Don’t we control South Lake Tahoe?” asked Johnson. “We” might not have been the best pronoun to refer to the Free States by, now that we’d all defected to California, but no one pointed this out. “Not anymore, we don’t,” said Jones, “There were Californian soldiers all over the place. No Free States soldiers or signs of fighting to be seen.” “Jesus. I wonder why they went that far before consolidating into fewer buses. They could have done that here in Placerville,” Johnson pointed out. “I don’t know, but my guess is they just wanted to flex on us by parking us in front of a California garrison in South Lake Tahoe,” said Jones, “One other thing. Get this: Just before we pulled in here, we passed another station just like this one. I saw a green bus pulling away from it in the other direction towards Reno. Then I looked at the people still milling around there on the ground, and there was Mrs. Fitch with her kids.” “The Fitches defected?!” Johnson asked incredulously. Jones shook his head, “Not Colonel Fitch. I only saw Mrs. Fitch and their kids.” “Yeah, but you might have just not seen him. I mean, if his wife and kids were there,” said Johnson. “Yeah, sure. I can’t prove beyond all doubt that Colonel Fitch was on the bus. But if you seriously think that crazy son-of-a-bitch would stay with his wife and kids instead the Free States of America, then I’ve got a bridge to sell you.” Exact 2-cycles are degenerate isomorphisms The situation in which you have vector spaces I’ll call such a pair Given a finite-dimensional vector space Homogeneous polynomials and multilinear forms Given a vector space Given a symmetric bilinear form This doesn’t quite work if Similar things happen with higher degree homogeneous polynomials and symmetric multilinear forms. Let Newtonian spacetime In special relativity, we work with a 4-dimensional (3 for space and 1 for time) real vector space In Newtonian physics, things are a little different. We can still work in 4-dimensional spacetime, but we don’t have a single Minkowski inner product measuring both distance and duration. We do have a global notion of time; that is, there’s a linear map The time function The ordinary inner product on In the spacetime example, it is conventional in special relativity to normalize the speed of light to Let’s go back to the first example. Given Now, in the second example, where The general story All three arguments from the previous section took the following form: Let In the spacetime example, All exact 2-cycles of vector spaces can be fit into this general story. Given any exact 2-cycle What more? What about exact 2-cycles in abelian categories other than vector spaces? In general, the two objects in an exact 2-cycle need not be isomorphic. For instance, with abelian groups, there’s an exact 2-cycle between the 4-element cyclic group and the Klein four-group. Though two objects in an exact 2-cycle must be isomorphic in any category in which every short exact sequence splits (this is the gist of the dimension-counting argument from the beginning showing that two vector spaces in an exact 2-cycle must be isomorphic). Is there still some way of seeing exact 2-cycles as degenerate isomorphisms even in contexts in which there need not be actual isomorphisms? Also, what about exact It’s a big world out there Kimi and Jerilyn’s mother continued their bedtime story, “And then Adam the bat’s friend, Jane the fish, showed up and stuck her head out of the water. Adam hastily finished chewing the beetle he was eating before exclaiming, ‘Jane! Where’ve you been? We haven’t seen each other in ages.’ ‘I’ve been exploring a marvelous new worlds. I wish I could show you,’ said Jane the fish. ‘Who says you can’t? I can’t swim, but you could carry me,’ said Adam the bat. ‘I suppose so. You’ll have to hold your breath, though,’ said Jane. So Adam took in a big breath of air, and held on to Jane’s back. ‘Hold on tight!’ said Jane, and plunged below the surface. She swam, and swam, and swam, for what seemed like eternity, especially to the air-breather on her back trying to hold his breath. Finally, just as Adam was about to run out of breath, Jane the fish surfaced again, and Adam the bat took in a big breath of fresh air. There wasn’t a rock ceiling above their heads, just empty space all the way up. And far, far, far above the ground there was an enormous intense light, so bright that neither bat nor fish could look directly into it without hurting their eyes, and it illuminated their surroundings so much that they could see far around them from light alone, without echolocation. Adam shook his wings dry, took off, flew up, and kept climbing higher into the air, with no rock ceiling above him to limit his ascent. Eventually he got spooked by how far he’d gotten from any solid object, so he started flying back down, and returned to Jane in the water. That’s all for now, dears. Sleep tight.” Their mother kissed each of them on the forehead. “Mom,” asked Kimi, “could there actually be a light that bright?” “I don’t know,” she answered, “but according to ancient myth, there is such a thing. Or was, at least. I suppose there’s no way of knowing whether it’s still around. It’s in a far away world with no rock ceiling too, so goes the myth. Sweet dreams.” Their mother left. “No rock ceiling,” Kimi whispered, “That’s even wilder than the thing about the light. Like, would it just be air all the way up forever? Surely there’d have to be an end somewhere, right?” “Maybe there is a rock ceiling there, but it’s so high up that you can’t hear the echo,” Jerilyn suggested. “Wow, that would be so disorienting, not being able to hear the echo off the rock ceiling,” said Kimi. “Given what Mom said about the light, maybe you could see the rock ceiling even if you couldn’t echolocate it,” said Jerilyn. “But I guess if people thought there wasn’t a rock ceiling at all, it must be high enough that you can’t see it either,” said Kimi. “I guess so,” Jerilyn agreed. “Jerilyn,” said Kimi. “Do you think it’s real?” “No,” said Jerilyn. “Are you sure?” Jerilyn hesitated. “No,” she said. She really had no way of knowing for sure, however outlandish it may sound. “I’m not actually feeling all that tired. Are you?” “Eh, somewhat, but not especially.” “Let’s go find the place Mom was talking about.” Jerilyn thought about it. On the one hand, the mythical place probably didn’t exist, and even if it did, there was no way they were going to find it. On the other hand, an adventure might be fun. “Let’s do it,” she said. They snuck off and made their way to their canoe. They avoided making sounds so as not to advertise their presence, so they had to rely on touch to find their way, but they knew the route well enough that that wasn’t a huge impediment. They set off, and as they knew the waters immediately surrounding the dock by heart, they were able to navigate away from the island silently, but once they were a ways out, Kimi started making clicking noises with her tongue so they could echolocate their surroundings. They aimed straight for the closest point where the rock ceiling met the water. They couldn’t echolocate that far, of course, but Jerilyn remembered the way from her navigation lessons. Once they got too far from the archipelago, they had to rely on trying to keep going in a straight line, but soon after, they encountered the wall. “What now?” asked Kimi. “I suppose we look around for a tunnel,” said Jerilyn. They turned right and followed where the rock ceiling met the water, keeping it on their left, their casual conversation sufficing to provide enough noise for them to track their surroundings. They never found a tunnel. Eventually they got tired, pointed their canoe back in the direction they came from, and set off for home. When they first encountered an island, they weren’t sure which one it was, and they went all the way around it in a circle so they could estimate its size and shape. It seemed unfamiliar, but Jerilyn thought back to her navigation lessons, and by the time they had completed their circle around the island, she came up with a guess as to which island it was. If she was right, they were significantly off course. She turned the canoe in the direction she thought home was, and when they passed the next island, she gained confidence that she was right, and indeed, their new path took them straight home, where they docked the canoe, dried themselves off, went straight to bed, and each fell asleep instantly. Kimi and Jerilyn made several more expeditions to find tunnels to new worlds, taking off in different directions each time. On their fourth trip, they found an indentation well into the rock, which tapered out into a vein of air sticking just above the surface of the water. They got as far in as they could, until the rock ceiling got too low for them to stay under while they were sitting in the canoe. They stashed their paddles in the canoe, and carefully got out and swam farther in while towing the canoe. They soon reached a point where the canoe couldn’t go any farther even without them inside. They found a part of the rock ceiling that jutted down below the rest, and they pulled the end of their canoe downwards, pushed it under the jutting rock, and released it, so that the jutting rock extended into the canoe and would keep it from floating away. They swam in further. But soon even the indentation they found sunk below the surface of the water. They each took a big breath of air, and kept swimming farther out underwater. They hadn’t brought sonar rods or a light, and couldn’t snap underwater, so they had no way of echolocating underwater, and had to rely on touching the rock ceiling above them to tell where it was. They didn’t get very far before Jerilyn decided that that wasn’t a great idea. She turned back, and pushed Kimi to turn back as well. Even with Jerilyn’s caution, they were both somewhat short of breath by the time they could get their noses back into the air. On their next trip, they brought a pair of sonar rods, and aimed for the same indentation they had found on their previous trip. When they arrived at where the rock ceiling met the water, they were in unfamiliar territory. On their previous trip, they had been keeping the line where the rock ceiling met the water to their left as they’d followed it until finding the indentation, and this time, they’d tried going a bit to the right of the course they’d taken on the previous trip in an attempt to go more directly to the indentation, so they figured that they’d overcorrected, and turned left. They soon found the indentation again. Again, they went as far as they could while keeping their heads above water, Kimi carrying the sonar rods. Then they dove down into the water, much deeper than necessary just to stay below the rock, so that they would be able to echolocate as far as possible without the nearby part of the rock getting in the way, and Kimi rang the sonar rods. The rock ceiling’s descent flattened out not long after the last of it passed below the surface of the water, and there was a small air pocket just a bit after the rock ceiling flattened out. About twice as far in as the air pocket, the rock ceiling started to pitch up sharply. They swam up back towards the surface for air, Kimi ringing the sonar rods between armstrokes so they could keep track of where the air was instead of bumping into the rock ceiling. “Let’s check out that air pocket,” Kimi suggested, after they surfaced. “Not a good idea,” said Jerilyn. “We can totally make it there,” said Kimi. “Air pockets sometimes have bad air in them. We could get there, of course, but I’m not so sure we could make it back after coughing out nasty air,” Jerilyn explained. Kimi reluctantly agreed not to explore the air pocket, and they turned back. On their next trip, they brought buckets. They figured if they weren’t sure they’d have enough air in their lungs for the trip to the air pocket and back, they could bring some more air outside their When they’d gotten as far as they could while keeping their heads above water, they quickly discovered that it was just about impossible to swim underwater while carrying a bucket full of air. After a long while trying, they figured out how to get themselves positioned upside-down in the water with their feet against the rock ceiling while holding a bucket full of air pulling them up against the rock ceiling, so they could walk along it. Both of them still had trouble carrying a bucket and a pair of sonar rods underwater at the same time, so they’d put their sonar rods back in the canoe. But they were able to make enough sound to echolocate their immediate surroundings by hitting the sides of their buckets. They both needed a breath by the time they got to the air pocket, as walking upside-down underwater was much slower than swimming. So they found flat portions of the rock ceiling to put their buckets down on, then turned around, exhaled, stuck their heads in their respective buckets, and took a breath. Then they exited the buckets, and Kimi approached the air pocket. She stuck her hand in, and made contact with the rock almost instantly; it was, evidently, a very shallow pocket. She stuck her nose in, being careful not to rise high enough to hit the rock ceiling, and, heeding Jerilyn’s warning, cautiously took a small breath of air. It was rancid. She coughed it up and recoiled out of the pocket, then scrambled for her bucket while fighting the urge to inhale. She finally got her head in the bucket, took deep breaths and kept coughing, while Jerilyn held her up so she could focus on regaining her breath instead of swimming. By the time Kimi got her breathing under control, the air in the bucket was quite stale and she was short of breath again, so she left the bucket for her big sister to deal with while she swam back to fresh air. Jerilyn took another breath from her own bucket, dumped the remaining air out of the buckets, and swam back while carrying them, which took her a lot longer than it took Kimi because of the drag caused by the buckets. “You were right. That was nasty,” Kimi commented, once Jerilyn surfaced. They decided to make another trip underwater to try to explore past the ridge where the rock ceiling pitched back up again. They retrieved their sonar rods and tied them to Kimi’s wrist to make them easier to carry at the same time as the buckets, and set off in the same direction as before. They set their buckets down near the air pocket, each took a breath, and then swam out to the ridge. Another ring of the sonar rods revealed that the rock ceiling pitched straight up into a vertical cliff, and that there was a wide expanse of air about thirty feet above them. They retreated to their buckets, each took a breath from them, and then dumped the remaining air out and swam back with their buckets. “I don’t understand how the water went so high up. The surface is definitely much higher on the other side than it is here,” said Jerilyn, after they surfaced again. “Yeah, weird, isn’t it? Also, how are we supposed to get there? We canoed around the edge for miles in each direction and didn’t find any tunnels or places where it bends around or anything that could lead to that place,” said Kimi. “There probably isn’t any route there going over the surface. If there was, it would be even harder to understand why the water level is different there than here,” said Jerilyn. “A completely separate world! Do you think it’s the place Mom told us about?” “I don’t know.” They left for home, and on their next trip, they brought four buckets, with the intention of going all the way to the surface on the other side of the rock. Then they repeated the previous expedition’s trick of walking upside-down underwater with buckets of air, this time each carrying a bucket in each hand, which was even harder to get into position for, but eventually they figured it out. This made it not only difficult to ring the sonar rods, but also difficult to hit the buckets, and they resorted to periodically letting their buckets hit the rock ceiling to make enough noise to navigate. They stopped briefly near the air pocket to turn rightside-up and take a breath from their buckets, and then turned back upside-down and kept going, buckets still in hand, all the way until the point where the ridge pitched back up again. They set down the buckets in stable locations, turned rightside-up, exhaled, took deep breaths from their full buckets, and swam up towards the surface, leaving four half-filled buckets on the underside of the ridge behind them, Kimi periodically ringing the sonar rods on their way up so they wouldn’t collide with the rock. They surfaced and each began to take a deep breath, then stopped in shock, and cautiously started to breath again. Something was off about the air. It smelled… not stale, exactly, but strange, not like any air they’d ever smelled before. It smelled overly fresh, in a way, as if all the air they’d beathed until that point had been a bit stale, and they hadn’t noticed. Jerilyn raised a hand out of the water, shook some water off of it, and snapped. For the briefest of instants, they both thought that perhaps there wasn’t a rock ceiling above them at all. But then they heard the echo, and realized that there was a rock ceiling above them at perhaps three times the height that they were accustomed to at home. And they couldn’t see any bright lights in the sky, or anything at all for that matter, so they couldn’t be in the place Mom had described in the myth. Aside from the rock on one side of them and bending into a ceiling far above them, there was nothing around them, just water for as far as they could hear. “We gotta get the canoe in here so we can explore this place,” said Kimi. “How in the world are we going to do that?” asked Jerilyn, realizing as she spoke that perhaps it should have been “how out of the world” rather than how in it. “I don’t know,” said Kimi. They swam around a bit, but didn’t find anything interesting, and decided to go home. They dove down under the ridge, retrieved their buckets and inhaled from them, surfaced on the other side, got in their canoe, and headed home. Later, they did some experimenting at home, and discovered that their canoe was almost exactly the same density as water. Armed with this fortuitus fact, several buckets, and a lot of rope, they set off again for the other world. A test run revealed that their rope wasn’t quite long enough to stretch from where they could park their canoe to the air on the other side of the rock. Finding this out resulted in Jerilyn dropping the rope on her way up after crossing the ridge so she could surface and breath, and then returning to their canoe, and they reeled the rope back in. They set up three buckets of fresh air on the underside of the ridge, and one by the air pocket. Then Jerilyn took the sonar rods and swam out to the ridge and treaded water with her head in a bucket while Kimi filled the canoe with water, and pushed it underwater and forward, Jerilyn ringing the sonar rods in the water to help Kimi tell what she was doing as she swam under, and periodically ducking down into the water to keep herself updated on Kimi’s progress. Kimi wasn’t getting very good resolution from the sonar rods, but it helped that she remembered the path. Kimi, pushing the canoe ahead of her, reached the bucket by the air pocket and took a breath in it. Jerilyn took one last big breath from a fresh bucket and took off for the surface as Kimi continued forward pushing the canoe. When Jerilyn surfaced, she was able to help by reeling in the canoe, holding onto the rock cliff for leverage. Kimi went ahead of the canoe so she wouldn’t run out of air, and together they finished reeling in the canoe to the surface. With some difficulty, they emptied the water out of the canoe, righted it, and got back in. Righting the canoe had been a lot of work, and they took a quick break to catch their breath. Then they set off in their canoe, keeping their old world to their left. They heard sounds of civilization coming to them before they echolocated the island from their own snapping. They turned towards it and approached. They were noticed, and it seemed that they had caused a fair amount of consternation. They got close, and a man was standing on the end of a peninsula near them holding a long, straight stick, facing them and snapping repeatedly. There were also boulders sticking above the water a ways to either side of them. “Hello,” said Kimi, “Who are you? I’m Kimi.” The man did not respond, but he did stop snapping and started clicking his tongue. The tongue-clicking wasn’t giving them good resolution on him, but they could tell he was moving in some way. Jerilyn snapped, revealing that the man had both hands on the stick, which was pointed at them, and he was leaning back as if about to throw it. Jerilyn dug her paddle into the water and swung them around, just as the man threw the stick. It narrowly missed Kimi. “Hey, what was that for?” Kimi shouted. “Kimi, paddle forward hard!” said Jerilyn, as she began to do so herself. They heard splashing sounds to either side of them, followed by the sounds of people swimming towards them. The man on the shore began clicking his tongue again, and seemed to be preparing for another throw. Jerilyn swung the canoe around again, and the stick just missed her. She resumed paddling forward, and the man on the shore dove into the water. Someone grabbed the back of the canoe near Jerilyn and pulled himself up towards her. She moved her paddle between them just in time to block a thrust of a stick towards her. He grabbed her paddle with the hand that hand been on the canoe. Kimi lunged at him and hit him in the neck with her paddle with a surprising amount of force for someone her size. He dropped Jerilyn’s paddle as well as his own stick and fell back into the water. The recoil from Kimi’s lunge caused their canoe to collide with someone else as he pulled up towards the position Kimi had just left. Jerilyn hit him over the head with the edge of her paddle, and he too lost his grip on the boat. Jerilyn pushed him away from the boat with her paddle while he was too disoriented to grab it, and then Kimi and Jerilyn returned to their former positions and kept paddling hard. No one caught up to them, and they relaxed a bit once their pursuers had given up. It took a while before they encountered the next sign of civilization. They approached much more cautiously this time, coming to rest at shouting distance. A small gaggle of people were gathered at the shore closest to them. “Hello!” one of them shouted. They sounded funny. “Hello!” Kimi shouted back. “What is that thing?” asked the person on the shore. Their words were tricky to understand. “What thing?” asked Kimi. “The thing you’re sitting on floating in the water,” the stranger clarified. “The canoe?” asked Kimi. “The what?” “This is called a ‘canoe’,” said Kimi, slapping the side of the canoe. “Okay, so, what is it?” “You use it to cross the water,” said Kimi. She wasn’t sure what else to say about the concept of canoes. “What are you doing?” asked the stranger, giving up on getting more information about the canoes. “We’re exploring,” said Kimi. “The last people we encountered weren’t very nice,” she added. “Uh, were they the <unrecognizable word>?” “The what?” asked Kimi. “Did you encounter them over there?” asked the stranger, gesturing in the direction Kimi and Jerilyn had just come from, and snapping to give them good enough resolution to tell where he was “Yes,” said Kimi. “What the <unrecognizable> were you doing over there?” “Uh, we didn’t know not to go there.” “Uh, well now you know. Good thing you survived. Where are you from?” asked the stranger. “Elsewhere,” said Kimi, knowing the name of their island wouldn’t mean anything to them. “Uh-huh. Hey, do you guys need any supplies, like food or anything? We’d be happy to help out if you show us how the canoe works,” said the stranger. “That would be gr-” Kimi started. “Kimi, no,” Jerilyn interrupted, “They want to steal our canoe.” They were both getting hungry, but they’d have trouble getting back home without their canoe. It wasn’t worth the risk. They kept going. They were not pursued. It was a long time before they found land again. When they did find land, it wasn’t an island separated from the old world by water like the others had been, but instead, the rock wall separating them from the old world flattened out to become navigable by foot. They were ravenous, having serious regrets about having ventured so far without food, and on the verge of turning back. So they were quite gratified when they smelled vents. They pulled their canoe onto the shore, located the vents, and gorged themselves on ventmoss. Their hunger sated, they noticed they were getting quite tired, and they went to sleep. When they awoke, they decided to explore the new land they’d found. They walked inland for quite some time without finding another shore; they’d never imagined a land so vast before. Eventually they became tired again, gave up on finding water on the other side, and turned back. They lost track of the exact route they had taken, and when they reached the shore again, it wasn’t familiar territory. A gust of wind carried a faint smell of vents towards them, and, guessing that it was from the same vents they had found earlier, they followed the shore in the direction the wind had come from. This guess turned out to be correct, and they found their canoe right where they’d left it. They ate some more ventmoss, drank from the water, and rested for a while. Then they decided to venture uphill, in the direction of the old world; perhaps they would be able to walk on top of the rock ceiling of the old world. The ground gradually steepened, and they kept going long past the point where they had to crawl on all fours, and each step brought them more up than forward. At times, they had to rely on their voices for echolocating footholds when their hands were occupied clinging to the rock and they couldn’t snap. When they turned back, it was due to some combination of the steepness spooking them, and them getting quite tired. They downclimbed facing backwards until the ground had flattened out enough that they could walk upright without falling over, and then they walked their way back to their canoe and the vents, had another meal, and went to When they woke up again, they decided to return home. They followed the route they had taken last time, but steered clear of any signs of civilization. When they neared the place where they’d met the people who’d attacked them, they stayed very close to the rock wall separating them from the old world, paddled slowly, and instead of snapping, frequently gently tapped the rock next to them for guidance, in hopes of minimizing noise and not advertising their presence. When they reached approximately the place where they had first surfaced into the new world, they had some trouble figuring out exactly the right place. Kimi periodically dove into the water with the sonar rods, and in most places, it was easy to tell that they couldn’t be in the right place because the rock extended down too far vertically into the water. But eventually they found a point below them where the rock didn’t extend as far down, and theorized that that might be their route home. Jerilyn dove below the edge of the rock, rang the sonar rods, and sure enough, there were their buckets of air on the underside of the ridge. She went back to the surface, and they tied their rope to their canoe, filled the canoe with water, and pushed it under while Jerilyn held the rope. They surfaced again after pushing the canoe down a ways underwater, took deep breaths, dove all the way under the ridge until they got their heads in air buckets, and pulled the canoe further down by reeling in the rope until the canoe was below the ridge. Then they dumped the air out of their buckets and carried buckets and rope back to the other side, with a quick stop at the bucket they’d placed midway to take breaths and pack up that bucket too. They were desperate for air by the time they finally surfaced on the other side. After they finished panting for breath, they reeled in their canoe, and laboriously emptied the water out of it, righted it, and went home. Their parents were delighted to see them, cross at the prolonged absence, and skeptical of their tales of the new worlds they’d discovered. Some time later, Kimi and Jerilyn decided to make another expedition to the new world and try to climb further up the steep cliffs they’d found. Realizing that it would take a long time, and they’d want water and food other than ventmoss, they packed some dried fish and plenty of buckets, and fashioned some seals for their buckets so that water could be stored in them without spilling when jostled around. They set out along the same path as in their previous expeditions, although it took them some time to find again the indentation in the rock where they’d crossed over into the new world. Once they did, they repeated their usual procedure to get to the other side, after tying their extra buckets (two containing dried fish sealed inside) to the canoe, since carrying the extra buckets underwater themselves would have been too unwieldy. Once they reached the air on the other side, reeled in their canoe, righted it, and emptied the water out, they took a break to catch their breath. They then continued roughly in the same direction as their previous journey, with a detour to steer clear of signs of civilization before they rejoined their original route, which they successfully stayed on from then on, making for a lengthy but uneventful trip to the place they had landed at on their previous trip. Unlike on their previous trip, they were not hungry when they reached the land, as they had been snacking on dried fish the whole time. But they were quite tired, so they went to sleep before going any further. When they awoke, they filled the remaining space in their buckets of dried fish with ventmoss, filled two other buckets with water, and took off uphill, each carrying a food bucket over one shoulder and a water bucket over the other. They kept going past where they had turned back the previous time, and not long after, had to backtrack a bit because the route seemed too precariously steep. But after a little exploring, they were able to find a more navigable route up. After a long ascent and many quick breaks, they decided they needed some sleep. Unfortunately, they were on very steep ground. However, after a bit of exploring, they managed to find a crevice of flat ground big enough for both of them to lie down in, and they went to sleep. They continued their ascent when they awoke. At one point, Jerilyn, who was in the lead, slipped and fell on a steep stretch. Fortunately, she did not hit Kimi on the way down, and was not far above some flatter terrain on which she managed to stop her fall. Miraculously, the seals on both of her buckets had held. Kimi downclimbed to join Jerilyn, and asked if she was alright. Jerilyn reported that while she would probably develop some bruises from the fall, she was otherwise undamaged. They looked around for a safer way up, eventually found one, and continued on. The hill eventually flattened out considerably, and they were able to consistently walk upright without their hands on the ground, though still uphill. The rock ceiling got progressively lower, to the point where it wasn’t far above their heads. In places, they even had to duck under it, though there were also places where the rock ceiling was much higher. At one such point where they rock ceiling was anomalously high, they saw a few small points of light above them at an angle, and in that particular direction, the rock ceiling was further away than they could echolocate, if it was there at all. Eventually they grew tired, and went to sleep again. When they awoke, they noticed that the ground quite a ways behind them was glowing brightly. The air in a line connecting the rock ceiling to that patch of ground was also glowing faintly. They walked towards it, but the glowing patch narrowed and disappeared before they reached it. They turned back uphill and pressed on. Later, they saw another glowing patch of ground, again with accompanying faintly glowing ray of air shooting up to the rock ceiling, well to their right. They headed towards it, but it too narrowed and disappeared before they reached it, and they turned back uphill. The rock ceiling narrowed further, and they had to crawl to keep going. On multiple occasions, the rock ceiling come so close to the ground that they could not go further, or even merged with the ground, becoming a wall in front of them, and in such cases, they had to backtrack and find a different route up. At one point, the only route forward was so narrow between ground and ceiling that, in order to get through them, they had to take their buckets off their shoulders and push them ahead, and advance while lying flat. Kimi, being smaller, had an easier time of this, and at one point, Jerilyn got stuck, but Kimi was able to turn around in a slightly wider spot just ahead and give Jerilyn a hand, helping her get through. The ceiling rose further above them again, eventually to the point where they could walk upright without ducking. They saw a patch of little points of light ahead of them, and they went in that direction, which required a steep climb. As they drew close, it became apparent that the points of light were coming from a hole in the rock wall, as echoes bounced off rock to every side of the patch of points of light, but not from the patch itself. They passed through the hole. Though the ground continued to stretch out before them in all directions, there was no longer any wall to either side or in front of them, nor a ceiling above them, as far as they could tell from the echoes of their snaps. There was an almost-vertical wall behind them surrounding the hole, but rather than bending above them into a ceiling as it rose, it bent back in the other direction, as if to form high ground after flattening out further beyond their hearing range. There were many little points of light in every direction above them. There was one big source of bright light, almost a disk, but with one side blunted slightly inwards. There was faint light pervading through the air, so that they could see things in their immediate vicinity, including each other, clearly, despite the bright lights being far above them, and they could even see geological features much farther away than they could hear. “We found it!” said Kimi, “The place from the legend! Look, there’s the bright light Mom told us about!” She pointed to the big almost-disk of bright light above them. “Yeah,” said Jerilyn, “It doesn’t hurt to look at, though. And Mom didn’t mention all the other lights. Still, considering it was an ancient myth, it did turn out to be remarkably accurate. That sure is a lot of light.” They explored the new wide open land, snapping as they went to echolocate the ground, even though they could see it just fine, since they were not accustomed to using light to find their footing. They quickly discovered that it was far larger even than it had first appeared. For instance, they set off in the direction of what appeared to be a patch of vegetation low to the ground, which they could see but not echolocate, but the vegetation seemed to grow larger but draw further away as they approached, not coming within echolocating range until well after they expected it to. On their way, they heard a burbling sound, and investigating, they found a trail of fast-moving liquid flowing across the ground. Kimi tapped the surface of the liquid hesitantly, then cupped her hands, plunged them under the surface, and brought some of the liquid back up in her hands. It felt like water. She sipped it. It tasted like water. She reported her findings, and Jerilyn followed suit, and concurred. They had never come across such a wide stretch of such fast-flowing, shallow water before. It was a fortunate find, as they had been running low on water, and would have had a hard time on their way back if they hadn’t found more water. The water was fresher than the water in their buckets, so they refilled their buckets with it. By the time they finally reached the patch of vegetation they’d been headed towards, it became apparent that the vegetation, which they had initially thought to be low to the ground, was actually enormous, with thick stalks extending far over their heads, high enough to extend well past the rock ceiling from home, and branching out, with vegetation covering the branches far above them. A similar phenomenon occurred when they headed for some small hills in the distance. Again, the hills seems to draw further away as they approached. But unlike the vegetation, the hills did not also seem to grow as they approached. They pursued the hills longer than they had pursued the vegetation, but the hills still seemed no closer, and their size hadn’t changed. They speculated that perhaps the hills were simply illusions, or perhaps they were vastly further away than the vegetation had been. They were tired. They found a good spot to lie down, and went to sleep. Something was wrong. Kimi opened her eyes and screamed, waking Jerilyn, who also screamed. There was light everywhere. So much light, as if an anglerfish’s lure was right in front of their eyeballs, except that it was coming from all directions. They quickly identified the source of the light: an inconceivably bright light coming from the ground in the distance, which, true to the legend, it hurt to look at. They turned away from the light, held each others’ hands, and took deep breaths to calm themselves down while they got used to the incredible quantity of light all around them. Jerilyn speculated that, since this light was so bright it hurt to look at, and was located far away on the ground, and the light they’d seen before was merely bright and located far up above them, that perhaps the light that their Mom had spoken of in the myth, which was supposed to be painfully bright and high up above them, was a conflation of the two lights that they’d seen. Once they’d calmed down a bit, they kept exploring. The bright light slowly climbed above the ground and into the air, which Jerilyn noted meant she was probably wrong in her earlier speculations. At the same time, the light kept gradually getting even brighter, to the point where it hurt to look in any direction at all, and, counterintuitively, it actually got harder to see as the intensity of light increased. The novelty of so much light flooding their surroundings wore off quickly, so they ended up spending a lot of time with their eyes closed, but their ability to see farther than they could echolocate was useful for navigating, so sometimes they would squint or partially cover their eyes instead. Eventually, Kimi noticed that her skin hurt. She remarked on this, and Jerilyn noticed that her skin hurt as well. There was no obvious cause to their ailments. Jerilyn speculated that, since they’d been fine before the light got so bright, and the skin under their clothing didn’t hurt, that perhaps the light was hurting their skin. They decided to try getting out of the light. They found some more of the tall vegetation, which was dense enough to block much of the light from coming under it. They took a break under it. It was generally more pleasant there, as it was cooler (it had been warm earlier), and the reduced level of light didn’t hurt their eyes as much. Their skin kept getting worse, though. This gave them some doubt over whether it was the light that was hurting their skin, but it still seemed possible that it was because of the light, and their skin was continuing to hurt because of damage already done. And they didn’t have any better ideas than staying there; the hole in the rock that they had emerged from was far away, and they didn’t feel like making their way back to it in all the light, in case it was the light that was hurting their skin. They had no guarantee that the light would go away, but since it had been much dimmer earlier, that gave them some hope that it would dim again. Kimi began to cry out of some combination of fear and the pain of her skin. Jerilyn tried to comfort her, though her skin also hurt, and she was also concerned. They waited there a long time without the light going away. They were exhausted, as they had been woken up by the light well before they would have woken up on their own, but they also couldn’t get to sleep because of the light, stress, and pain in their skin. Their skins were growing blisters, and they were losing hope that the light would go away any time soon, so they were considering making their way back to the hole, when they noticed that the source of the light was slowly making its way back towards the ground. They decided to wait for it to get there to see what would happen. The light slowly dimmed as the bright light drew close to the ground, and Kimi and Jerilyn took off for the hole they’d emerged from. They’d gotten used to the way that their surroundings would seem to grow and draw away as they approached, and they were able to use landmarks they recognized by sight to navigate back to the hole. They refilled their water buckets again when they reached the fast-flowing vein of water. There was plenty of vegetation and wildlife around them, and they speculated that some of it might be edible, and they had gone through well over half their food, so it was tempting to attempt to restock on food for the return trip, but they didn’t know how to determine what was edible, as they didn’t recognize any of it. Jerilyn was concerned that, since light seemed to be toxic to their skin in high doses, perhaps consuming vegetation that had been exposed to that much light might also be toxic to them (she realized later that this could also be an issue with the water that they’d found, but it wasn’t like they could just not drink water, so it was a risk they’d have to take). They had to make do with the food that they’d already packed. The bright light in the sky was long gone, and, following its departure, the ambient light continued to dim. By the time they reached the cliff, the ambient light had returned to the level it had been at when they’d emerged, and they could see the little points of light, and the one big light in the sky that had so impressed them when they’d first seen it, but no longer seemed so grand, in comparison to the much brighter light that had replaced it for a time. They found the hole that they’d emerged from, walked in, and retreated inwards quite a ways from the hole before they collapsed on the ground, exhausted, and slept. Their skins were still painful and sensitive when they awoke, and the hole they’d traveled through was glowing intensely. They continued on their way back home, but when they got to approximately the point where they thought the narrow spot they’d crawled up through was, it took them a long time before they found it. Crawling back through it was quite painful, as it was impossible to climb through without scraping their sensitive skin. But after some painful struggle, they made it through. Their progress down was much slower than their progress up had been, both because of their skin sensitivity slowing down their crawling, and because it was difficult to retrace their steps. Recognizing this, they rationed their food and water so that it would last long enough. During the phase of their journey where they had to crawl under a low ceiling, they seemed hopelessly lost for a long time before they finally made their way to an area where they had enough room to stand up, and in that more open area, they were eventually able to find what seemed to be their previous path. Satisfied that they were no longer lost, they went to sleep before continuing. They had an easier time following their route up from then on. Despite their skin pain and weariness slowing them down, they actually exceeded their pace from the way up on the flatter portions of the trip, but they lost that extra time on the portions where they had to climb. Despite their efforts to conserve food, they ran out by the next time they stopped to sleep. After descending further for quite a long time, they ran out of water, but they realized that they were getting close to the water, the vents, and their canoe. They desperately needed more sleep, but they needed food and water more, so they pressed on. They were quite relieved when they finally reached the bottom. They drank from the water, sated their hunger with ventmoss, and went to sleep. When they woke up, they got in their canoe and set off for home. The Knot George and I were afraid we might be late for our meeting with the wizard, and not wishing to keep them waiting for us, we rushed there. Just before we arrived, I checked my phone and saw that we were two minutes early. I apprehensively prepared myself to knock on the door, but it swung open before I did so, revealing an impressively cluttered office. There was no one inside. George and I looked at each other. “Do you suppose we should go in?” George asked. “I don’t think the door would have opened if we weren’t supposed to,” I said. After some hesitation, I stepped inside, and George followed. The door closed behind us, causing us both to reflexively turn back towards the door. I tried the door handle, and found that the door offered no resistance to being opened again. Reassured that we weren’t trapped, I closed the door again, to return it to the condition that I presumed the wizard preferred it in. I noticed a loop of thin red rope hanging over the doorknob. There were no ends tied together in a knot. I picked it up to look for where the ends had been fused together, but could not find any joint; the rope appeared to have been constructed in a perfectly homogeneous circle. I placed the loop of rope back around the doorknob, and turned my attention to the other objects filling the room. There was a perfectly spherical orb sitting on the wizard’s desk. The orb had a cloudy appearance, and the clouds drifted aimlessly on the surface of the orb, despite the orb otherwise appearing to be solid. There was a shelf on a wall, holding an old oscilloscope, a set of the five platonic solids, each made out of smooth black material, and a beaker that held a liquid which was dancing around violently, but never spilling out of the beaker despite appearing to come close very frequently. There was a fireplace in the corner with a fire, but the only material in the fireplace was a bird sitting in the middle of the fire, but the bird wasn’t burning, and it looked like it was sleeping. The bird raised its head to look at us quizzically, and then went back to sleep. I heard a faint popping sound, which I soon figured out had come from the liquid in the beaker. There was a bookshelf, completely packed with books, covering an entire wall, and there were also a few open books and many loose sheets of paper covering the wizard’s desk, as well as a few sheets of paper that had fallen to the floor. Very few of the papers I saw were in English, and most weren’t in any script I recognized. Some didn’t appear to have any writing at all on them, consisting only of cryptic diagrams. I noticed a strand of rope sticking out from under some papers on the wizard’s desk. It appeared to be made out of very similar material to the loop of rope on the doorknob, except that it was green instead of red. I carefully moved the papers that were on top of it out of the way so I could see the rest of the rope. Like the rope hanging on the doorknob, it formed a closed loop. There were three points where the rope crossed over another part of the rope. The crossings alternated, in the sense that if you started at any crossing, and followed the strand on top around the loop, it would lead to the bottom strand of the next crossing it encounters, and then the top strand of the third crossing, and then the strand going under the point where you started, and so on. Only part of the rope was the green I had initially seen, another stretch of rope was red, matching the loop of rope on the doorknob, and part of it was blue. The rope was arranged so that the three points where the colors changed were hidden under the crossings. I moved the portion of red rope that crossed over the boundary between green and blue, so that I’d be able to see the point where the color of the rope changed from green to blue. To my surprise, the piece of rope that I had just uncovered was solid blue all the way up to the new point that the red strand crossed over it. George asked me how it had done that, but I didn’t know, and I ignored the question. I wiggled the red strand some more, but the portion of the rope it was moving over kept changing between blue and green so that the color switch always occurred exactly under the red strand. I tried holding the red strand in place and pulling the green strand under it, but again blue rope turned green just as it emerged out from under the crossing. I lifted the red strand into the air, and moved my head around to look under it from both directions. The color of the lower strand shifted in unison with my head, so that I never caught a glimpse of the boundary between the colors. I wiggled the strands going over the other two crossings to see if they would exhibit the same phenomenon, and they did. I paused for a moment to stare at the rope in confusion, and then picked up a piece of green rope and moved it over the blue portion of the rope, forming two additional crossings. Blue rope turned red as the green strand passed over it, forming an additional stretch of red rope in the middle of the blue part of the rope, again with the color change happening precisely under the crossings. Next I tried moving the green strand over the point where the blue strand crossed over the boundary between red and green. As I had anticipated, the stretch of rope going over the crossing turned from blue to red as the green strand passed over it, and an additional short stretch of blue rope had formed out of the red rope coming out from under the crossing, with all color boundaries being hidden behind other stretches of rope. I returned the loop of rope to its original configuration, and then tried twisting part of the blue portion of the rope, so that it crossed over itself. This did not cause any color changes, and I undid the twist. “Hey George, I want to try something. Can you go around to the other side of the desk for a minute?” I said. “Are you sure the wizard will be okay with us messing with his stuff like this?” George asked. “I’m sure it’ll be fine. Come on,” I said, pushing George in the intended direction. I actually had no idea whether or not the wizard would mind, but my curiosity had won out over my fear of offending the wizard. George walked around to the other side of the desk as I had requested. “Okay, now look closely at this crossing,” I instructed, pointing to where the green stretch of rope passed over the boundary between the red and blue strands, which we were looking at from opposite sides. I crouched so that I was looking at the knot from a shallower angle, and George followed my example. I lifted the green strand going over the crossing up in the air. I was looking at the crossing from the side that the red strand was coming out from, and the blue stretch of rope coming out the other side appeared to turn red as the green rope passed in front of it in my field of “What’s it look like?” I asked. “The rope under the green strand is now blue up until the point where it crosses behind the strand,” he said. I put my finger on the red rope directly under the green part I had lifted. “So this looks blue?” I asked. “Yeah,” he said. “So you can see my finger touching a blue stretch of rope?” I asked. “Yeah, that’s what I said,” George confirmed. I stood up and bent over to look at the rope from above, and pressed the green strand I was holding into my face running vertically between my eyes, so that I could see the piece of rope crossing under it from opposite sides of the green strand with each eye. It was a purple blur that could have been the result of red light reflecting off the rope into my right eye and blue light reflecting off the rope into my left eye. I unfocused my eyes so that the stretch of rope I was looking at would appear in different places in my field of vision in each eye, and indeed, it appeared as separate red and blue strands. Suddenly remembering the loop of rope on the doorknob, I dropped the rope I was holding and went to go get it. George walked back around the desk to the side facing the door. I returned with the red loop of rope and held it over the rope on the table. The green and blue portions of the rope that I could see through the red loop had switched colors, while the red portion of the rope on the table was not changed in appearance by viewing it through the red loop. I lifted part of the rope on the table, and slid the loop of red rope under it. The loop was no longer red all the way around, with color changes whenever it passed under a strand of rope of a different color. I grabbed the formerly red loop of rope by a blue stretch in the center of the loop of rope on the table, and pulled it out. I was holding a solid blue loop of rope. I put the blue loop of rope aside, took out my phone from my pocket, and opened the camera. I lifted the green strand and put my phone under it to take a picture of the spot where the rope crossing under it switched from red to blue. The camera image on the screen showed the strand changing from red to blue right under the spot where the green strand crossed over the phone, so that the boundary between red and blue wasn’t visible on the screen. I took a picture, and then moved the rope out of the way so that I could see the spot where the color changed. But the picture I saw on the phone screen was of a completely red strand of rope. I moved the phone back under the green strand, and saw that the still image of a strand of rope in my camera was changing from red to blue as I moved the green strand over it. I pulled the phone back out the other side of the green strand, and it bore an image of a completely blue strand of rope. I closed the picture so I could take another one. The image of the knot in the phone screen looked the same as the actual knot, except that the colors red and blue were switched. I put down the phone, and pulled a pen and small notebook out of my pocket. I tore off a page of the notebook, and wrote on it the current color of the loop of rope I had taken from the doorknob (blue). I folded up the piece of paper, slipped it under the multicolored loop of rope with the crossings, and pulled it out through the center. I unfolded it, and found the word “green” written on it, in my handwriting, instead of the “blue” that I had written. I picked up my phone and called a friend. She picked up, and before she said anything, I said, “Hi Kelly. Pick a color. Red, green, or blue?” “Blue. Why do you ask?” she said. “I’ll explain later. Thanks. Bye,” I said, and hung up. I wrote “blue” under the word “green” on the piece of paper, folded it back up, and slipped it under the knot and pulled it out through the center, as I had done before. I unfolded it, and saw that the word “green” that had been near the top of the paper had turned into “red”, while the word “blue” that I had written when Kelly picked it had remained unchanged. I also noticed that the pen I was using had blue ink, and the color of the ink on the page had never changed. There were a couple more things I wanted to try. I thought through what I was going to do, and then called Kelly back. “Can you pick a color again? Same options,” I asked. “Red,” said Kelly. “Thanks,” I said, and hung up. I lifted part of the knot into the air and stuck my right hand under it, so that my hand was sticking out through the center part of the knot. The plan was to hand the phone from my left hand to my right hand, and then pull it with my right hand back from under the knot, except that if Kelly had named the current color of the loop of rope that had been on the doorknob, I would only go through the motions of this without actually holding the phone. The loop of rope from the doorknob was blue, and Kelly had said red, so I kept the phone in my left hand as I moved my left hand towards my right, and I attempted to grasp the phone with my right hand. But while I saw my right hand grab the phone, I felt my fingers pass through thin air where I saw the phone. I withdrew my right hand out from under the knot, and while the phone was definitely pulled out of my left hand, and I saw my right hand holding the phone as it receded, I felt my right hand in a fist closed around nothing. As my hand passed out from under the knot, the fist became visible and the phone seemed to disappear. George gasped, as this was the first sign visible to him that anything was amiss. “Where’s your phone go?” he asked. “I don’t know. In retrospect, I probably shouldn’t have used my phone for that. At least we’ve still got your phone if we want to try taking more pictures,” I said. I felt rather foolish, as I had actually identified this outcome in advance as consistent with previous observations, but somehow hadn’t seriously considered the possibility that it would actually happen. “You just managed to lose your phone in the magic rope. I’m not letting you touch mine,” said George. He had a point. I thought about how I might get the phone back, but couldn’t think of anything, and besides, there was another experiment I’d been going to try. I reached for the red strand of rope (chosen because it was the color that Kelly had picked), but before I touched it, it started receding under the green strand, as if the blue strand on the other side was being pulled, but the blue strand itself was motionless, and rather than turning blue as it came out from under the green strand, the red rope would simply vanish as it passed under the green strand, leaving a significantly shortened stretch of red rope by the time this stopped. The point where the red strand disappeared under the green was no longer aligned with the point that the blue strand came out from under the green on the other side. I grabbed the red rope near where it crossed under the blue strand and pulled. More red rope came out of nowhere so that the red strand still continued all the way up to where it disappeared under the blue strand, even as I pulled it away, just as if the green strand on the other side were passing under the blue strand and turning red, but the green strand itself did not move. The point where the red strand passed under the blue strand and vanished also became misaligned with the point where the green strand emerged out the other side. When I stopped pulling on the red strand, there was about the same amount of red rope visible as there had been before some of it had vanished under the green strand. “Hello, folks. Sorry I’m late,” came a voice from behind us in a heavy accent that I didn’t recognize. George and I turned around and saw someone of unidentifiable gender in robes and a pointy hat, carrying a wooden staff with a hexagonal piece of metal attached to the side and a shiny truncated octahedron fastened to the top, and wearing a ring on each of their ten fingers, each in a different style. The door was closed behind the wizard. I hadn’t heard it open or close. The wizard’s eye caught the knot of rope on their desk. “Oy, the bloody thing’s out of sync again,” they said, and walked over to the desk, put the staff down leaning against the desk, pulled a wand out of their robes, and jabbed their wand at the knot. They put their wand back in their robes, picked up the knot of rope, and threw it up in the air. When it landed back on the desk, the strands were perfectly aligned with each other again. “There we go,” said the wizard. They picked up their staff and gestured with it towards a wall, out of which lept two folding chairs, which positioned themselves in front of the wizard’s desk and unfolded into chairs that did not look the least bit like folding chairs. “Have a seat,” said the wizard, indicating the chairs. I put my phone back in my pocket, and sat down.
{"url":"http://alexmennen.com/","timestamp":"2024-11-04T09:04:17Z","content_type":"text/html","content_length":"861340","record_id":"<urn:uuid:de66db3b-6662-497c-883f-66d1f22889d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00168.warc.gz"}
The 3D Morse-Smale complex is a fundamental topological construct that partitions the domain of a real-valued function into regions having uniform gradient flow behavior. In this paper, we consider the construction and selective presentation of cells of the Morse-Smale complex and their use in the analysis and visualization of scientific datasets. We take advantage of the fact that cells of different dimension often characterize different types of features present in the data. For example, critical points pinpoint changes in topology by showing where components of the level sets are created, destroyed or modified in genus. Edges of the Morse-Smale complex extract filament-like features that are not explicitly modeled in the original data. Interactive selection and rendering of portions of the Morse-Smale complex introduces fundamental data management challenges due to the unstructured nature of the complex even for structured inputs. We describe a data structure that stores the Morse-Smale complex and allows efficient selective traversal of regions of interest. Finally, we illustrate the practical use of this approach by applying it to cryo-electron microscopy data of protein molecules.
{"url":"https://escholarship.org/search/?q=author%3APascucci%2C%20Valerio","timestamp":"2024-11-08T04:47:02Z","content_type":"text/html","content_length":"63435","record_id":"<urn:uuid:9f76e0ac-24c8-4fa1-8e2f-35e5345e5ca0>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00890.warc.gz"}
Analysis of Variance with SPSS - Problem 10.2: Post Hoc Multiple Comparison Tests – HKT Consultant Analysis of Variance with SPSS – Problem 10.2: Post Hoc Multiple Comparison Tests Posted on by admin Now we will introduce the concept of post hoc multiple comparisons, sometimes called follow­up tests. When you compare three or more group means, you know that there will be a statistically significant difference somewhere if the ANOVA F (sometimes called the overall F or omnibus F) is significant. However, we would usually like to know which specific means are different from which other ones. In order to know this, you can use one of several post hoc tests that are built into the one­way ANOVA program. The LSD post hoc test is quite liberal and the Scheffe test is quite conservative so many statisticians recommend a more middle of the road test, such as the Tukey HSD (honestly significant differences) test, if the Levene’s test was not significant, or the Games-Howell test, if the Levene’s test was significant. Ordinarily, you do post hoc tests only if the overall F is significant. For this reason, we have separated Problems 10.1 and 10.2, which could have been done in one step. Fig. 10.3 shows the steps one should use in deciding whether to use post hoc multiple comparison Fig. 10.3. Schematic representation of when to use post hoc multiple comparisons with a one-way ANOVA. • If the overall F is significant, which pairs of means are significantly different? After you have examined Output 10.1 to see if the overall F (ANOVA) for each variable was significant, you will do appropriate post hoc multiple comparisons for the statistically significant variables. We will use the Tukey HSD if variances can be assumed to be equal (i.e., the Levene’s test is not significant) and the Games-Howell if the assumption of equal variances cannot be justified (i.e., the Levene’s test is significant). First we will do the Tukey HSD for grades in h.s. Open the One-Way ANOVA dialog box again by doing the following: • Select Analyze → Compare Means → One-Way ANOVA… to see Fig. 10.1 again. • Move visualization test out of the Dependent List: by highlighting it and clicking on the arrow pointing left because the overall F for visualization test was not significant. (See interpretation of Output 10.1.) • Also move math achievement to the left (out of the Dependent List: box) because the Levene’s test for it was (We will use it later.) • Keep grades in the Dependent List: because it had a significant ANOVA, and the Levene’s test was not significant. • Insure that father’s educ revised is in the Factor • Your window should look like Fig. 10.4. • Next, click on .. and remove the check for Descriptive and Homogeneity of variance test (in Fig. 10.2) because we do not need to do them again; they would be the same. • Click on Continue. • Then, in the main dialogue box (Fig. 10.1), click on Post Hoc. to get Fig. 10.5. • Check Tukey because, for grades in h.s., the Levene’s test was not significant so we assume that the variances are approximately equal. • Click on Continue and then OK to run this post hoc test. Compare your output to Output 10.2a Output 10.2a: Tukey HSD Post Hoc Tests ONEWAY grades BY faedRevis /POSTHOC = TUKEY ALPHA(0.05). After you do the Tukey test, let’s go back and do Games-Howell. Follow these steps: • Select Analyze → Compare Means → One-Way ANOVA… • Move grades in h.s. out of the Dependent List: by highlighting it and clicking on the arrow pointing left. • Move math achievement into the Dependent List: • Insure that father’s educ revised is still in the Factor: • In the main dialogue box (Fig. 10.1), click on Post Hoc. to get Fig. 10.4. • Check Games-Howell because equal variances cannot be assumed for math achievement. • Remove the check mark from Tukey. • Click on Continue and then OK to run this post hoc test. • Compare your syntax and output to Output 10.2b. Output 10.2b: Games-Howell Post Hoc Test ONEWAY mathach BY faedRevis /POSTHOC = GH ALPHA(0.05). Interpretation of Output 10.2 The first table in both Outputs 10.2a and 10.2b repeats appropriate parts of the ANOVA table from Output 10.1. The second table in Output 10.2a shows the Tukey HSD test for grades in h.s. that you would use if the three group sizes (n = 38, 16, 19 from the first table in Output 10.1) had been similar. For grades in h.s., this Tukey table indicates that there is only a small mean difference (.22) between the mean grades of students whose fathers were high school grads or less (M = 5.34 from Output 10.1) and those fathers who had some college (M = 5.56). The Homogeneous Subsets table shows an adjusted Tukey that is appropriate when group sizes are not similar, as in this case. Note that there is not a statistically significant difference (p = .880) between the grades of students whose fathers were high school grads or less (low education) and those with some college (medium education) because their means are both shown in Subset 1. In Subset 2, the medium and high education group means are shown, indicating that they are not significantly different (p = .096). By examining the two subset boxes, we can see that the low education group (M = 5.34) is different from the high education group (M = 6.53) because these two means do not appear in the same subset. Output 10.2b shows, for math achievement, the Games-Howell test, which we use for variables that have unequal variances. Note that each comparison is presented twice. The Mean Difference between students whose fathers were high school grads or less and those with fathers who had some college was -4.31. The Sig. (p = .017) indicates that this is a significant difference. We can also tell that this difference is significant because the confidence interval’s lower and upper bounds both have the same sign, which in this case was a minus, so zero (no difference) is not included in the confidence interval. Similarly, students whose fathers had a B.S. degree were significantly different on math achievement from those whose fathers had a high school degree or less (p = .008). An Example of How to Write About Outputs 10.1 and 10.2. A statistically significant difference was found among the three levels of father’s education on grades in high school, F (2, 70) = 4.09, p = .021, and on math achievement, F (2, 70) = 7.88, p = .001. Table 10.2a shows that the mean grade in high school is 5.34 for students whose fathers had low education, 5.56 for students whose fathers attended some college (medium), and 6.53 for students whose fathers received a BS or more (high). Post hoc Tukey HSD tests indicate that the low education group and high education group differed significantly in their grades with a large effect size (p < .05, d = .85). Likewise, there were also significant mean differences on math achievement between the low education and both the medium education group (p < .017, d = .80) and the high education group (p = .008, d = 1.0) using the Games-Howell post hoc test. Source: Morgan George A, Leech Nancy L., Gloeckner Gene W., Barrett Karen C. (2012), IBM SPSS for Introductory Statistics: Use and Interpretation, Routledge; 5th edition; download Datasets and Materials. Same Category Posts Leave a Reply Cancel reply
{"url":"https://phantran.net/analysis-of-variance-with-spss-problem-10-2-post-hoc-multiple-comparison-tests/","timestamp":"2024-11-13T11:54:43Z","content_type":"text/html","content_length":"130552","record_id":"<urn:uuid:40fcc62c-b47e-453a-a3d3-4be0eb9d9b41>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00760.warc.gz"}
Standardized Cargo Stowage and Arrangement of the cargo, that are Non-Standardized, always demand special preparation. This article provides calculation, norms, and costs. The Method of Calculation of Non-shift Criterion for Structurizing Cargoes Non-Standardized Cargo (according to the IMO terminology) means a cargo, which requires individual stowage and securing arrangements. Non-standardized cargoes are subdivided into two groups to ensure their securing: 1. cargo units; 2. cargo structures (structurizing cargoes). Cargo units are cargoes, stowage and securing of which are performed in an individual manner by connecting each separate cargo unit to the ship’s hull. All the rest non-standardized cargoes are considered as cargo structures (structurizing cargoes), which being stowed on a ship, arrange discrete structures – bodies (stacks). A cargo structure is a discrete structure, comprising of separate packages, metal rolled products, pig iron, metal scrap and so on (of general or break-bulk cargo) stacked according to the given stowage pattern. The properties of cargo structures depend not only on the characteristics of individual cargo elements, but also on the order, direction and method of their stowage. One and the same cargo, stowed by using different patterns, generate different structures, which possess different properties, including resistance to shift due to external forces. Static stability angle χ of a structure is used as a measuring criterion of ability of a stack of a structurizing cargo to resist shift. It is an acute angle between the horizontal plane and the inclining bottom plane of the structure at the initial moment of its collapse in any form: • tipping, • sliding or loss of stability of the structure. Examples of coils stowage patterns in the upper tier of a stack at different Static Stability Angles χ are given below: Original researches have been made to determine static stability angles χ for different kinds of structuring cargoes, the values of which are given in the national Rules for safe sea carriage of particular kinds of general cargoes. For each structure pattern the value of static stability angle χ should be determined in accordance with this article Rules for Safe Transportation of Cargoes by Sea on the Cargo Ship“Rules and Requirements for Safe Cargo Carriage by Sea”, to the present Rules, and be indicated by the elaborator of the Cargo Information. Non-shift Criterion Safe carriage by sea of Structurizing cargo is estimated with the help of the Non-shift Criterion in the following formula: ${\lambda }_{s}=\frac{{\Theta }_{s}}{{\Theta }_{dyn}}\ge 1,Form.1$ • Θ[s] – cargo structure dynamic stability angle, deg, Θ[s] = F(χ, τ[r], sea navigation area); • τ[r] – rolling period of the ship with cargo, sec; • Θ[dyn] – rolling amplitude of the ship with cargo in holds, or dynamic heeling angle of the ship with cargo on the upper deck (or high above-water sided ship) during rolling beam on to the resonance heaving, corresponding to the sea navigation area of the ship, which is not run, deg, Θ[dyn] = F(h[0], sea navigation area); • h[0] – initial metacentric height of the ship with cargo, m. Cargo Structure Dynamic Stability Angle Θ[s] – is determined on the basis of the known Cargo Structure Static Stability Angle χ, taking into account the nature of the ship’s loading and sea navigation area, which define the dynamics of the ship’s rolling and pitching. Read also: Rules for Safe Transportation of Cargoes by Sea on the Cargo Ship Depending on the location of the surface of shifting of a stack of the cargo (above the ship’s centre of gravity or below it), two different dynamic models are applied, and each of them consists of two variants: • on the basis of the rolling amplitude of a low above-water sided ship; • on the basis of the dynamic heeling angle of a ship with large side windage area. When transporting the cargo, whose surface is above the ship’s centre of gravity, Θ[s] – cargo dynamic stability angle should be determined according to the diagrams with the respective χ or by solving, in relation to Θ[s] (in radians), the following equations. On the basis of the rolling amplitude of a low above-water sided ship: $tg\chi \frac{Sin{\Theta }_{s}+z\frac{4{\mathrm{\pi }}^{2}}{g{\tau }_{r}^{2}}{\Theta }_{s}}{Cos{\Theta }_{s}–{r}_{0}\frac{4\mathrm{\pi }}{g{\tau }_{r}^{2}}Cos{\Theta }_{s}}=0Form.2$ On the basis of the dynamic heeling angle of a ship with large side windage area: $tg\chi –\frac{Sin{\Theta }_{s}+z\frac{4{\mathrm{\pi }}^{2}}{g{\tau }_{r}^{2}}{\Theta }_{s}}{Cos{\Theta }_{s}}=0Form.3$ When transporting a cargo, the surface of which is below the ship’s centre of gravity, Cargo Dynamic Stability Angle Θ[s] should be determined by diagrams with the respective χ or by solving in relation to Θ[s] the following equations. On the basis of the lolling amplitude of a low above-water sided ship: $tg\chi –\frac{Sin{\Theta }_{s}}{Cos{\Theta }_{s}–{r}_{0}\frac{4{\mathrm{\pi }}^{2}}{g{\tau }_{r}^{2}}Cos{\Theta }_{s}}=0Form.4$ On the basis of the dynamic heeling angle of a ship with large side windage area: $tg\chi –\frac{Sin{\Theta }_{s}}{Cos{\Theta }_{s}}=0Form.5$ • τ[r] – rolling period of a ship with cargo, sec; • χ – cargo static stability angle, deg.; • g – acceleration of gravity, g = 9,81 м/с^2; • r[0] – semi-height if a wave, corresponding to the sea navigation area, m; • z – vertical distance between the cargo shifting surface and the ship’s centre of gravity, m. Dynamic heeling angle Θ[dyn] and the rolling amplitude of the ship for non-shift criterion calculation should be determined by the following method. There are the following accepted definitions for sea navigation areas: • Unrestricted – navigation in oceans and sea areas at seas with design value of wave height with 3 % probability of 11 m; • Restricted I – navigation in sea areas at seas with a wave height with 3 % probability of exceeding 8,5 m and with the ships proceeding not more than 200 miles away from the place of refuge and with an allowable distance between the places of refuge not more than 400 miles; • Restricted II – navigation in sea areas at seas with a wave height with 3 % probability of exceeding 7,0 m, with ships proceeding from the place of refuge not more than 100 miles and with an allowable distance between the places of refuge not more than 200 miles; • Restricted II СП – river-sea navigation at seas with a wave height with 3 % probability of exceeding 6,0 m with ships proceeding from the place of refuge: □ in open seas up to 50 miles and with an allowable distance between the places of refuge not more than 100 miles; □ in enclosed seas up to 100 miles and with an allowable distance between the places of refuge not more than 200 miles; • Restricted III СП – river-sea navigation at seas with a wave height with 3 % probability of exceeding 3,5 m with due regard for particular restrictions on the area and conditions of navigation resulting from the wind and wave conditions of the basins with determination of a maximum allowable distance from the place of refuge which in no case should be more than 50 miles; • M-СП – river-sea navigation with a wave height with 3 % probability of exceeding 3,5 m at sea regions according to the ship’s Class Certificate. Calculation of the heeling moment caused by wind pressure The heeling moment M[v], kN·m, is assumed to be equal to the product of wind pressure р[v], Pa, by windage area A[v], m^2, and by the distance z, m, between the centre of windage area and the actual waterline plane: M[v] = 0,001p[v]A[v]z.Form. 6 The value of heeling moment is assumed to be constant over the whole ship’s heeling period. The values of wind pressure р[v], should be taken from table 1 depending upon the ship’s sea navigation area and the arm of windage area, z in m. Table 1. Wind pressure p[v], Pa Navigation area z, m 0,5 1,0 1,5 2,0 2,5 3,0 3,5 4,0 4,5 5,0 5,5 6,0 6,5 7,0 & over Unrestricted – 706 785 863 922 971 1 010 1 049 1 079 1 108 1 138 1 167 1 196 1 216 Restricted I 0,567 of the wind pressure value adopted for the unrestricted service Restricted II 0,275 of the wind pressure value adopted for the unrestricted service Open in new window Calculation of the amplitude of roll The amplitude (in deg.) of roiling of a ship with a round bilge, which doesn’t have bilge keels and a bar keel should be calculated according to the formula: Θ[1r] = X[1]X[2]Y,Form. 7 • X[1], X[2] – non-dimensional factors; • Y – factor in deg. The values of factor Y shall be taken from Table 2 depending on the ship’s navigation area and Table 2. Factor Y and design value of wave height Navigation area Assumed waves height, m $\sqrt{{h}_{0}}/B$ 0,04 & less 0,05 0,06 0,07 0,08 0,09 0,10 0,11 0,12 0,13 & above Unrestricted 11,0 24,0 25,0 27,0 29,0 30,7 32,0 33,4 34,4 35,3 36,0 Restricted I 8,5 19,0 20,0 22,4 25,1 27,4 29,2 30,8 32,0 32,9 33,5 Restricted II 7,0 16,0 17,0 19,7 22,8 25,4 27,6 29,2 30,5 31,4 32,0 Open table in new window The values of factor X[1] shall be taken from Table 3 depending upon B/d ratio, where: • B – breadth of the ship, m; • d – draught, m. Table 3. Factor X[1] B/d X[1] B/d X[1] 2,4 and below 1,0 3,0 0,90 2,5 0,98 3,1 0,88 2,6 0,96 3,2 0,86 2,7 0,95 3,3 0,84 2,8 0,93 3,4 0,82 2,9 0,91 3,5 and above 0,80 The values of factor X[2] shall be taken from Table 4 depending upon the ship’s total block coefficient С[B]. Table 4. Factor X[2] C[B] 0,45 and below 0,5 0,55 0,6 0,65 0,7 and above X[2] 0,75 0,82 0,89 0,95 0,97 1,0 If the ship has bilge keels or a bar keel, or both, the rolling amplitude, in deg., should be calculated by the formula: Θ[m] = kΘ[1r],Form. 8 • k – factor, which is accepted according to table 5, depending upon ratio A[k]/LB; • A[k] – total overall area of bilge keels, or lateral projection area of the bar keel, or a sum of these areas, m^2; • L – length of the ship between the perpendiculars, m. Table 5. Factor k $\frac{{A}_{k\sigma }}{LB},%$ 0 1,0 1,5 2,0 2,5 3,0 3,5 4,0 and above k 1,00 0,98 0,95 0,88 0,79 0,74 0,72 0,70 Bilge keels are not to be taken into consideration where ships have the ice category in their class notation. The value of rolling amplitude for a ship having a sharp bilge should be assumed to be equal to 70 % of the amplitude, calculated according to formula 7. The rolling amplitude of ships provided with anti-roll devices should be determined taking no account of operation of these devices. The calculated values of the rolling amplitude should be rounded off to the tenth of degrees. Calculated values of the roiling amplitude for river-sea navigation ships should be determined in the same way as for the ships of Restricted II navigation area or according to individual methods approved in the established order. Determination of the ship’s dynamic heeling angle at simultaneously action of a suddenly applied moment of wind squall and the ship’s roiling The maximum dynamic heel occurs, if at the instant of sudden application of heeling moment, the ship had the largest list towards the opposite side caused by rolling. In order to determine the dynamic heel angle, the diagram of the dynamic stability should be extended in the direction of the negative abscissa, and point A corresponding to the amplitude of rolling Θ[m] should be fixed on it. Then a straight line should be drawn from point A parallel to the axis of abscissa, and segment AB, equal to one radian (57.3°), should be laid off on it. From point B segment BC, equal to the lever of the given heeling moment l[heel] should be laid off perpendicularly upwards. The abscissa of point E at the crossing of the straight line AC with the diagram of dynamic stability indicates the required dynamic heeling angle Θ[dyn]. Dynamic heel angle The longitudinal stability of cargo units and stacks of structurizing cargoes should be additionally checked at the calculated pitch amplitude of the ship or at arbitrarily calculated value of static trim angle of 17°. Such inspection should be carried out taking into account the friction coefficients of the used materials and keeping the balance of the respective moments. If, as a result of a calculation according to formula 1, non-shift criterion’s value of the cargo proves to be less than 1,0, it indicates that the necessity of its securing exists, specifying that the strength of the securing devices from each side of the ship is determined by the load Q, t(f), arising in case the dynamic heeling angle exceeds the cargo dynamic stability angle and is calculated according to the formula: $Q=n·p·\left(tg{\Theta }_{dyn}–tg{\Theta }_{s}\right),Form.9$ • n – quality of cargo units to be secured, pcs.; • P – average mass of a cargo unit, t. The quantity of the required lashings N is determined by the scheme of their attachment and safe (maximum) working load SWL or breaking load BL (see tables below). In case a lashing line coincides with the direction of load’s action, this quantity N of the required lashings per each part of the cargo to be secured is determined by the scheme of their attachment and their safe (maximum) working load (SWL), if the cargo is stowed in cargo compartments, or their breaking load (BL), if the cargo is stowed on the upper deck and hatch covers, by the formula: N = Q/SWL (BL). Form. 10 The Standards for strength of Cargo Securing Devices and Approximate consumption of them Table 6. The Standards for Strength of Cargo Securing Devices Kind of device Safe (Maximum) Working Load SWL Proof load PL Breaking load BL Safety factor k Wire Rope, single use 0,8 BL 1,00 SWL 1,25 SWL 1,25 Wire Rope, re-useable 0,33 BL 1,25 SWL 3,0 SWL 3 Chain lashings 0,5 BL 1,25 SWL 2,0 SWL 2 Shackles, eye rings and eye plates, turnbuckles of mild steel 0,5 BL 1,25 SWL 2,0 SWL 2 Other devices 0,5 BL 1,25 SWL 2,0 SWL 2 Steel band, single use 0,5 BL – 2,0 SWL 2 Open table in new window Table 7. Approximate consumption of Cargo Securing Materials and Devices per 1 t of cargo Cargo Timber, m^3 Wire, kg Nails, kg Rope, m Turnbuckles, pcs. Clamps, pcs. Metal products 0,020 3,4 0,080 6,0 0,8 2 Vehicles of up to 2 t 0,005 2,6 0,300 6,0 1,2 7 Vehicles of from 3 to 12 t mass 0,008 2,4 0,100 3,6 2,0 7 Vehicles of above 12 t mass 0,009 – 0,060 1,2 0,6 2 Steel pipes of big diameter 0,020 – 0,060 2,8 0,6 3 Large-dimensional cargoes of cylindrical shape 0,008 – 0,080 2,5 0,6 4 Metal barrels and drums 0,005 2,5 0,006 – – – Break bulk cargoes, boxes, sacks, etc. 0,002 – 0,02 – – – Boxes and unpackaged equipment of 2-20 t mass 0,020 2,1 0,400 4,0 0,6 2 Equipment of above 20 t mass 0,020 – 0,400 3,2 0,6 3 On average 0,011 1,3 0,087 3,0 0,7 3 Open table in new window Add a comment
{"url":"https://sea-man.org/cargo-stowage.html","timestamp":"2024-11-09T06:58:07Z","content_type":"text/html","content_length":"187369","record_id":"<urn:uuid:ff102440-4d9d-4742-9d0f-142946e05a8e>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00621.warc.gz"}
Propeller Turbine Mixer Design Calculator - Laminar Mixing Constant Online Web Apps, Rich Internet Application, Technical Tools, Specifications, How to Guides, Training, Applications, Examples, Tutorials, Reviews, Answers, Test Review Resources, Analysis, Homework Solutions, Worksheets, Help, Data and Information for Engineers, Technicians, Teachers, Tutors, Researchers, K-12 Education, College and High School Students, Science Fair Projects and Scientists By Jimmy Raymond Contact: aj@ajdesigner.com Privacy Policy, Disclaimer and Terms Copyright 2002-2015
{"url":"https://www.ajdesigner.com/phpmixing/propeller_mixing_power_laminar_constant.php","timestamp":"2024-11-12T23:48:17Z","content_type":"text/html","content_length":"28229","record_id":"<urn:uuid:00e12186-8777-41df-8ac4-96c64c937007>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00584.warc.gz"}
What are the 6 properties of a parallelogram? There are six important properties of parallelograms to know: • Opposite sides are congruent (AB = DC). • Opposite angels are congruent (D = B). • Consecutive angles are supplementary (A + D = 180°). • If one angle is right, then all angles are right. • The diagonals of a parallelogram bisect each other. Do parallelograms always equal 180? Therefore, the sum of any two adjacent angles of a parallelogram is equal to 180°. Hence, it is proved that any two adjacent or consecutive angles of a parallelogram are supplementary. Can a parallelogram have 4 right angles? A rectangle is a parallelogram with four right angles, so all rectangles are also parallelograms and quadrilaterals. Can a parallelogram have 90 degree angles? A Parallelogram can be defined as a quadrilateral whose two s sides are parallel to each other and all the four angles at the vertices are not 90 degrees or right angles, then the quadrilateral is called a parallelogram. If one angle is 90 degrees, then all other angles are also 90 degrees. Do parallelograms have 90 degree angles? A Parallelogram can be defined as a quadrilateral whose two s sides are parallel to each other and all the four angles at the vertices are not 90 degrees or right angles, then the quadrilateral is called a parallelogram. The opposite sides of parallelogram are also equal in length. Do all parallelograms have 4 equal sides Yes or no? A parallelogram has two parallel pairs of opposite sides. A rectangle has two pairs of opposite sides parallel, and four right angles. It is also a parallelogram, since it has two pairs of parallel sides. A square has two pairs of parallel sides, four right angles, and all four sides are equal. Do parallelograms equal 360? Parallelograms have angles totalling 360 degrees, but also have matching pairs of angles at the ends of diagonals. Do all parallelograms have 4 right angles? Right Angles in Parallelograms In a parallelogram, if one of the angles is a right angle, all four angles must be right angles. If a four-sided figure has one right angle and at least one angle of a different measure, it is not a parallelogram; it is a trapezoid.
{"url":"https://diaridelsestudiants.com/what-are-the-6-properties-of-a-parallelogram/","timestamp":"2024-11-07T15:51:25Z","content_type":"text/html","content_length":"46092","record_id":"<urn:uuid:be526439-b14f-4121-8a71-7fb320acd27c>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00500.warc.gz"}
Crackway for GATE 2014 Graduate Aptitude Test in Engineering (GATE) is an examination conducted by 7 IITs and IISc, which is two headed opportunity for engineering & Science students those who are pursuing final year. GATE score is equal to an eligibility card to pursue master education in top colleges (IISc & IITs) and also for job in relevant streams of public sectors. Each year, one among the group of 7 IITs and IISc get a chance to conduct GATE. It has been declared that GATE 2014 will be conducted by IIT Kharagpur. The crack way for GATE 2014 lies in three stages. 1. Structure of GATE 2014. 2. Preparation. 3. Practice. Analyzing about GATE 2014: GATE 2014 is different from its history. The question paper has 65 questions as it is for the last years. The main variation is GATE 2014 has become ONLINE in all of its streams of engineering and sciences. The new approach in GATE 2014 is it also has Numerical answer questions. The student is expected to type the answer using the virtual Keypad. A total mark for 65 questions is 100. Out of 65 questions, 30 questions carry 1 mark each and 35 questions carry 2 marks each. For wrong answer, 1/3^rd of mark is deducted for 1 mark questions and 2/3^rd for 2 mark In general, Aptitude carries 15% of the total mark and mathematics carries 13% of the total mark. So, 10 questions (5 from 1 mark & 5 from 2 mark) will be asked in aptitude and 8 questions (3 from 1 mark & 5 from 2 mark) will be asked in mathematics. 42 questions bounds to technical. For papers related to Engineering sciences, Mathematics and Aptitude are the sections which are compulsory and they need to choose 2 another technical subjects which are on their choice. For papers related to Life Sciences, Chemistry and Aptitude are the sections which are compulsory and they need to choose 2 technical subjects which are on their Choice. The subject selection for engineering sciences and life sciences are published in IIT Kharagpur website and you can check it in here. From the analysis, we have got the whole concept of GATE. The next step is preparation. Preparation: Questions at GATE test the students in 4 areas. Recall, Comprehension, Applications, Analysis and Synthesis. Recall: This test the technical knowledge of student in formulae, principles and facts. Comprehension: It is the area where the basic technical knowledge of student is tested. Applications: The approach of applying theoretical knowledge to practical problems and its practical uses is tested. Analysis and Synthesis: The questions are asked with the data depicted with pictures, graphs etc… Relevantly, the questions are to be answered. From today, only five and half months are available for the preparation. From many people who have secured top scores in GATE, this preparation plan is framed. If your gate score is above 60 for 100, then your percentile will be 98. Preparation Plan: 1. Do not try to finish all the topics in this short span. Compare with the past year papers and Concentrate on the 4 to 5 technical subjects (excluding mathematics and aptitude) which have relevantly higher weightage in GATE. 2. Put a plan to complete those 4 or 5 technical subjects and mathematics within December 31^st 2013. It is possible to complete a subject by allotting 4 to 5 hours a day. 3. Know the basics of the concept that you are reading, after reading the particular concept a day, practice problems from Standard Textbooks. 4. Before, reading a new concept the next day, just take a glance on the concepts that you have read till yesterday. 5. After completing a particular subject, on the next day, review the whole concepts, set time and take a self attempt in answering many questions as you can. Last but not least… 6. Don’t ignore the problem/ question if you have got wrong answers even though you knew the concept. This tells, you have not understood concept completely. The final step to crack GATE is PRACTICE. After completing the technical topics and mathematics within 31^st December 2013, start practicing from 1^stJanuary 2014. Practice: Daily, take GATE model papers, question papers which are available from internet, set time and practice it. Meanwhile, practice aptitude questions daily during the months of January and Before the day of GATE– Revise the concepts that you have read in these 5 months. Stay cool, be calm and don’t eat some unhygienic food. This makes you worse. On the day of GATE– Don’t be nervous. Don’t panic if 1^st question is not asked from the subjects what you have learned. Select the questions, which are relevant to what you have studied. Take 2 to 3 minutes to solve the question. DO NOT TRY TO ATTEMPT QUESTIONS WITHOUT KNOWING THE CONCEPT. Negative marking plays a major role in GATE. Final words, if you plan perfectly, execute it with dedication & passion then YOU ARE THE ONE WHOLE WILL ACHIEVE IT. *You can check out more details regarding application, admit car and examination date in the official site of IIT Kharagpur. You must be logged in to post a comment.
{"url":"https://youthopia.in/crackway-for-gate-2014/","timestamp":"2024-11-10T13:04:58Z","content_type":"text/html","content_length":"72537","record_id":"<urn:uuid:09d51935-7355-43a5-88ce-ac620f1e6179>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00888.warc.gz"}
Physics - Online Tutor, Practice Problems & Exam Prep Guys, in this video, I want to show you a specific example of calculating the work that's done by gravity, but in a specific case where you have an object on an inclined plane. The reason I'm doing this is because these problems can be kind of confusing and tricky. They sometimes try to trick you into plugging the wrong number into your equations. So I want to go over this so to make sure that you actually don't make that mistake. So we're actually going to start with the example and we'll come back to this point in just a second here. In this problem, we have a 100 kilogram block that's on an incline, and it's going to slide down a distance of 12 meters. We also know that the angle of the incline is 37 degrees. What we want to do in this problem is we want to calculate the works done by mg[x], mg[y], and mg. So, basically, the works done by each one of these, the components of mg and then also mg itself. So how do I do that? Well, I want to go ahead and draw those forces in my diagram. Right? In the inclined plane, my mg points straight down. My normal points perpendicular to the surface. And because we tilt our coordinate system, then we just have to break up this mg into its components. So this is going to be my mg[x] and this is going to be my mg[y]. So I'm trying to figure out the works done by the components of mg and then mg itself. Alright? So how do I figure out works? I'm always just going to use fdcosine theta. So we're just going to use mg[x] × d × cosine(θ), and we're going to use mg[y] × d × cosine(θ), and then mg × d × cosine(θ). What I want you to remember is that the θ term that we use in our fdcosin(θ) always refers to the angle that is between the force and the displacement or the distance. Basically, it's the angle between the force and the motion. That's why I always draw an arrow between this θ and my f and my d just so I don't forget that. Okay. So what I have to do is I want to figure out mg[x] × d × cosine between these two vectors here. So let's get started. How do I figure out mg[x] in an inclined plane? Well, remember, mg[x] is going to be mg × the sine of the incline angle. This 37 degrees here is actually θ[x]. It's the angle with respect to the horizontal. That's my 37 degrees there. So really what happens is I can replace this mg[x] with mg × sin(θ[x]) here. So we're doing mg × sin(θ[x]) × d × the cosine of θ here. So here's what I want to point out, is that these problems, with inclined planes, be careful not to plug in the incline angle, this θ[x] here. Don't plug in this θ[x] here for cosine(θ), which is really in your fdcosine(θ). Right? So don't get these two angles confused. This is the angle of the incline, your θ[x]. This is the angle that is between these two forces. They're not the same thing. Alright? So let's go ahead and do this. Right? We can actually go ahead and plug in all of our numbers. We have m as 100, we have 9.8 for g. And then we have the sine of θ[x]. That's my 37 degrees here. Now I multiply it by the displacement or the distance, and that's 12 meters down the ramp. Now the cosine is going to be the cosine of the angle between your mg[x] and your displacement. If you take a look at your diagram here, mg[x] points down the ramp, and your displacement also points down the ramp, which means that the angle that we plug in here is actually 0, not 37 degrees. And remember that the cosine of 0 always evaluates to 1. So, once you plug all this stuff into your calculator, you get 7,080 joules. So that's the work done by mg[x]. Now let's take a look at mg[y]. Basically, we're going to do the exact same thing here except now we're going to do mg × the cosine of θ[x], right because that's mg[y] × d × cosine(θ). And we plug this we're just going to plug in all the numbers. This is going to be 100 times 9.8 times the cosine of 37. Now we have times 12. And now we just figure out the cosine of the angle between these two forces, right between my mg × cosine(θ) or my mg[y] and the distance. What is that angle? Well, if you take a look at your diagram, your mg[y] points into the incline and your displacement points down the incline. Those are perpendicular to each other. Right? This is actually a right angle like this. So what happens here is we actually plug 90 into our cosine term, not 37 again. And remember that the cosine of 90 always evaluates to 0. So even though you have a bunch of numbers out in front of here, you're going to multiply it all by 0. And the work that's done by mg[y] is always just going to be equal to 0 joules. So this is actually always going to be the case. And you can think about it like this. Right? Your mg[y] always points into the incline, but you're always moving down the incline so it can never do any work like that. The last thing we do is mg × d × cosine(θ) here. So what we have to do now is actually look at this force, which is our mg, and we have to figure out the angle between this mg and this displacement here. And this is where it gets pretty complicated because this angle here in our inclined plane problems is actually θ[y]. It's the angle with respect to the y-axis which we're almost never given. So this actually ends up being really complicated here and we're not going to use this approach to solving the work done by gravity. Fortunately, we can add them up just like regular numbers. So if I want to calculate the work done by gravity, it's really just the addition of the work done by mg[x] and the work done by mg[y]. I can add these two works together even though they come from forces that point in the different directions like x and y because works are actually scalars, they're not vectors. And what this allows us to do is our work done by mg[y], remember, is always just equal to 0, so we can just cross that out. So, really, what happens here is that the work done by gravity is always just the work done by mg[x]. And this should make some sense. So this is going to write W [mg] is always equal to the work done by mg[x]. And the reason for that is if you think about this, the component of gravity that pulls the block down the ramp is mg[x]. That's the only thing that can do work on the box. So what happens here is that our work done by mg is just 7,080 joules, just like your mg was. And that's the answer. So you have 7,080 joules and so your work done is equal to mg[x]. By the way, this is also always going to be the case. Alright. So that's it for this one, guys. Hopefully, that made sense. And let me know if you have any questions.
{"url":"https://www.pearson.com/channels/physics/learn/patrick/work-energy/work-by-gravity-inclined-planes?chapterId=8fc5c6a5","timestamp":"2024-11-12T02:11:42Z","content_type":"text/html","content_length":"514133","record_id":"<urn:uuid:5256fae3-1331-46b1-8c21-23ae08d937af>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00161.warc.gz"}
Integrate tool's XY tolerance 01-12-2016 10:17 AM I'm wondering how the Integrate tool takes into account the XY tolerance that is specified. For example, I am running the Integrate tool on a point dataset, as I want to move points that are within a certain distance from each other so that they are coincident. Then I can run the Collect Events tool to get a final result that sums up the total number of points that are coincident throughout the dataset. However, I can't figure out how the XY tolerance works. I entered an XY tolerance of 3km, and expected the result to move any points that are within 3km of each other, but leave points that are further than 3 km away. Instead, it's moving points that are as much as 8.48km apart so that they are coincident (but as soon as the points are 8.49km apart, then the tool leaves them alone). I'm using ArcGIS 10.2, and my point dataset is in NAD83 UTM Zone 12. Any explanation for how this XY tolerance works would be appreciated! 01-12-2016 12:07 PM 01-12-2016 10:25 AM 01-12-2016 10:43 AM 01-12-2016 11:58 AM 01-12-2016 12:07 PM 01-12-2016 12:18 PM 01-12-2016 12:20 PM 01-12-2016 01:36 PM 05-16-2019 08:14 PM 04-12-2024 05:33 AM
{"url":"https://community.esri.com/t5/data-management-questions/integrate-tool-s-xy-tolerance/td-p/641002","timestamp":"2024-11-09T17:29:48Z","content_type":"text/html","content_length":"411873","record_id":"<urn:uuid:c65a6ae1-1927-4459-9597-081b2de48cf3>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00367.warc.gz"}
Java Program to Count Positive Negative and Zero from Array - BTech Geeks Java Program to Count Positive Negative and Zero from Array In the previous article, we have seen Java Program to Remove Even Numbers from Array In this article we are going to see how we can count the number of negative, positive and zero elements by using Java programming language. Java Program to Count Positive Negative and Zero from Array Array is a data structure which stores a fixed size sequential collection of values of single type. Where with every array elements/values memory location is associated. Each array elements have it’s own index where array index starts from 0. In Array set of variables referenced by a single variable name and it’s array index position. It is also called as a container object which contains elements of similar type. Declaration of an array: dataType[] arrayName; (or) //Declaring an array dataType []arrayName; (or) dataType arr[]; Instantiation of an Array: arrayName = new datatype[size]; //Allocating memory to array Combining both Statements in One: dataType[] arrayName = new dataType[size] //Declaring and Instantiating array Initialization of an Array: arrayName[index-0]= arrayElement1 //Initializing the array arrayName[index-s]= arrayElementS Combining all Statements in One: dataType arrayName[ ]={e1,e2,e3}; //declaration, instantiation and initialization Let’s see different ways to count positive, negative and zero element in an array. Method-1: Java Program to Count Positive Negative and Zero from Array By Static Initialization of Array Elements • Declare and initialize an array. • Iterate the array. • If array element is greater than zero then it is positive number so increase the positive count. • If array element is smaller than zero then it is negative number so increase the negative count. • Else element is equal to zero then it is zero so increase the zero count. • Then print the result. public class Main public static void main(String args[]) // Array with elements int arr[] = {10,-15,1,-30,50,7,1,0,0}; //variable to store count of positive, negative and zero elements //initialized with to 0 int positive=0,negative=0,zero=0; //Loop to count positive,negative and zero elements for(int row=0;row<arr.length;row++) //if array element is greater than zero it is positive //incrementing positive count value //if array element is smaller than zero it is negative else if(arr[row]<0) //incrementing negative count value //if array element is not greater or smaller than zero then it is equal to zero //incrementing zero count value System.out.println("Number of positive elements are : "+positive); System.out.println("Number of negative elements are : "+negative); System.out.println("Number of zero elements are : "+zero); Number of positive elements are : 5 Number of negative elements are : 2 Number of zero elements are : 2 Method-2: Java Program to Count Positive Negative and Zero from Array By Dynamic Initialization of Array Elements • Take the array size input from user. • Take the input of array elements. • Print the array. • Then Iterate the array. • If array element is greater than zero then it is positive number so increase the positive count. • If array element is smaller than zero then it is negative number so increase the negative count. • Else element is equal to zero then it is zero so increase the zero count. • Then print the result. import java.util.*; public class Main public static void main(String args[]) Scanner scan = new Scanner(System.in); // Array with elements int arr[] = null; System.out.print("Enter the length of the array : "); int length = scan.nextInt(); arr = new int[length]; int iter; // Entering the array elements System.out.println("Enter the array elements : "); System.out.println("The array elements are : "); //For Loop to print the elements System.out.print(arr[iter]+" "); //variable to store count of positive,negative and zero elements //initialized with to 0 int positive=0,negative=0,zero=0; //Loop to count positive,negative and zero elements for(int row=0;row<arr.length;row++) //if array element is greater than zero it is positive //incrementing positive count value //if array element is smaller than zero it is negative else if(arr[row]<0) //incrementing negative count value //if array element is not greater or smaller than zero then it is equal to zero //incrementing zero count value System.out.println("\nNumber of positive elements are : "+positive); System.out.println("Number of negative elements are : "+negative); System.out.println("Number of zero elements are : "+zero); Enter the length of the array : 10 Enter the array elements : 1 -2 3 -4 5 0 6 0 -7 8 The array elements are : 1 -2 3 -4 5 0 6 0 -7 8 Number of positive elements are : 5 Number of negative elements are : 3 Number of zero elements are : 2 Practice Java programming from home without using any fancy software just by tapping on this Simple Java Programs for Beginners tutorial. Related Java Programs:
{"url":"https://btechgeeks.com/java-program-to-count-positive-negative-and-zero-from-array/","timestamp":"2024-11-09T00:08:55Z","content_type":"text/html","content_length":"66416","record_id":"<urn:uuid:73539fe5-cc54-4d59-b7b4-d81abcaeb6b9>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00159.warc.gz"}
In the beetles described in the animation, there were two alleles for color, brown and green. suppose that you discover a very small population of these beetles, consisting of the individuals shown below in the beetles described in the animation, there were two alleles for color, brown and green. suppose that you discover a very small population of these beetles, consisting of the individuals shown below. how can you calculate the frequency of each allele in this population? To calculate the frequency of each allele in the population, you need to count the number of individuals carrying each allele and divide it by the total number of individuals in the population. Let’s say we have a population of beetles consisting of the following individuals: • 10 beetles with brown color (BB) • 6 beetles with green color (GG) • 4 beetles with a heterozygous genotype (BG) To calculate the allele frequencies, follow these steps: 1. Determine the total number of individuals in the population. In this case, we have 10 + 6 + 4 = 20 individuals. 2. Count the number of individuals carrying each allele. In this case, we have 10 beetles with the brown allele (BB) and 6 beetles with the green allele (GG). Since individuals with the heterozygous genotype (BG) carry both alleles, we count them as one for each allele. 3. Calculate the frequency of each allele by dividing the count by the total number of individuals. □ Frequency of brown allele (B): 10 BB individuals / 20 total individuals = 0.5 or 50% □ Frequency of green allele (G): 6 GG individuals / 20 total individuals = 0.3 or 30% Therefore, in this small population of beetles, the frequency of the brown allele is 0.5 or 50%, and the frequency of the green allele is 0.3 or 30%.
{"url":"https://en.sorumatik.co/t/in-the-beetles-described-in-the-animation-there-were-two-alleles-for-color-brown-and-green-suppose-that-you-discover-a-very-small-population-of-these-beetles-consisting-of-the-individuals-shown-below/2351","timestamp":"2024-11-10T14:36:55Z","content_type":"text/html","content_length":"23371","record_id":"<urn:uuid:841d52e3-40b5-4271-8f4c-1fcafa35ff3e>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00890.warc.gz"}
Data Science Training in Hyderabad | Vcube Software Solutions DATA SCIENCE : It is an inter-disciplinary field that uses scientific methods, algorithms and systems to extract knowledge and insights from many structural and unstructured data. Power Bi is an all-in-one high-level tool for the data analytics part of data science. It can be thought of as less of a programming-language type application, but more of a high-level application akin like Microsoft Excel. Duration:60 days Data Analysis Courses in Hyderabad Description of the Data Science Course Data Analysis Courses in Hyderabad Why data science is so popular? Data Science: In the wonderful and vast world of data science in 2020, there are a plethora of options and approaches to analytics and machine-learning. While most data scientists approach a solution using popular programming languages such as Python, R, Scala, or even Julia, there are some higher-level implementations that can get the job done in some cases. Microsoft’s Power-Bi is a great example of this. You will need a Windows system to install Power-Bi. I’m a Fedora user, so I used Gnome’s Boxes to boot up a new virtual kernel. Boxes is a QEMU GUI that makes it incredibly simple to run multiple virtual systems under one operating system at the same time. Alternatively, you could always use the application’s online version. In my limited experience with the web-friendly version, I discovered that the features are rather lacking in comparison and are frequently split between the two. Curriculum for the Data Science Data Science Course in Hyderabad What is a Data Science? Who is a Data Scientist? Who can become a Data Scientist? What is an Artificial Intelligence? What is a Machine Learning? What is a Deep Learning? Artificial Intelligence Vs Machine Learning Vs Deep Learning Real Time Process of Data Science Data Science Real Time Applications Technologies used in Data Science Prerequisites Knowledge to Learn Data Science What is a Machine Learning? Machine Learning Vs Statistics Traditional Programming Vs Machine Learning How Machine Will Learn like Human Learning Machine Learning Engineer Responsibilities Types of Machine Learning Supervised learning Un-Supervised learning Reinforcement Learning PYTHON Programming Introduction History of Python Python is Derived from? Python Features Python Applications Why Python is Becoming Popular Now a Day? Existing Programming Vs Python Programming Writing Programs in Python Top Companies Using Python Python Programming Modes o Interactive Mode Programming o Scripting Mode Programming Flavors in Python, Python Versions Download & Install the Python in Windows & Linux How to set Python Environment in the System? Anaconda – Data Science Distributor Downloading and Installing Anaconda, Jupyter Notebook & Python IDE – Jupyter Notebook Environment Python IDE – Spyder Environment Python Identifiers(Literals), Reserved Keywords Variables, Comments Lines and Indentations, Quotations Assigning Values to Variables Data Types in Python Mutable Vs Immutable Fundamental Data Types: int, float, complex, bool, str Number Data Types: Decimal, Binary, Octal, Hexa Decimal & Number Conversions Inbuilt Functions in Python Data Type Conversions Priorities of Data Types in Python Python Operators o Arithmetic Operators o Comparison (Relational) Operators o Assignment Operators o Logical Operators o Bitwise Operators o Membership Operators o Identity Operators Slicing & Indexing o Forward Direction Slicing with +ve Step o Backward Direction Slicing with -ve Step Decision Making Statements o if Statement o if-else Statement o elif Statement Looping Statements o Why we use Loops in python? o Advantages of Loops o for Loop o Nested for Loop o Using else Statement with for Loop o while Loop o Infinite while Loop o Using else with Python while Loop Conditional Statements o break Statement o continue Statement o Pass Statement Advanced Data Types: List, Tuple, Set, Frozenset, Dictionary, Range, Bytes & Bytearray, None List Data Structure o List indexing and splitting o Updating List values o List Operations o Iterating a List o Adding Elements to the List o Removing Elements from the List o List Built-in Functions o List Built-in Methods Tuple Data Structure o Tuple Indexing and Splitting o Tuple Operations o Tuple Inbuilt Functions o Where use Tuple o List Vs Tuple o Nesting List and Tuple Set Data Structure o Creating a Set o Set Operations o Adding Items to the Set o Removing Items from the Set o Difference Between discard() and remove() o Union of Two Sets o Intersection of Two Sets o Difference of Two Sets o Set Comparisons Frozenset Data Structure Dictionary Data Structure o Creating the Dictionary o Accessing the Dictionary Values o Updating Dictionary Values o Deleting Elements Using del Keyword o Iterating Dictionary o Properties of Dictionary Keys o Built-in Dictionary Functions o Built-in Dictionary Methods List Vs Tuple Vs Set Vs Frozenset Vs Dict Range, Bytes, Bytearray & None Python Functions o Advantage of Functions in Python o Creating a Function o Function Calling o Parameters in Function o Call by Reference in Python o Types of Arguments Required Arguments Keyword Arguments Default Arguments Variable-Length Arguments Scope of Variables Python Built-in Functions Python Lambda Functions String with Functions o Strings Indexing and Splitting o String Operators o Python Formatting Operator o Built-in String Functions Python File Handling o Opening a File o Reading the File o Read Lines of the File o Looping through the File o Writing the File o Creating a New File o Using with Statement with Files o File Pointer Positions o Modifying File Pointer Position o Renaming the File & Removing the File o Writing Python Output to the Files o File Related Methods Python Exceptions o Common Exceptions o Problem without Handling Exceptions o except Statement with no Exception o Declaring Multiple Exceptions o Finally Block o Raising Exceptions o Custom Exception Python Packages o Python Libraries o Python Modules Collection Module Math Module OS Module Random Module Statistics Module Sys Module Date & Time Module o Loading the Module in our Python Code import Statement from-import Statement o Renaming a Module Regular Expressions Command Line Arguments Object Oriented Programming (OOPs) o Object-oriented vs Procedure-oriented Programming o Object o Class o Method o Inheritance o Polymorphism o Data Abstraction o Encapsulation Python Class and Objects o Creating Classes in Python o Creating an Instance of the Class Python Constructor o Creating the Constructor in Python o Parameterized Constructor o Non-Parameterized Constructor o In-built Class Functions o In-built Class Attributes Python Inheritance o Python Multi-Level Inheritance o Python Multiple Inheritance o Method Overriding o Data Abstraction in Python Graphical User Interface (GUI) Programming Python TKinter o Tkinter Geometry pack() Method grid() Method place() Method o Tkinter Widgets NumPy Introduction o What is NumPy o The Need of NumPy NumPy Environment Setup N-Dimensional Array (Ndarray) o Creating a Ndarray Object o Finding the Dimensions of the Array o Finding the Size of Each Array Element o Finding the Data Type of Each Array Item o Finding the Shape and Size of the Array o Reshaping the Array Objects o Slicing in the Array o Finding the Maximum, Minimum, and Sum of the Array o NumPy Array Axis o Finding Square Root and Standard Deviation o Arithmetic Operations on the Array o Array Concatenation NumPy Datatypes o NumPy dtype o Creating a Structured Data Type Numpy Array Creation o Numpy.empty o Numpy.Zeros o NumPy.ones Numpy Array from Existing Data o Numpy.asarray Numpy Arrays within the Numerical Range o Numpy.arrange o NumPy.linspace o Numpy.logspace NumPy Broadcasting o Broadcasting Rules NumPy Array Iteration o Order of Iteration F-Style Order C-Style Order o Array Values Modification NumPy String Functions NumPy Mathematical Functions o Trigonometric Functions o Rounding Functions NumPy Statistical functions o Finding the Min and Max Elements from the Array o Calculating Median, Mean, and Average of Array Items NumPy Sorting and Searching NumPy Copies and Views NumPy Matrix Library NumPy Linear Algebra NumPy Matrix Multiplication in Python Pandas Introduction & Pandas Environment Setup o Key Features of Pandas o Benefits of Pandas o Python Pandas Data Structure Pandas Series o Creating a Series Create an Empty Series Create a Series using Inputs o Accessing Data from Series with Position o Series Object Attributes o Retrieving Index Array and Data Array of a Series Object o Retrieving Types (dtype) and Size of Type (itemsize) o Retrieving Shape o Retrieving Dimension, Size and Number of Bytes o Checking Emptiness and Presence of NaNs o Series Functions Pandas DataFrame o Create a DataFrame Create an Empty DataFrame Create a DataFrame using Inputs Column Selection, Addition & Deletion Row Selection, Addition & Deletion DataFrame Functions Merging, Joining & Combining DataFrames Pandas Concatenation Pandas Time Series o Datetime o Time Offset o Time Periods o Convert String to Date Viewing/Inspecting Data (loc & iloc) Data Cleaning Filter, Sort, and Groupby Statistics on DataFrame Pandas Vs NumPy DataFrame Plotting o Line: Line Plot (Default) o Bar: Vertical Bar Plot o Barh: Horizontal Bar Plot o Hist: Histogram Plot o Box: Box Plot o Pie: Pie Chart o Scatter: Scatter Plot DBMS - Structured Query Language Introduction & Models of DBMS SQL & Sub Language of SQL Data Definition Language (DDL) Data Manipulation Language (DML) Data Query/Retrieval Language (DQL/DRL) Transaction Control Language (TCL) Data Control Language (DCL) Installation of MySQL & Database Normalization Sub Queries & Key Constraints Aggregative Functions, Clauses & Views Importing & Exporting Data Data Extraction from CSV (pd.read_csv) Data Extraction from TEXT File (pd.read_table) Data Extraction from CLIPBOARD (pd.read_clipboard) Data Extraction from EXCEL (pd.read_excel) Data Extraction from URL (pd.read_html) Writing into CSV (df.to_cs Writing into EXCEL (df.to_excel) Data Extraction from DATABASES o Python MySQL Database Connection Import mysql.connector Module Create the Connection Object Create the Cursor Object Execute the Query Data Visualization Introduction Tasks of Data Visualization Benefit of Data Visualization Plots for Data Visualization Matplotlib Architecture General Concept of Matplotlib MatPlotLib Environment Setup Verify the MatPlotLib Installation Working with PyPlot Formatting the Style of the Plot Plotting with Categorical Variables Multi-Plots with Subplot Function Line Graph Bar Graph Scatter Plot Pie Plot 3Dimensional – 3D Graph Plot Functions of MatPlotLib Contour Plot, Quiver Plot, Violin Plot 3D Contour Plot 3D Wireframe Plot 3D Surface Plot Box Plot o What is a Boxplot? o Mean, Median, Quartiles, Outliers o Inter Quartile Range (IQR), Whiskers o Data Distribution Analysis o Boxplot on a Normal Distribution o Probability Density Function o 68–95–99.7 Rule (Empirical rule) Bar Chart Box whisker plot Line plot Scatter Plot and Heat Maps Machine Learning What is Machine Learning Importance of Machine Learning Need for Machine Learning Statistics Vs Machine Learning Traditional Programming Vs Machine Learning How Machine Learning like Human Learning How does Machine Learning Work? Machine Learning Engineer Responsibilities Life Cycle of Machine Learning o Gathering Data o Data preparation o Data Wrangling o Analyze Data o Train the model o Test the model o Deployment Features of Machine Learning History of Machine Learning Applications of Machine Learning Types of Machine Learning o Supervised Machine Learning o Unsupervised Machine Learning o Reinforcement Learning Supervised Machine Learning How Supervised Learning Works? Steps Involved in Supervised Learning Types of supervised Machine Learning Algorithms o Classification o Regression Advantages of Supervised Learning Disadvantages of Supervised Learning Unsupervised Machine Learning How Unsupervised Learning Works? Why use Unsupervised Learning? Types of Unsupervised Learning Algorithm o Clustering o Association Advantages of Unsupervised Learning Disadvantages of Unsupervised Learning Supervised Vs Unsupervised Learning Reinforcement Machine Learning How to get Datasets for Machine Learning? o What is a Dataset? o Types of Data in Datasets o Popular Sources for Machine Learning Datasets Data Preprocessing in Machine Learning Why do we need Data Preprocessing? o Getting the Dataset o Importing Libraries o Importing Datasets o Finding Missing Data By Deleting the Particular Row By Calculating the Mean o Encoding Categorical Data o Splitting Dataset into Training and Test Set o Feature Scaling Classification Algorithms in Machine Learning What is the Classification Algorithm? Types of Classifications o Binary Classifier o Multi-class Classifier Learners in Classification Problems o Lazy Learners o Eager Learners Types of ML Classification Algorithms o Linear Models Logistic Regression Support Vector Machines o Non-linear Models K-Nearest Neighbors Naïve Bayes Decision Tree Classification Random Forest Classification Kernel SVM Evaluating a Classification Model o Confusion Matrix What is a Confusion Matrix? True Positive True Negative False Positive – Type 1 Error False Negative – Type 2 Error Why need a Confusion matrix? Precision vs Recall Confusion Matrix in Scikit-Learn Confusion Matrix for Multi-Class Classification o Log Loss or Cross-Entropy Loss o AUC-ROC curve Use cases of Classification Algorithms K-Nearest Neighbor(KNN) Algorithm in Machine Learning Why do we Need a K-NN Algorithm? How does K-NN work? o What is Euclidean Distance o How it Calculates the Distance How to Select the Value of K in the K-NN Algorithm? Advantages of KNN Algorithm Disadvantages of KNN Algorithm Python Implementation of the KNN Algorithm Analysis on Social Network Ads Dataset Steps to Implement the K-NN Algorithm o Data Pre-processing Step o Fitting the K-NN algorithm to the Training Set o Predicting the Test Result o Test Accuracy of the Result (Creation of Confusion Matrix) o Visualizing the Test Set Result. o Improve the Performance of the K-NN Model Why is it Called Naïve Bayes? o Naïve Means? o Bayes Means? Bayes’ Theorem o Posterior Probability o Likelihood Probability o Prior Probability o Marginal Probability Working of Naïve Bayes’ Classifier Advantages of Naïve Bayes Classifier Disadvantages of Naïve Bayes Classifier Applications of Naïve Bayes Classifier Types of Naïve Bayes Model o Gaussian Naïve Bayes Classifier o Multinomial Naïve Bayes Classifier o Bernoulli Naïve Bayes Classifier Python Implementation of the Naïve Bayes Algorithm Steps to Implement the Naïve Bayes Algorithm o Data Pre-processing Step o Fitting Naive Bayes to the Training set o Predicting the Test Result o Test Accuracy of the Result (Creation of Confusion matrix) o Visualizing the Test Set Result o Improve the Performance of the Naïve Bayes Model Decision Tree Classification Algorithm in Machine Learning Why use Decision Trees? Types of Decision Trees o Categorical Variable Decision Tree o Continuous Variable Decision Tree Decision Tree Terminologies How does the Decision Tree Algorithm Work? Attribute Selection Measures o Entropy o Information Gain o Gini index o Gain Ratio Algorithms used in Decision Trees o ID3 Algorithm → (Extension of D3) o C4.5 Algorithm→ (Successor of ID3) o CART Algorithm → (Classification & Regression Tree) How to Avoid/Counter Overfitting in Decision Trees? o Pruning Decision Trees o Random Forest Pruning: Getting an Optimal Decision tree Advantages of the Decision Tree Disadvantages of the Decision Tree Python Implementation of Decision Tree Steps to Implement the Decision Tree Algorithm o Data Pre-processing Step o Fitting a Decision-Tree Algorithm to the Training Set o Predicting the Test Result Test Accuracy of the Result (Creation of Confusion matrix) o Visualizing the Test Set Result o Improve the Performance of the Decision Tree Model Random Forest Classifier Algorithm in Machine Learning Working of the Random Forest Algorithm Assumptions for Random Forest Why use Random Forest? How does Random Forest Algorithm Work? o Ensemble Techniques o Bagging (Bootstrap Aggregation) Applications of Random Forest Disadvantages of Random Forest Python Implementation of Random Forest Algorithm Steps to Implement the Random Forest Algorithm: o Data Pre-processing Step o Fitting the Random Forest Algorithm to the Training Set o Predicting the Test Result o Test Accuracy of the Result (Creation of Confusion Matrix) o Visualizing the Test Set Result o Improving the Performance of the Random Forest Model Logistic Regression Algorithm in Machine Learning Logistic Function (Sigmoid Function) Assumptions for Logistic Regression Logistic Regression Equation Type of Logistic Regression o Binomial Logistic Regression o Multinomial Logistic Regression o Ordinal Logistic Regression Python Implementation of Logistic Regression (Binomial) Steps to Implement the Logistic Regression: o Data Pre-processing Step o Fitting Logistic Regression to the Training Set o Predicting the Test Result o Test Accuracy of the Result (Creation of Confusion Matrix) o Visualizing the Test Set Result o Improve the Performance of the Logistic Regression Model Support Vector Machine Algorithm Types of Support Vector Machines o Linear Support Vector Machine o Non-Linear Support Vector Machine Hyperplane in the SVM Algorithm Support Vectors in the SVM Algorithm How does SVM Works? o How does Linear SVM Works? o How does Non-Linear SVM Works? Python Implementation of Support Vector Machine Steps to Implement the Support Vector Machine: o Data Pre-processing Step o Fitting Support Vector Machine to the Training Set o Predicting the Test Result o Test Accuracy of the Result (Creation of Confusion Matrix) o Visualizing the Test Set Result o Improve the Performance of the Support Vector Machine Regression Algorithms in Machine Learning Terminologies Related to the Regression Analysis o Dependent Variable o Independent Variable o Outliers o Multi-collinearity o Under fitting and Overfitting Why do we use Regression Analysis? Types of Regression o Linear Regression o Logistic Regression o Polynomial Regression o Support Vector Regression o Decision Tree Regression o Random Forest Regression o Ridge Regression o Lasso Regression Linear Regression in Machine Learning Types of Linear Regression o Simple Linear Regression o Multiple Linear Regression Linear Regression Line o Positive Linear Relationship o Negative Linear Relationship Finding the Best Fit Line o Cost Function o Gradient Descent o Model Performance o R-Squared Method Assumptions of Linear Regression Simple Linear Regression in Machine Learning SLR Model Implementation of Simple Linear Regression Algorithm using o Data Pre-processing Step o Fitting Simple Linear Regression to the Training Set o Predicting the Test Result o Test Accuracy of the o Visualizing the Test Set Result. o Try to Improve the Performance of the Model Multiple Linear Regression in Machine Learning MLR Equation Assumptions for Multiple Linear Regression Implementation of Multiple Linear Regression model using Python o Data Pre-processing Step o Fitting Multiple Linear Regression to the Training Set o Predicting the Test Result o Test Accuracy of the o Visualizing the Test Set Result. o Try to Improve the Performance of the Model Backward Elimination What is Backward Elimination? Steps of Backward Elimination Need for Backward Elimination: An optimal Multiple Linear Regression model Implement the Steps for Backward Elimination method Polynomial Regression in Machine Learning Need for Polynomial Regression Equation of the Polynomial Regression Model Implementation of Polynomial Regression using Python Steps for Polynomial Regression: o Data Pre-processing o Build a Linear Regression Model o Build a Polynomial Regression Model o Visualize the Result for Linear Regression Model o Visualize the Result for Polynomial Regression Model o Predicting the Final Result with the Linear Regression Model o Predicting the Final Result with the Polynomial Regression Support Vector Regression (SVR) Decision Tree Regression Random Forest Regression Ridge Regression Lasso Regression Linear Regression Vs Logistic Regression Classification vs Regression Clustering Algorithms in Machine Learning Types of Clustering Methods o Partitioning Clustering o Density-Based Clustering o Distribution Model-Based Clustering o Hierarchical Clustering o Fuzzy Clustering Clustering Algorithms o K-Means Algorithm o Mean-shift Algorithm o DBSCAN Algorithm o Expectation-Maximization Clustering using GMM o Agglomerative Hierarchical Algorithm o Affinity Propagation Applications of Clustering Hierarchical Clustering Algorithm in Machine Learning Hierarchical Clustering Technique Approaches Why Hierarchical Clustering? Agglomerative Hierarchical Clustering How the Agglomerative Hierarchical Clustering Work? Measure for the Distance between two Clusters o Single Linkage o Complete Linkage o Average Linkage o Centroid Linkage Working of Dendrogram in Hierarchical Clustering Hierarchical Clustering Example with Scratch Data Python Implementation of Agglomerative Hierarchical Clustering Steps for Implementation of Agglomerative Hierarchical Clustering using Python o Data Pre-processing o Finding the Optimal Number of Clusters using the o Training the Hierarchical Clustering Model o Visualizing the Clusters K-Means Clustering Algorithm in Machine Learning What is K-Means Algorithm? How does the K-Means Algorithm Work? How to Choose the Value of “K Number of Clusters” in K-Means o Elbow Method o Within Cluster Sum of Squares (WCSS) K-Means Clustering Example with Scratch Data Python Implementation of K-means Clustering Algorithm Steps to Implement of K-means Clustering Algorithm o Data Pre-processing o Finding the Optimal Number of Clusters using the Elbow o Training the K-means Algorithm on the Training Dataset o Visualizing the Cluster Association Rules in Machine Learning Association Rules Pattern Detection Market Basket Analysis Support, Confidence, Expected Confidence, Lift Finding Item Sets with High Support Finding Item Rules with High Confidence or Lift Apriori Algorithm in Machine Learning Apriori Algorithm How does Apriori Algorithm Works? Apriori Algorithm Example Implementation of Apriori Algorithm using Python Limitations of Apriori Algorithm Dimensionality Reduction & Model Selection Boosting Dimensionality Reduction o Principal Component Analysis (PCA) o Linear Discriminant Analysis (LDA) o Kernel PCA Model Selection Boosting o Model Selection Grid Search K-Fold Cross Validation o XGBoost Mean, Median and Mode Data Variability, Range, Quartiles IQR, Calculating Percentiles Variance, Standard Deviation, Statistical Summaries Types of Distributions – Normal, Binomial, Poisson Probability Distributions & Skewness Data Distribution, 68–95–99.7 rule (Empirical rule) Descriptive Statistics and Inferential Statistics Statistics Terms and Definitions, Types of Data Data Measurement Scales, Normalization, Standardization Measure of Distance, Euclidean Distance Probability Calculation – Independent & Dependent Entropy, Information Gain Natural Language Processing Introduction o What is NLP? o History of NLP o Advantages of NLP o Disadvantages of NLP Components of NLP o Natural Language Understanding (NLU) o Natural Language Generation (NLG) o Difference between NLU and NLG Applications of NLP How to build an NLP Pipeline? Phases of NLP o Lexical Analysis and Morphological o Syntactic Analysis (Parsing) o Semantic Analysis o Discourse Integration o Pragmatic Analysis Why NLP is Difficult? NLP APIs NLP Libraries Natural Language Vs Computer Language Exploring Features of NLTK o Open the Text File for Processing o Import Required Libraries o Sentence Tokenizing o Word Tokenizing o Find the Frequency Distribution o Plot the Frequency Graph o Remove Punctuation Marks o Plotting Graph without Punctuation Marks o List of Stopwords o Removing Stopwords o Final Frequency Distribution Word Cloud o Word Cloud Properties o Python Code Implementation of the Word Cloud o Word Cloud with the Circle Shape o Word Cloud Advantages o Word Cloud Disadvantages o Stemmer Examples o Stemming Algorithms Porter’s Stemmer Lovin’s Stemmer Dawson’s Stemmer Krovetz Stemmer Xerox Stemmer Snowball Stemmer o Difference between Stemmer and Lemmatizer o Demonstrating how a lemmatizer works o Lemmatizer with default PoS value o Demonstrating the power of lemmatizer o Lemmatizer with different POS values Part-of-Speech (PoS) Tagging o Why do we need Part of Speech (POS)? o Part of Speech (PoS) Tags o Categories of Phrases o Phrase Structure Rules Named Entity Recognition (NER) o Use-Cases o Commonly used Types of Named Entity WordNet Bag of Words o What is the Bag-of-Words method? o Creating a basic Structure on Sentences o Words with Frequencies o Combining all the Words o Final Model of our Bag of Words o Applications & Limitations o Term Frequency o Inverse Document Frequency o Term Frequency – Inverse Document Frequency Deploying a Machine Learning Model on a Web using Flask What is Model Deployment? What is Flask? Installing Flask on your Machine Understanding the Problem Statement Build our Machine Learning Model Create the Webpage Connect the Webpage with the Model Working of the Deployed Mode What is Deep Learning? Deep learning Process Types of Deep Learning Networks o Deep Neural Networks o Artificial Neural Networks o Convolutional Neural Networks o Recurrent Neural Networks o History of TensorFlow o Components of TensorFlow o Use Cases/Applications of TensorFlow o Features of TensorFlow Installation of TensorFlow through pip & conda Advantage and Disadvantage of TensorFlow TensorFlow Playground Introduction to Keras, OpenCV & Theano Implementation of Deep Learning What is Artificial Intelligence? o Why Artificial Intelligence? o Goals of Artificial Intelligence o What Comprises to Artificial Intelligence? o Advantages of Artificial Intelligence o Disadvantages of Artificial Intelligence Applications of Artificial Intelligence History of Artificial Intelligence Types of Artificial Intelligence Types of AI Agents o Simple Reflex Agent o Model-Based Reflex Agent o Goal-Based Agents o Utility-Based Agent o Learning Agent Search Algorithms in Artificial Intelligence o Search Algorithm Terminologies o Properties of Search Algorithms o Types of Search Algorithms Subsets of Artificial Intelligence Implementation of Artificial Intelligence Why R Programming is Important? Why Learn R? History of Python Features of R Applications of R Comparison between R and Python Which is Better to Choose Pros and Cons of R Companies using R R Packages Downloading and Installing R What is CRAN? Setting R Environment: o Search Packages in R Environment o Search Packages in Machine with inbuilt function and manual searching o Attach Packages to R Environment o Install Add-on Packages from CRAN o Detach Packages from R Environment o Functions and Packages Help R Programming IDE o RStudio o Downloading and Installing RStudio Variable Assignment o Displaying Variables o Deleting Variables o Single Line o Multi Line Comments Data Types o Logical o Integer o Double o Complex o Character Operators Naming List Elements o Accessing List Elements o Manipulating List Elements o Merging Lists o Converting List to Vector o Creating a Matrix o Accessing Elements of a Matrix o Matrix Manipulations o Dimensions of Matrix o Transpose of Matrix Data Frames o Create Data Frame o Vector to Data Frame o Character Data Converting into Factors: StringsAsFactors o Convert the columns of a data frame to characters o Extract Data from Data Frame o Expand Data Frame, Column Bind and Row Bind Merging / Joining Data Frames o Inner Join o Outer Join o Cross Join o Create Array with Multiple Dimensions o Naming Columns and Rows o Accessing Array Elements o Manipulating Array Elements o Calculations across Array Elements o Factors in Data Frame o Changing the Order of Levels o Generating Factor Levels o Deleting Factor Levels o Arithmetic Operators o Relational Operators o Logical Operators o Assignment Operators o R as Calculator o Performing different Calculations o Inbuilt Functions o User Defined Functions o Vector o List o Matrix o Data frame o Array o Factors Inbuilt Constants & Functions o Vector Creation o Single Element Vector o Multiple Element Vector o Vector Manipulation o Sub setting & Accessing the Data in Vector o Creating a List Loading and Reading Data in R Data Extraction from CSV o Getting and Setting the Working Directory o Input as CSV File, Reading a CSV File o Analyzing the CSV File, Writing into a CSV File Data Extraction from URL Data Extraction from CLIPBOARD Data Extraction from EXCEL Data Extraction from CSV o Getting and Setting the Working Directory o Input as CSV File, Reading a CSV File o Analyzing the CSV File, Writing into a CSV File Data Extraction from URL Data Extraction from CLIPBOARD Data Extraction from EXCEL o Install “xlsx” Package o Verify and Load the “xlsx” Package, Input as “xlsx” File o Reading the Excel File, Writing the Excel File Data Extraction from DATABASES o RMySQL Package, Connecting to MySql o Querying the Tables, Query with Filter Clause o Updating Rows in the Tables, Inserting Data into the Tables o Creating Tables in MySql, Dropping Tables in MySql o Using dplyr and tidyr package Machine Learning using R Data Pre-processing Classification Algorithms o K Nearest Neighbors Classification o Naive Bayes Classification o Decision Tree Classification o Random Forest Classification o Support Vector Machine Classification o Logistic Regression o Kernel SVM Regression Algorithms o Simple Linear Regression o Multiple Linear Regression o Polynomial Regression o Support Vector Regression o Decision Tree Regression o Random Forest Regression Clustering Algorithms o K-Means Clustering o Hierarchical Clustering Association Rule Algorithms o Apriori o Eclat o Principal Component Analysis o Linear Discriminant Analysis o Kernal PCA Model Selection & Boosting o Grid Search o K Fold Cross Validation o XGBoost Natural Language Processing Deep Learning – Artificial Neural Networks Explore Weka Machine Learning Toolkit o Installation of WEKA o Features of WEKA Toolkit o Explore & Load data sets in Weka Perform Data Preprocessing Tasks o Apply Filters on Data Sets Performing Classification on Data Sets o J48 Classification Algorithm o Decision Trees Algorithm o K-NN Classification Algorithm o Naive-Bayes Classification Algorithm o Comparing Classification Results Performing Regression on Data Sets o Simple Linear Regression Model o Multi Linear Regression Model o Logistic Regression Model o Cross-Validation and Percentage Split Performing Clustering on Data Sets o Clustering Techniques in Weka o Simple K-means Clustering Algorithm o Association Rule Mining on Data Sets o Apriori Association Rule Algorithm o Discretization in the Rule Generation Process Graphical Visualization in Weka o Visualization Features in Weka o Visualize the data in various dimensions o Plot Histogram, Derive Interesting Insights Upskill & Reskill For Your Future With Our Software Courses Best Data Science Institute in Hyderabad
{"url":"https://www.vcubesoftsolutions.com/data-science-training-in-hyderabad/","timestamp":"2024-11-07T16:30:52Z","content_type":"text/html","content_length":"251955","record_id":"<urn:uuid:3db03e4f-9b2d-4205-9a05-8d7315ef990a>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00188.warc.gz"}
Factors in R Tutorial Learn about the factor function in R, along with an example, and it's structure, order levels, renaming of the levels, and finally, with the ordering of categorical values. This tutorial takes course material from DataCamp's free Intro to R course and allows you to practice Factors. Discover what R functions are, the different type of functions in R, and how to create your own functions in R. Learn all about R's matrix, naming rows and columns, accessing elements also with computation like addition, subtraction, multiplication, and division. Discover the R formula and how you can use it in modeling- and graphical functions of well-known packages such as stats, and ggplot2. In this tutorial, you'll learn all about R variables including how to define variables, remove variables, and much more. Learn about several useful functions for data structure manipulation, nested-lists, regular expressions, and working with times and dates in the R programming language.
{"url":"https://www.datacamp.com/tutorial/factors-in-r","timestamp":"2024-11-09T17:09:04Z","content_type":"text/html","content_length":"332561","record_id":"<urn:uuid:3bce22cf-1862-4836-9e89-bda5603a667d>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00018.warc.gz"}
Spectral analysis and pseudodifferential operators Viorel Iftime, Gheorghe Nenciu. The seminar has been initiated in 1985 at the Faculty of Mathematics and Computer Science of the University of Bucharest, by a group of researchers and PhD students coordinated by Vladimir Georgescu (at the time senior researcher at the Department of Theoretical Physics of the National Institute of Physics and Nuclear Engineering in Bucharest - Magurele, now Directeur de Recherches CNRS at the Cergy-Pontoise University) and by Viorel Iftimie (professor at the Faculty of Mathematics and Computer Science of the University of Bucharest). The aim of the seminar was the study of quantum Hamiltonians and their scattering theory. After May 1990, the seminar has been organized constantly by the Institute of Mathematics of the Romanian Among the people active in this seminar during its existence let us cite: Gruia Arsu, Ingrid Beltita, Horia Cornean, Sylvain Golenia, Andrei Iftimovici, David Krejcirik, Horia Manda, Nicolae Mandache, Marius Mantoiu, Petre Mironescu, Mihai Pascu, Teodor Octavian Popp, Radu Purice, Radu Teodor. Among the guests that gave several talks in the seminar let us cite: Yves Dermenjian, Pierre Duclos, Pavel Exner, Bernard Helffer, Arne Jensen. • A series of lectures on The Melin calculus on homogeneous Lie groups, following two papers by P. Glowacki has been given by Mihai Pascu. (Seminar notes) • Development of the magnetic pseudodifferential calculus and its applications for the Foldy-Wouthuysen transformation and the Peierls-Onsager effective Hamiltonian. --- On May 10, 2017, Ademir Feranado Pazoto from Universidade Federal de Rio de Janeiro gave the talk: Stabilization of a Boussinesq system of Benjamin-Bona-Mahony type . --- On September 30, 2013, Arne Jensen from Aalborg University gave the talk: Resolvent expansion for the discrete one dimensional Schrödinger operator . --- On August 7, 2013, Ashkan Nikeghbali from University of Zürich gave the talk: Random permutations and prime numbers . --- On March 27, 2013, M.W. Wong from York University, Toronto gave the talk: Hilbert-Schmidt and Trace Class Pseudo-Differential Operators on the Heisenberg Group. --- On January 1, 2013, Christof Sparber from University of Illinois at Chicago gave the talk: On Wigner and Bohmian measures in semi-classical quantum dynamics. The presentation can be found here. --- On May 30, 2012, 2012, Jose E. Gale from Universidad de Zaragoza gave the talk: A Poisson-to-Neumann characterization of fractional powers of operators. --- On October 24 - 27, 2011, Atle Hahn from Lisbon University gave the Lecture Series on Mathematical Physics. --- On May 11, 2011, Francesco Fidaleo from Universita di Roma Tor Vergata gave the talk: Harmonic analysis on perturbed Cayley trees and the Bose Einstein condensation.
{"url":"http://www.imar.ro/organization/activities/seminars/pags/133/ap.php","timestamp":"2024-11-08T11:33:05Z","content_type":"text/html","content_length":"17229","record_id":"<urn:uuid:00800f8c-b559-461a-a098-32170878d8ba>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00335.warc.gz"}
Leonardo Petrini Dec 30, 2020 Abstract:Deep learning algorithms are responsible for a technological revolution in a variety of tasks including image recognition or Go playing. Yet, why they work is not understood. Ultimately, they manage to classify data lying in high dimension -- a feat generically impossible due to the geometry of high dimensional space and the associated curse of dimensionality. Understanding what kind of structure, symmetry or invariance makes data such as images learnable is a fundamental challenge. Other puzzles include that (i) learning corresponds to minimizing a loss in high dimension, which is in general not convex and could well get stuck bad minima. (ii) Deep learning predicting power increases with the number of fitting parameters, even in a regime where data are perfectly fitted. In this manuscript, we review recent results elucidating (i,ii) and the perspective they offer on the (still unexplained) curse of dimensionality paradox. We base our theoretical discussion on the $(h,\ alpha)$ plane where $h$ is the network width and $\alpha$ the scale of the output of the network at initialization, and provide new systematic measures of performance in that plane for MNIST and CIFAR 10. We argue that different learning regimes can be organized into a phase diagram. A line of critical points sharply delimits an under-parametrised phase from an over-parametrized one. In over-parametrized nets, learning can operate in two regimes separated by a smooth cross-over. At large initialization, it corresponds to a kernel method, whereas for small initializations features can be learnt, together with invariants in the data. We review the properties of these different phases, of the transition separating them and some open questions. Our treatment emphasizes analogies with physical systems, scaling arguments and the development of numerical observables to quantitatively test these results empirically.
{"url":"https://www.catalyzex.com/author/Leonardo%20Petrini","timestamp":"2024-11-05T07:32:01Z","content_type":"text/html","content_length":"176549","record_id":"<urn:uuid:f3b6d16e-d02c-4577-875e-966f10e7a913>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00320.warc.gz"}
Srinivasan Raghuraman Srinivasan Raghuraman Oblivious Accumulators Abstract A cryptographic accumulator is a succinct set commitment scheme with efficient (non-)membership proofs that typically supports updates (additions and deletions) on the accumulated set. When elements are added to or deleted from the set, an update message is issued. The collection of all the update messages essentially leaks the underlying accumulated set which in certain applications is not desirable. In this work, we define oblivious accumulators, a set commitment with concise membership proofs that hides the elements and the set size from every entity: an outsider, a verifier or other element holders. We formalize this notion of privacy via two properties: element hiding and add-delete indistinguishability. We also define almost-oblivious accumulators, that only achieve a weaker notion of privacy called add-delete unlinkability. Such accumulators hide the elements but not the set size. We consider the trapdoorless, decentralized setting where different users can add and delete elements from the accumulator and compute membership proofs. We then give a generic construction of an oblivious accumulator based on key-value commitments (KVC). We also show a generic way to construct KVCs from an accumulator and a vector commitment scheme. Finally, we give lower bounds on the communication (size of update messages) required for oblivious accumulators and almost-oblivious accumulators. Improved Alternating-Moduli PRFs and Post-Quantum Signatures Abstract We revisit the alternating moduli paradigm for constructing symmetric key primitives with a focus on constructing highly efficient protocols to evaluate them using secure multi-party computation (MPC). The alternating moduli paradigm of Boneh et al. (TCC 2018) enables the construction of various symmetric key primitives with the common characteristic that the inputs are multiplied by two linear maps over different moduli, first over F_2 and then over F_3. The first contribution focuses on efficient two-party evaluation of alternating moduli PRFs, effectively building an oblivious pseudorandom function. We present a generalization of the PRF proposed by Boneh et al. (TCC 18) along with methods to lower the communication and computation. We then provide several variants of our protocols, with different computation and communication tradeoffs, for evaluating the PRF. Most are in the OT/VOLE hybrid model while one is based on specialized garbling. Our most efficient protocol effectively is about 3x faster and requires 1.3x lesser communication. Our next contribution is the efficient evaluation of the OWF f(x) = B *_3 (A *_2 x) proposed by Dinur et al. (CRYPTO 21) where A \in F^{m x n}_2, B \in F^{t x m}_3 and *_p is multiplication mod p. This surprisingly simple OWF can be evaluated within MPC by secret sharing [x] over F_2, locally computing [v] = A *_2 [x], performing a modulus switching protocol to F_3 shares, followed by locally computing the output shares [y] = B *_3 [v]. We design a bespoke MPC-in-the-Head (MPCitH) signature scheme that evaluates the OWF, achieving state of art performance. The resulting signature has a size ranging from 4.0-5.5 KB, achieving between 2-3x reduction compared to Dinur et al. To the best of our knowledge, this is only 5% larger than the smallest signature based on symmetric key primitives, including the latest NIST PQC competition submissions. We additionally show that our core techniques can be extended to build very small post-quantum ring signatures for small-medium sized rings that are competitive with state-of-the-art lattice based schemes. Our techniques are in fact more generally applicable to set membership in MPCitH. Round-Optimal Oblivious Transfer and MPC from Computational CSIDH Abstract We present the first round-optimal and plausibly quantum-safe oblivious transfer (OT) and multi-party computation (MPC) protocols from the computational CSIDH assumption - the weakest and most widely studied assumption in the CSIDH family of isogeny-based assumptions. We obtain the following results: - The first round-optimal maliciously secure OT and MPC protocols in the plain model that achieve (black-box) simulation-based security while relying on the computational CSIDH assumption. - The first round-optimal maliciously secure OT and MPC protocols that achieves Universal Composability (UC) security in the presence of a trusted setup (common reference string plus random oracle) while relying on the computational CSIDH assumption. Prior plausibly quantum-safe isogeny-based OT protocols (with/without setup assumptions) are either not round-optimal, or rely on potentially stronger assumptions. We also build a 3-round maliciously-secure OT extension protocol where each base OT protocol requires only 4 isogeny computations. In comparison, the most efficient isogeny-based OT extension protocol till date due to Lai et al. [Eurocrypt 2021] requires 12 isogeny computations and 4 rounds of communication, while relying on the same assumption as our construction, namely the reciprocal CSIDH assumption. Expand-Convolute Codes for Pseudorandom Correlation Generators from LPN Abstract The recent development of pseudorandom correlation generators (PCG) holds tremendous promise for highly efficient MPC protocols. Among other correlations, PCGs allow for the efficient generation of oblivious transfer (OT) and vector oblivious linear evaluations (VOLE) with sublinear communication and concretely good computational overhead. This type of PCG makes use of a so-called LPN-friendly error-correcting code. That is, for large dimensions the code should have very efficient encoding and have high minimum distance. We investigate existing LPN-friendly codes and find that several candidates are less secure than was believed. Beginning with the recent expand-accumulate codes, we find that for their aggressive parameters, aimed at good concrete efficiency, they achieve a smaller minimum distance than conjectured. This decreases the resulting security parameter of the PCG but it remains unclear by how much. We additionally show that the recently proposed and extremely efficient silver codes achieve only very small minimum distance and result in concretely efficient attacks on the resulting PCG protocol. As such, silver codes should not be used. We introduce a new LPN-friendly code which we call expand-convolute. These codes have provably high minimum distance and faster encoding time than suitable alternatives, e.g. expand-accumulate. The main contribution of these codes is the introduction of a convolution step that dramatically increases the minimum distance. This in turn allows for a more efficient parameter selection which results in improved concrete performance. In particular, we observe a 2 times improvement in running time. Synchronizable Fair Exchange Abstract Fitzi, Garay, Maurer, and Ostrovsky (J.\ Cryptology 2005) showed that in the presence of a dishonest majority, no primitive of cardinality $n - 1$ is complete for realizing an arbitrary $n$-party functionality with {\em guaranteed output delivery}. In this work, we introduce a new $2$-party primitive $\mathcal{F}_{\mathsf{SyX}}$ (``synchronizable fair exchange'') and show that it is complete for realizing any $n$-party functionality with {\em fairness} in a setting where all parties are pairwise connected by instances of $\mathcal{F}_{\mathsf{SyX}}$. In the $\mathcal{F}_{\mathsf{SyX}} $-hybrid model, the two parties {\em load} $\mathcal{F}_{\mathsf{SyX}}$ with some input, and following this, either party can {\em trigger} $\mathcal{F}_{\mathsf{SyX}}$ with a ``witness'' at a later time to receive the output from $\mathcal{F}_{\mathsf{SyX}}$. Crucially the other party also receives output from $\mathcal{F}_{\mathsf{SyX}}$ when $\mathcal{F}_{\mathsf{SyX}}$ is triggered. The trigger witnesses allow us to {\em synchronize} the trigger phases of multiple instances of $\mathcal{F}_{\mathsf{SyX}}$, thereby aiding in the design of fair multiparty protocols. Additionally, a pair of parties may {\em reuse} a single {\em a priori} loaded instance of $\mathcal{F}_{\mathsf{SyX}}$ in any number of multiparty protocols (involving different sets of parties). Just How Fair is an Unreactive World? Abstract Fitzi, Garay, Maurer, and Ostrovsky (J. Cryptology 2005) showed that in the presence of a dishonest majority, no primitive of cardinality n − 1 is complete for realizing an arbitrary n-party functionality with guaranteed output delivery. In this work, we show that in the presence of n − 1 corrupt parties, no unreactive primitive of cardinality n − 1 is complete for realizing an arbitrary n-party functionality with fairness. We show more generally that for t > n/2, in the presence of t malicious parties, no unreactive primitive of cardinality t is complete for realizing an arbitrary n-party functionality with fairness. We complement this result by noting that (t + 1)-wise fair exchange is complete for realizing an arbitrary n-party functionality with fairness. In order to prove our results, we utilize the primitive of fair coin tossing and the notion of predictability. While this notion has been considered in some form in past works, we come up with a novel and non-trivial framework to employ it, one that readily generalizes from the setting of two parties to multiple parties, and also to the setting of unreactive functionalities. On Black-Box Verifiable Outsourcing Abstract We study the problem of verifiably outsourcing computation in a model where the verifier has black-box access to the function being computed. We introduce the problem of oracle-aided batch verification of computation (OBVC) for a function class F. This allows a verifier to efficiently verify the correctness of any f \in F evaluated on a batch of n instances x_1, ...., x_n, while only making \lambda calls to an oracle for f (along with O(n \lambda) calls to low-complexity helper oracles), where \lambda denotes a security parameter. We obtain the following positive and negative results: 1. We build OBVC protocols for the class F of all functions that admit random-self-reductions. Some of our protocols rely on homomorphic encryption schemes. 2. We show that there cannot exist OBVC schemes for the class F of all functions mapping \lambda-bit inputs to \lambda-bit outputs, for any n = \poly(\lambda). On the Quantum Security of OCB Abstract The OCB mode of operation for block ciphers has three variants, OCB1, OCB2 and OCB3. OCB1 and OCB3 can be used as secure authenticated encryption schemes whereas OCB2 has been shown to be classically insecure (Inoue et al., Crypto 2019). Even further, in the presence of quantum queries to the encryption functionality, a series of works by Kaplan et al. (Crypto 2016), Bhaumik et al. (Asiacrypt 2021) and Bonnetain et al. (Asiacrypt 2021) have shown how to break the unforgeability of the OCB modes. However, these works did not consider the confidentiality of OCB in the presence of quantum queries.We fill this gap by presenting the first formal analysis of the IND-qCPA security of OCB. In particular, we show the first attacks breaking the IND-qCPA security of the OCB modes. Surprisingly, we are able to prove that OCB2 is IND-qCPA secure when used without associated data, while relying on the assumption that the underlying block cipher is a quantum-secure pseudorandom permutation. Additionally, we present new quantum attacks breaking the universal unforgeability of OCB. Our analysis of OCB has implications for the post-quantum security of XTS, a well-known disk encryption standard, that was considered but mostly left open by Anand et al. (PQCrypto 2016). A More Complete Analysis of the Signal Double Ratchet Algorithm 📺 Abstract Seminal works by Cohn-Gordon, Cremers, Dowling, Garratt, and Stebila [Journal of Cryptology 2020] and Alwen, Coretti, and Dodis [EUROCRYPT 2019] provided the first formal frameworks for studying the widely-used Signal Double Ratchet (DR for short) algorithm. In this work, we develop a new Universally Composable (UC) definition F_DR that we show is provably achieved by the DR protocol. Our definition captures not only the security and correctness guarantees of the DR already identified in the prior state-of-the-art analyses of Cohn-Gordon et al. and Alwen et al., but also more guarantees that are absent from one or both of these works. In particular, we construct six different modified versions of the DR protocol, all of which are insecure according to our definition F_DR, but remain secure according to one (or both) of their definitions. For example, our definition is the first to capture CCA-style attacks possible immediately after a compromise — attacks that, as we show, the DR protocol provably resists, but were not captured by prior definitions. We additionally show that multiple compromises of a party in a short time interval, which the DR should be able to withstand, as we understand from its whitepaper, nonetheless introduce a new non-trivial (albeit minor) weakness of the DR. Since the definitions in the literature (including our F_DR above) do not capture security against this more nuanced scenario, we define a new stronger definition F_TR that does. Finally, we provide a minimalistic modification to the DR (that we call the Triple Ratchet, or TR for short) and show that the resulting protocol securely realizes the stronger functionality F_TR. Remarkably, the modification incurs no additional communication cost and virtually no additional computational cost. We also show that these techniques can be used to improve communication costs in other scenarios, e.g. practical Updatable Public Key Encryption schemes and the re-randomized TreeKEM protocol of Alwen et al. [CRYPTO 2020] for Secure Group Messaging. Multi-Party Threshold Private Set Intersection with Sublinear Communication 📺 Abstract In multi-party threshold private set intersection (PSI), $n$ parties each with a private set wish to compute the intersection of their sets if the intersection is sufficiently large. Previously, Ghosh and Simkin (CRYPTO 2019) studied this problem for the two-party case and demonstrated interesting lower and upper bounds on the communication complexity. In this work, we investigate the communication complexity of the multi-party setting $(n\geq 2)$. We consider two functionalities for multi-party threshold PSI. In the first, parties learn the intersection if each of their sets and the intersection differ by at most $T$. In the second functionality, parties learn the intersection if the union of all their sets and the intersection differ by at most $T$. For both functionalities, we show that any protocol must have communication complexity $\Omega(nT)$. We build protocols with a matching upper bound of $O(nT)$ communication complexity for both functionalities assuming threshold FHE. We also construct a computationally more efficient protocol for the second functionality with communication complexity $\widetilde{O}(nT)$ under a weaker assumption of threshold additive homomorphic encryption. As a direct implication, we solve one of the open problems in the work of Ghosh and Simkin (CRYPTO 2019) by designing a two-party protocol with communication cost $\widetilde{O}(T)$ from assumptions weaker than FHE. As a consequence of our results, we achieve the first "regular" multi-party PSI protocol where the communication complexity only grows with the size of the set difference and does not depend on the size of the input sets. Silver: Silent VOLE and Oblivious Transfer from Hardness of Decoding Structured LDPC Codes 📺 Abstract We put forth new protocols for oblivious transfer extension and vector OLE, called \emph{Silver}, for SILent Vole and oblivious transfER. Silver offers extremely high performances: generating 10 million random OTs on one core of a standard laptop requires only 300ms of computation and 122KB of communication. This represents 37% less computation and ~1300x less communication than the standard IKNP protocol, as well as ~4x less computation and ~4x less communication than the recent protocol of Yang et al. (CCS 2020). Silver is \emph{silent}: after a one-time cheap interaction, two parties can store small seeds, from which they can later \emph{locally} generate a large number of OTs \emph{while remaining offline}. Neither IKNP nor Yang et al. enjoys this feature; compared to the best known silent OT extension protocol of Boyle et al. (CCS 2019), upon which we build up, Silver has 19x less computation, and the same communication. Due to its attractive efficiency features, Silver yields major efficiency improvements in numerous MPC protocols. Our approach is a radical departure from the standard paradigm for building MPC protocols, in that we do \emph{not} attempt to base our constructions on a well-studied assumption. Rather, we follow an approach closer in spirit to the standard paradigm in the design of symmetric primitives: we identify a set of fundamental structural properties that allow us to withstand all known attacks, and put forth a candidate design, guided by our analysis. We also rely on extensive experimentations to analyze our candidate and experimentally validate their properties. In essence, our approach boils down to constructing new families of linear codes with (plausibly) high minimum distance and extremely low encoding time. While further analysis is of course warranted to confidently assess the security of Silver, we hope and believe that initiating this approach to the design of MPC primitives will pave the way to new secure primitives with extremely attractive efficiency features. Efficient Constructions for Almost-everywhere Secure Computation 📺 Abstract We study the problem of {\em almost-everywhere reliable message transmission}; a key component in designing efficient and secure MPC protocols for sparsely connected networks. The goal is to design low-degree networks which allow a large fraction of honest nodes to communicate reliably even while linearly many nodes can experience byzantine corruption and deviate arbitrarily from the assigned protocol.\\ \noindent In this paper, we achieve a $\log$-degree network with a polylogarithmic work complexity protocol, thereby improving over the state-of-the-art result of Chandran {\em et al.} (ICALP 2010) who required a polylogarithmic-degree network and had a linear work complexity. In addition, we also achieve: \begin{itemize} \item A work efficient version of Dwork et. al.'s (STOC 1986) butterfly network. \item An improvement upon the state of the art protocol of Ben-or and Ron (Information Processing Letters 1996) in the randomized corruption model---both in work-efficiency and in resilience. KVaC: Key-Value Commitments for Blockchains and Beyond 📺 Abstract As blockchains grow in size, validating new transactions becomes more and more resource intensive. To deal with this, there is a need to discover compact encodings of the (effective) state of a blockchain --- an encoding that allows for efficient proofs of membership and updates. In the case of account-based cryptocurrencies, the state can be represented by a key-value map, where keys are the account addresses and values consist of account balance, nonce, etc. We propose a new commitment scheme for key-value maps whose size does not grow with the number of keys, yet proofs of membership are of constant-size. In fact, both the encoding and the proofs consist of just two and three group elements respectively (in groups of unknown order like class groups). Verifying and updating proofs involves just a few group exponentiations. Additive updates to key values enjoy the same level of efficiency too. Key-value commitments can be used to build dynamic accumulators and vector commitments, which find applications in group signatures, anonymous credentials, verifiable databases, interactive oracle proofs, etc. Using our new key-value commitment, we provide the most efficient constructions of (sub)vector commitments to date.
{"url":"https://iacr.org/cryptodb/data/author.php?authorkey=8830","timestamp":"2024-11-07T12:22:17Z","content_type":"text/html","content_length":"57730","record_id":"<urn:uuid:32419651-0744-40cb-a85a-c383e9909b27>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00301.warc.gz"}
On the poor performance of classifiers in insurance models | R-bloggersOn the poor performance of classifiers in insurance models On the poor performance of classifiers in insurance models [This article was first published on R-english – Freakonometrics , and kindly contributed to ]. (You can report issue about the content on this page ) Want to share your content on R-bloggers? if you have a blog, or if you don't. Each time we have a case study in my actuarial courses (with real data), students are surprised to have hard time getting a “good” model, and they are always surprised to have a low AUC, when trying to model the probability to claim a loss, to die, to fraud, etc. And each time, I keep saying, “yes, I know, and that’s what we expect because there a lot of ‘randomness’ in insurance”. To be more specific, I decided to run some simulations, and to compute AUCs to see what’s going on. And because I don’t want to waste time fitting models, we will assume that we have each time a perfect model. So I want to show that the upper bound of the AUC is actually quite low ! So it’s not a modeling issue, it is a fondamental issue in insurance ! By ‘perfect model’ I mean the following : \(\Omega\) denotes the heterogeneity factor, because people are different. We would love to get \(\mathbb{P}[Y=1|\Omega]\). Unfortunately, \(\Omega\) is unobservable ! So we use covariates (like the age of the driver of the car in motor insurance, or of the policyholder in life insurance, etc). Thus, we have data \((y_i,\boldsymbol{x}_i)\)‘s and we use them to train a model, in order to approximate \(\mathbb{P}[Y=1|\boldsymbol{X}]\). And then, we check if our model is good (or not) using the ROC curve, obtained from confusion matrices, comparing \(y_i\)‘s and \(\widehat{y}_i\)‘s where \(\widehat{y}_i=1\) when \(\mathbb{P}[Y_i=1|\boldsymbol{x}_i]\) exceeds a given threshold. Here, I will not try to construct models. I will predict \ (\widehat{y}_i=1\) each time the true underlying probability \(\mathbb{P}[Y_i=1|\omega_i]\) exceeds a threshold ! The point is that it’s possible to claim a loss (\(y=1\)) even if the probability is 3% (and most of the time \(\widehat{y}=0\)), and to not claim one (\(y=0\)) even if the probability is 97% (and most of the time \(\widehat{y}=1\)). That’s the idea with randomness, right ? So, here \(p(\omega_1),\cdots,p(\omega_n)\) denote the probabilities to claim a loss, to die, to fraud, etc. There is heterogeneity here, and this heterogenity can be small, or large. Consider the graph below, to illustrate, In both cases, there is, on average, 25% chance to claim a loss. But on the left, there is more heterogeneity, more dispersion. To illustrate, I used the arrow, which is a classical 90% interval : 90% of the individuals have a probability to claim a loss in that interval. (here 10%-40%), 5% are below 10% (low risk), and 5% are above 40% (high risk). Later on, we will say that we have 25% on average, with a dispersion of 30% (40% minus 10%). On the right, it’s more 25% on average, with a dispersion of of 15%. What I call dispersion is the difference between the 95% and the 5% quantiles. Consider now some dataset, with Bernoulli variables \(y\), drawn with those probabilities \(p(\omega)\). Then, let us assume that we are able to get a perfect model : I do not estimate a model based on some covariates, here, I assume that I know perfectly the probability (which is true, because I did generate those data). More specifically, to generate a vector of probabilities, here I use a Beta distribution with a given mean, and a given variance (to capture the heterogeneity I mentioned above) from those probabilities, I generate occurences of claims, or deaths, Y=rbinom(n,size = 1,prob = p) Then, I compute the AUC of my “perfect” model, And then, I will generate many samples, to compute the average value of the AUC. And actually, we can do that for many values of the mean and the variance of the Beta distribution. Here is the code ab_beta = function(m,inter){ a=uniroot(function(a) qbeta(.95,a,a/m-a)-qbeta(.05,a,a/m-a)-inter, essai = try(ab<-ab_beta(m,i),TRUE) if(inherits(essai,what="try-error")) a=-1 if(!inherits(essai,what="try-error")){ a=ab[1] b=ab[2] } if((a>=0)&(b>=0)){ for(s in 1:ns){ Y=rbinom(n,size = 1,prob = p) V_auc[s]=as.numeric([email protected])} V=outer(X = Vm,Y = Vi, Vectorize(function(x,y) xlab="Probability (Average)", ylab="Dispersion (Q95-Q5)", colorRampPalette(brewer.pal(n = 9, name = "YlGn"))(101)) On the x-axis, we have the average probability to claim a loss. Of course, there is a symmetry here. And on the y-axis, we have the dispersion : the lower, the less heterogeneity in the portfolio. For instance, with a 30% chance to claim a loss on average, and 20% dispersion (meaning that in the portfolio, 90% of the insured have between 20% and 40% chance to claim a loss, or 15% and 35% chance), we have on average a 60% AUC. With a perfect model ! So with only a few covariates, having 55% should be great ! My point here is that with a low dispersion, we cannot expect to have a great AUC (again, even with a perfect model). In motor insurance, from my experience, 90% of the insured are between 3% chance and 20% chance to claim a loss ! That’s less than 20% dispersion ! and in that case, even if the (average) probability is rather small, it is very difficult to expect an AUC above 60% or 65% !
{"url":"https://www.r-bloggers.com/2019/03/on-the-poor-performance-of-classifiers-in-insurance-models/","timestamp":"2024-11-03T13:08:33Z","content_type":"text/html","content_length":"101346","record_id":"<urn:uuid:63a124b4-549d-4fd5-9a21-aa377d6e1d36>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00125.warc.gz"}
A-level Computing 2009/AQA/The Computing Practical Project - Wikibooks, open books for an open world This chapter aims to give you the skills to complete your A2 Computing project. Unlike the rest of the course this unit is entirely based on coursework submitted in May. This is great because if you work hard enough then you can make sure you get some really good marks, you have access to the mark scheme after all! The classic waterfall model Your project involves making a complex programming project that will likely involve data processing, and then writing a report about it. You have to make a program for a real user, this is very important, you can't just make them up. It doesn't have to be incredibly complicated, but you need a degree of complexity in order to write a good report. You can write a computer game but they can be an awful lot of work (2–3 years with a team of 40+ people for the latest console games) and it's often difficult to find a real life user and a real need. Over the course of the project you will be creating a report. This is really important and lots of people spend so much time coding that they forget to complete the write up. The write up is worth nearly 70% of the mark and will take you through a waterfall model of system development. There are many other forms of system development out there and you might find yourself completing sections of the course not necessarily in the order given, the important thing is that you make it a cohesive whole and complete everything. Please use the links below to get you started and note where all the marks are awarded, you might create a brilliant bit of code but if you don't complete your writeup your grade will suffer.
{"url":"https://en.m.wikibooks.org/wiki/A-level_Computing_2009/AQA/The_Computing_Practical_Project","timestamp":"2024-11-02T11:51:13Z","content_type":"text/html","content_length":"32134","record_id":"<urn:uuid:ad021035-8fb4-4d8f-9482-ea216ba03058>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00880.warc.gz"}
[Solved] Consider the hyperbola H:x2−y2=1 and a circle S with c... | Filo Consider the hyperbola and a circle with centre . Suppose that and touch each other at a point with and . The common tangent to and at intersects the -axis at point . If is the centroid of the triangle , then the correct expression(s) is(are) Not the question you're searching for? + Ask your question : Circle with centre Common tangent to and at is Also radius of circle with centre through point of contact is perpendicular to tangent. is the point of intersection of tangent at and -axis Centroid of is Using , Also, lies on , Was this solution helpful? Found 6 tutors discussing this question Discuss this question LIVE for FREE 5 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Practice questions from Coordinate Geometry for JEE Main and Advanced (Dr. S K Goyal) View more Practice more questions from Conic Sections Practice questions on similar concepts asked by Filo students View more Stuck on the question or explanation? Connect with our Mathematics tutors online and get step by step solution of this question. 231 students are taking LIVE classes Question Consider the hyperbola and a circle with centre . Suppose that and touch each other at a point with and . The common tangent to and at intersects the -axis at point . If is the centroid of Text the triangle , then the correct expression(s) is(are) Topic Conic Sections Subject Mathematics Class Class 11 Answer Text solution:1 Upvotes 148
{"url":"https://askfilo.com/math-question-answers/consider-the-hyperbola-h-x2-y21-and-a-circle-s-with-centre-nleftx_2-0right","timestamp":"2024-11-14T13:40:50Z","content_type":"text/html","content_length":"660067","record_id":"<urn:uuid:a74d6c95-985a-4967-9ef5-a5c656ee0de3>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00692.warc.gz"}
Bracewell Nulling Interferometry Enables Astronomers to See the Glow of Alien Stardust - NASA Science 7 min read Bracewell Nulling Interferometry Enables Astronomers to See the Glow of Alien Stardust Large Binocular Telescope Interferometer (LBTI) The search for and characterization of extra-solar planets (‘exoplanets’) that might host life is one of NASA’s highest ambitions. But observing exoplanets and distinguishing them from the scattered light coming from the stars that they orbit is a difficult challenge. Nulling interferometry is a method by which starlight can be suppressed to a high degree, enabling the planetary systems to be directly observed. This technique has already been used to look for dust (debris material from comets and asteroids) that could obscure the light reflected from rocky planets, and may provide a pathway for future imaging of Earthlike planets. Interferometry takes advantage of the wave-like nature of light by combining light from different beams in such a way that the wave ‘crests’ in one beam overlap with the ‘troughs’ in another, canceling each other. Bracewell nulling interferometry, proposed by R. N. Bracewell in 1978, combines the light beams from two separate telescopes that are both pointed at the same star. If the light path lengths from the star through the two telescopes to the detector are kept one half-wavelength away from being perfectly matched, then the images of the star in the two beams will cancel one another. The light from objects next to that star will generally take paths through the two telescopes that do not meet this condition and thus images of those objects will not cancel. It is as though the sky were being observed through a mask whose transmission varies sinusoidally along the axis separating the two telescopes. In this way, Bracewell nulling interferometry enables faint planetary systems to be observed, free from the glare from their host stars. To observe a star’s habitable zone—the region surrounding the star where the temperature permits liquid water to exist—these nulling observations are best done at light wavelengths near ten micrometers since the thermal radiation of an object at the temperature of liquid water is strongest at these wavelengths. NASA’s Jet Propulsion Laboratory and its partners have executed a series of projects over the past two decades to demonstrate and refine the capabilities of ground-based nulling interferometry for exoplanetary research. The first demonstrations were led by Phil Hinz of the University of Arizona at the Multiple Mirror Telescope (MMT) on Mount Hopkins in Arizona. Images of the stars R Leonis and alpha Orionis were nulled using light combined between two of the six segments of MMTs’ primary mirror. The nulled images revealed dust surrounding the stars that was previously not visible. These nulling observations did not correct for the optical pathlength variations due to atmospheric fluctuations, and gathered null images by rapidly collecting many frames and selecting those which happened to best cancel the starlight. This approach was refined in the Bracewell Infrared Nulling Cryostat, or BLINC, which was also installed on the MMT. BLINC actively controlled the optical pathlengths for the two light beams to maintain the interference null. BLINC also employed a dispersive element in one of the beam paths. This made the central fringe on the star a null for a much broader wavelength range than was otherwise possible, and allowed BLINC to actively control the optical pathlengths using light at 2.2 microns wavelength using faster, less noisy detectors, while simultaneously performing science observations using light at 11 microns. BLINC also made use of the adaptive optics on the MMT, which correct for atmospheric turbulence, providing flat wavefronts in the two beams so they would interfere with each other more completely. The Keck Interferometer Nuller, or KIN, at the Keck Observatory on Mauna Kea in Hawaii, dramatically increased the sensitivity of the technique by combining the light from two 10-meter telescopes that were 85 meters apart. Because the fringe spacing is finer the farther apart the two telescopes are, this wide baseline allowed the KIN to observe faint objects much closer to the star than could BLINC, whose mirror subapertures were only a few meters apart. The KIN further separated the light from the left and right halves of each telescope’s primary mirror. By combining the two left sides together and the two right sides together, they created two distinct images of the same region of sky with the central star nulled. These left and right null images were then interferometrically combined with a deliberately modulated pathlength difference between them, which effectively switched on and off the bright fringes, helping to distinguish them from the strong thermal background at the detector. The KIN used these techniques to survey the levels of dust surrounding nearby stars to unprecedented precision. While the KIN setup provided improved sensitivity to close-in objects, the widely separated Keck telescopes with their individual mounts required the nuller to use optical delay lines tens of meters long to equalize the pathlengths of the starlight from the two telescopes. The length of the delay lines needed to be varied as the telescopes viewed targets in different regions of the sky. This introduced additional background noise to these observations. Like KIN, the Large Binocular Telescope Interferometer (LBTI) at Mount Graham, Arizona, uses two large primary mirrors (at 8 meters, nearly as wide as Keck’s 10 meters), but the mirrors are installed on a common mount, similar to the two subapertures of BLINC. Thus, like BLINC, the LBTI does not need delay lines. The 14.4 meter separation of the mirrors results in a fringe spacing that allows LBTI to observe the habitable zones of relatively nearby stars. Like both KIN and BLINC, the Large Binocular Telescope uses adaptive optics, in this case adaptive secondary mirrors. One common problem shared by LBTI, KIN, and BLINC is ensuring that the pathlength control at 2.2 microns is maintaining the precise null at 10 microns. As the amount of water vapor in the atmosphere varies, the degree of light dispersion caused by the atmosphere changes, making the phase relationship between the two observed wavelengths drift over long exposure times. LBTI employs an additional technique, called ‘null self calibration’ (NSC), to account for this effect. NSC was developed by another NASA-funded interferometer project called the Palomar Fiber Nuller on Mount Palomar in The combination of all these techniques culminated in an LBTI survey called the Hunt for Observable Signatures of Terrestrial Systems (HOSTS). Our own Sun is surrounded by dust originating from comets and asteroids, which is visible in the night sky as zodiacal light. HOSTS measured the dust levels around 38 stars in the solar neighborhood, and showed that the likely median dust level is about three times the dust level in our own solar system. This result shows that this exozodiacal light will not prevent currently proposed large space-based telescopes from being able to directly image Earthlike planets around other stars. Even more sensitive surveys are possible in the future by capitalizing on further advances in adaptive optics technology to permit deep nulling of even fainter stars and by using improved, lower-noise mid-infrared detectors. (For additional information, see the list of LBTI HOSTS publications on this Exoplanet Exploration Program webpage.) This research was carried out by the Jet Propulsion Laboratory, California Institute of Technology, and its partners, under a contract with the National Aeronautics and Space Administration Prof. Phil Hinz, University of Arizona (now at UC- Santa Cruz)
{"url":"https://science.nasa.gov/science-research/science-enabling-technology/technology-highlights/bracewell-nulling-interferometry-enables-astronomers-to-see-the-glow-of-alien-stardust/","timestamp":"2024-11-11T14:45:28Z","content_type":"text/html","content_length":"344914","record_id":"<urn:uuid:1027eb77-23b4-4c67-a3cb-9848157c00e8>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00615.warc.gz"}
Monad - (Category Theory) - Vocab, Definition, Explanations | Fiveable from class: Category Theory A monad is an abstract data type that encapsulates computations defined by a type constructor and provides a way to chain operations together. It serves as a framework for managing side effects, allowing for the composition of functions while maintaining a clear separation between pure and impure computations. Monads are often used to handle various types of effects, such as state or exceptions, in a structured manner. congrats on reading the definition of Monad. now let's actually learn it. 5 Must Know Facts For Your Next Test 1. Monads consist of three main components: a type constructor, a unit (or return) function, and a bind (or flatMap) function that allows for chaining operations. 2. The unit function wraps a value into the monadic context, while the bind function takes a monadic value and a function that produces another monadic value, facilitating seamless composition. 3. In the context of free algebras, monads can be used to represent different computational effects and can lead to the construction of free algebras that capture specific behaviors associated with those effects. 4. The Kleisli category associated with a monad allows for reasoning about computations within the monadic structure, where morphisms represent computations producing monadic results. 5. Monads provide a powerful tool for managing side effects in functional programming, promoting cleaner and more maintainable code through explicit handling of contexts. Review Questions • How do monads facilitate the composition of functions while managing side effects? □ Monads facilitate function composition by providing a structured way to handle side effects through their bind function. This allows developers to chain operations together without having to explicitly manage the underlying context or state at each step. By using the unit function to wrap values and the bind function to pass these values through various computations, monads help maintain purity in functional programming while still dealing with necessary side effects. • In what ways do Kleisli categories enhance our understanding of monads and their applications? □ Kleisli categories enhance our understanding of monads by providing a categorical framework in which we can reason about computations producing monadic values. In these categories, morphisms represent sequences of operations that yield results within the context of a specific monad. This abstraction allows for clearer composition rules and better insight into how different types of computations interact with each other, particularly when dealing with multiple monads or complex computational effects. • Evaluate how the concept of free algebras connects to monads and what implications this has for computational structures. □ The concept of free algebras connects to monads through their ability to define structures that capture various computational behaviors. By constructing free algebras from monads, we can explore how different types of effects can be modeled in a modular way. This connection leads to greater flexibility in designing computational structures, as it allows developers to build systems that can easily incorporate or change behaviors without altering the foundational logic, ultimately resulting in more robust and adaptable software architectures. "Monad" also found in: © 2024 Fiveable Inc. All rights reserved. AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
{"url":"https://library.fiveable.me/key-terms/category-theory/monad","timestamp":"2024-11-05T10:28:50Z","content_type":"text/html","content_length":"154842","record_id":"<urn:uuid:1e2b0113-b74a-4aeb-a670-9dde7c958498>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00366.warc.gz"}
国際会議発表International Conferences 1. Z. Dai, W. Lin, S. Ji, H. Sakumoto, M. Takenaka and S. Iwamoto, “Unidirectional lasing in a ring resonator with an S-shaped waveguide”, The 29th MICROOPTICS CONFERENCE (MOC 2024), B-3, Kaohsiung, Taiwan, Sep. 2024. 2. Y. Zhao, N. Harada, and S. Iwamoto, “Design of Broadband Dispersion Compensation Characteristics of Photonic Crystal Slow-light Waveguide by Lightweight Machine Learning”, 2024 International Conference on Solid State Devices and Materials (SSDM2024), E-2-04, Hyogo, Japan, Sep. 2024. 3. (Invited) S. Iwamoto, “Magneto-optical responses of epsilon-near-zero indium tin oxide thin films”, SPIE Optics + Photonics 2024, 13111-20, San Diego, US, Aug. 2024. 4. H X. Dinh, A. Balcytis, T. Ozawa, Y. Ota, T. Baba, S. Iwamoto, A. Mitchell, and T. G. Nguyen, “Observation of Su-Schrieffer-Heeger Topological Model Band Structure in Integrated LNOI Coupled Ring Cavities”, CLEO Pacific Rim 2024 (CLEO-PR 2024), Mo1D-2, Incheon, Korea, Aug. 2024. 5. S. M. Trushin, Y. Ishii, T. Ito, S. Iwamoto, and Y. Ota, “Flat Band Light Localization in One Dimensional Moiré Bilayer Photonic Crystals with Staggered Potential”, CLEO Pacific Rim 2024 (CLEO-PR 2024), Mo1J-3, Incheon, Korea, Aug. 2024. 6. T. Ito, Y. Ishii, S. M. Trushin, G. Lu, S. Iwamoto, and Y. Ota, “Investigation of One-dimensional Moiré Photonic Crystal Nanobeam Cavities”, CLEO Pacific Rim 2024 (CLEO-PR 2024), Mo1J-4, Incheon, Korea, Aug. 2024. 7. A. Balčytis, H. Dinh, T. Ozawa, Y. Ota, T. Baba, S. Iwamoto, A. Mitchell, and T. G. Nguyen, “Photonic Frequency Space Simulation of Tight-Binding Lattices Using Integrated LNOI Ring Resonators”, CLEO Pacific Rim 2024 (CLEO-PR 2024), We2D-5, Incheon, Korea, Aug. 2024. 8. R. Zhang, L. Li, M. Kamata, T. Baba, T. Ozawa, Y. Ota, and S. Iwamoto, “Measurement of Synthetic-Dimension Band Structures in Silicon Coupled Ring Resonators”, CLEO Pacific Rim 2024 (CLEO-PR 2024), Fr2F-2, Incheon, Korea, Aug. 2024. 9. K. Taniguchi, T. Kitai, T. Yambe, S. Gao, S. Iwamoto, and Y. Ota, “Fabrication of Photonic Crystal Nanocavities based on Monocrystalline Yttrium Iron Garnet”, CLEO Pacific Rim 2024 (CLEO-PR 2024), P2-043, Incheon, Korea, Aug. 2024. 10. T. Yambe, T. Kitai, K. Taniguchi, S. Gao, R. Imamura, H. Kumazaki, S. Fujii, S. Iwamoto, T. Tanabe, and Y. Ota, “Demonstration of Magneto-optical Microdisk Resonators Based on Yttrium Iron Garnet”, CLEO Pacific Rim 2024 (CLEO-PR 2024), P2-049, Incheon, Korea, Aug. 2024. 11. (Invited) Y. Ota, S. Gao, T. Kitai, K. Taniguchi, T. Yambe, and S. Iwamoto, “Si metasurface on a magneto-optical thin film for enhancing nonreciprocal polarization rotation”, 14th International Conference on Metamaterials, Photonic Crystals and Plasmonics (META2024), Session 3A8, Toyama, Japan, July 2024. 12. (Invited) S. Iwamoto, “Semiconductor-based Topological Waveguides: Potential and Challenges”, The 21st International Symposium on the Physics of Semiconductors and Applications (ISPSA 2024), TuK1-1, Jeju, Korea, June 2024. 13. (Invited) S. Iwamoto, (Tutorial) “Topological Photonics in Semiconductor Photonic Crystal Platforms: Potential and Challenges”, The 2024 CLEO Conference and Exhibition, STh4P.1, Charlotte, USA, May 2024. 14. T. Hiraki, Y. Maeda, T. Fujii, K. Takeda, T. Aihara, T. Segawa. Y. Ota, S. Iwamoto, Y. Arakawa, and S. Matsuo, “Transfer Printed InP-based Membrane Laser on Sapphire-Si Bonded Substrate” , 2024 IEEE Silicon Photonics Conference (SiPhotonics), WP6, Tokyo Bay, Japan, Apr. 2024. 1. C. Zhang, Y. Ota, and S. Iwamoto, “Valley photonic crystal heterostructure based waveguide supporting slow-light modes with a large mode area”, The 13th International Symposium on Photonics and Electronics Convergence -Advanced Nanophotonics and Silicon Device Systems- (ISPEC2023), P-11, Tokyo, Japan, Nov. 2023. 2. A. Fujita, N. Pholsen, R. Kawata, S. Ji, T. Kamei , M. Okano, S. Iwamoto and Y. Ota, ” Two dimensional photonic crystal nanocavity with quantum dots integrated on a glass clad Si waveguide using transfer printing”, 1st International Workshop on Quantum Information Engineering (QIE2023), P13, Okinawa Institute of Science and Technology (OIST), Okinawa, Japan. Oct. 2023. 3. K. Iijima, S. Ishida, H. Matsukiyo, S. Ji, H. Otsuki, M. Nishioka, T. Makino, H. Kato and S. Iwamoto, ” Fabrication and optical characterization of air bridged diamond photonic crystal cavities”, 1st International Workshop on Quantum Information Engineering (QIE2023), P8, Okinawa Institute of Science and Technology (OIST), Okinawa, Japan. Oct. 2023. 4. (Invited) S. Iwamoto, ” How Can Integrated Photonics Boost Quantum Technology?”, 28th Microoptics Conference (MOC2023), SQ-1, Miyazaki, Japan, Sep.2023. 5. C. Zhang and S. Iwamoto, ” Design of a Heterostructured Valley Photonic Crystal Waveguide Supporting a Slow-Light Mode with a Large Mode Width”, 28th Microoptics Conference (MOC2023), G-2, Miyazaki, Japan, Sep.2023. 6. Z. Dai, W. Lin, and S. Iwamoto, ” Rate Equation Analysis for Deterministic and Unidirectional Lasing in Ring Resonators with an S-shaped Coupler”, 2023 International Conference on Solid State Devices and Materials (SSDM2023), H-3-04, Nagoya, Japan, Sep. 2023. 7. (Invited) S. Iwamoto, H. Yoshimi, K. Kuruma, R. Miyazaki, and Y. Ota, ” Slow light waveguide based on semiconductor valley photonic crystal”, Plasmonics: Design, Materials, Fabrication, Characterization, and Applications XXI in SPIE Optics + Photonics 2023, 12648-32, San Diego, USA, Aug. 2023. 8. (Invited) S. Iwamoto, ” Topological states of light in nanophotonic structures”, 22nd International Conference on Electron Dynamics in Semiconductors, Optoelectronics and Nanostructures (EDISON 22), Mo A-1, Munster, Germany, Aug. 2023. 9. H. T. Phan, S. Takahashi, S. Iwamoto, and K. Wakabayashi,” Topological states in 3D Woodpile photonic crystal”, The 4th International Workshop on Advanced Materials and Devices (IWAMD 2023), Thai Nguyen, Viet Nam, Aug. 2023. 10. N. Ishida, M. Ezawa, G. Lu. W. Lin, Y. Ota, and S. Iwamoto, ” Topological corner states in photonic bilayer square lattice with π flux”, 13th International Conference on Metamaterials, Photonic Crystals and Plasmonics (META2023), Session 1P2, P17, Paris, France, July 2023. 11. (Invited) Y. Ota, S. Gao, and S. Iwamoto, ” Ultrathin magneto-optical devices based on all-dielectric metasurfaces”, 13th International Conference on Metamaterials, Photonic Crystals and Plasmonics (META2023), Session 2A41, Paris, France, July 2023. 12. G. Lu, Y. Ota, and S. Iwamoto, ” Lasing oscillation in twisted quadrupole topological photonic crystals”, 25th International Conference on the Electronic Properties of Two-Dimensional Systems & 21st International Conference on Modulated Semiconductor Structures (EP2DS-25&MSS-21), Grenoble, France, July 2023. 13. (Invited) F. Tian, Y. Ota, and S. Iwamoto, ” Microwave Design of a Nano-optomechanical System with Exceptional Point at Room Temperature”, The Progress In Electromagnetics Research Symposium (PIERS) 2023, Prague, Czech, July 2023.(online presentation) 14. (Invited) S. Takahashi, Y. Imanishi, Y. Ashida, S. Tamaki, K. Yamashita, T. Ueda, H. T. Phan, K. Wakabayashi, Y. Hatsugai and S. Iwamoto, ” Microwave Topological Edge States in Three-Dimensional Structures”, The 7th A3 metamaterial forum 2023, I-22, Kyoto, Japan, June 2023. 15. K. Ikeda, T. Liu, Y. Ota, S. Iwamoto, N. Kobayashi, ” Realization of large magneto-optical effect by diagonal permittivity change in epsilon-near-zero materials”, INTERMAG2023, Sendai, Japan, May 16. A. Balčytis, X, H. Dinh, T. Ozawa, Y. Ota, T. Baba, S. Iwamoto, A. Mitchell, and T. G Nguyen, ” Synthetic frequency dimension state coupling in modulated LNOI ring cavity devices”, The 2023 CLEO Conference and Exhibition, SW3O. 1, San Jose, USA, May 2023. 17. Y. Yang, S. Gao and S. Iwamoto, ” Design of a nanocavity in an AlN-diamond hybrid nanobeam structure with a photonic band gap”, The 13th International Symposium on Photonic and Electromagnetic Crystal Structures (PECS-XIII), P-46, Tokyo, Japan, Mar. 2023. 18. H. Thanh Phan, S. Takahashi, S. Iwamoto and K. Wakabayashi, “Zak Phase and the Existence of Topological States in Three-Dimensional Photonic Crystals”, The 13th International Symposium on Photonic and Electromagnetic Crystal Structures (PECS-XIII), P-37, Tokyo, Japan, Mar. 2023. 19. S. Takahashi, Y. Ashida, H. T. Phan, K. Yamashita, T. Ueda, K. Wakabayashi and S. Iwamoto, “Microwave Observation of a Second-Order Topological Boundary State in a Three-Dimensional Woodpile Photonic Crystal”, The 13th International Symposium on Photonic and Electromagnetic Crystal Structures (PECS-XIII), P-34, Tokyo, Japan, Mar. 2023. 20. N. Pholsen, Y. Ota and S. Iwamoto, “Numerical Design of a GaAs Nanobeam Cavity on a Silicon Nitride Waveguide for Efficiently Generating Single Photons by Resonant Excitation”, The 13th International Symposium on Photonic and Electromagnetic Crystal Structures (PECS-XIII), P-23, Tokyo, Japan, Mar. 2023. 21. G. Lu, Y. Ota and S. Iwamoto, “A topological nanocavity in photonic crystal slabs exhibiting quadrupole topological phase”, The 13th International Symposium on Photonic and Electromagnetic Crystal Structures (PECS-XIII), P-20, Tokyo, Japan, Mar. 2023. 22. R. Kawata, A. Fujita, P. Natthajuks, S. Iwamoto and Y. Ota, “High-Q Two-dimensional Photonic Crystal Nanocavity on Glass with a Top Glass Plate”, The 13th International Symposium on Photonic and Electromagnetic Crystal Structures (PECS-XIII), P-19, Tokyo, Japan, Mar. 2023. 23. H. Yoshimi, H. Kagami, S. Okada, W. Lin, T. Amemiya, Y. Ota, N. Nishiyama and S. Iwamoto, “Experimental Investigation for Meron Polarization Textures in Band Structures of Valley Photonic Crystals”, The 13th International Symposium on Photonic and Electromagnetic Crystal Structures (PECS-XIII), P-12, Tokyo, Japan, Mar. 2023. 24. Y. Ishii, S. Trushin, G. Lu, S. Iwamoto and Y. Ota, “Analysis of photonic band structure for InP/Si hetero twist-stacked photonic crystal slabs”, The 13th International Symposium on Photonic and Electromagnetic Crystal Structures (PECS-XIII), P-05, Tokyo, Japan, Mar. 2023. 25. F. Tian, M. Takiguchi, E. Kuramochi, H. Sumikura, M. Notomi and S. Iwamoto, “Demonstration of optically tunable coupling between two nanomechanical oscillators”, The 13th International Symposium on Photonic and Electromagnetic Crystal Structures (PECS-XIII), 30-E-03, Tokyo, Japan, Mar. 2023. 26. S. Gao, Y. Ota and S. Iwamoto, “Design of an Ultra-thin Faraday Rotator based on a Magneto-Photonic Crystal”, The 13th International Symposium on Photonic and Electromagnetic Crystal Structures (PECS-XIII), 29-A-05, Tokyo, Japan, Mar. 2023. 27. T. Nakama, R. Nakamura, A. Balcytis, H. Ito, T. Baba, T. Ozawa, Y. Ota and S. Iwamoto, “Observation of topological states in Si photonics SSH structure”, The 13th International Symposium on Photonic and Electromagnetic Crystal Structures (PECS-XIII), P-59, Tokyo, Japan, Mar. 2023. 28. S. Ji, T. Tajiri, X. F. Liu, H. Kiyama, A. Oiwa, J. Ritzmann, A. Ludwig, A. D. Wieck and S. Iwamoto, “Polarization-independent absorption enhancement of a GaAs quantum well embedded in an air-bridge bull’s-eye cavity with metal”,The 13th International Symposium on Photonic and Electromagnetic Crystal Structures (PECS-XIII), P-57, Tokyo, Japan, Mar. 2023. 1. A. Balcytis, T. Nguyen, X. H. Dinh, T. Ozawa, Y. Ota, T. Baba, S. Iwamoto and A. Mitchell, “LNOI ring resonators for synthetic frequency dimension photonics”, 24th Australian Institute of Physics Congress, Adelaide, Australia, Dec. 2022. 2. S. Iwamoto, J. Kwoen, and Y. Arakawa, “Development of 1.5-um InAs Quantum Dots on InP Substrate towards On-Chip Light Sources and Design of Photonic Nanostructured Waveguide for Dispersion Compensation”, The 12nd International Symposium on Photonics and Electronics Convergence -Advanced Nanophotonics and Silicon Device Systems- (ISPEC2022), D-3, Tokyo, Japan, Dec. 2022. 3. Z. Dai, W. Lin, and S. Iwamoto, “Numerical Analysis of unidirectional lasing in a semiconductor ring resonator”, The 12nd International Symposium on Photonics and Electronics Convergence -Advanced Nanophotonics and Silicon Device Systems- (ISPEC2022), P-03, Tokyo, Japan, Dec. 2022. 4. (Keynote) S. Iwamoto “Semiconductor-based topological nanophotonics: fundamentals and recent progress”, 35th International Microprocesses and Nanotechnology Conference (MNC 2022), Tokushima, Japan, Nov. 2022. 5. (Invited)S. Iwamoto “Topological Slow-Light Waveguide Based On Semiconductor Vallery Photonic Crystal”, Asia Communications and Photonics Conference (ACP) and International Conference on Information Photonics and Optical Communications (IPOC)(ACP/IPOC 2022), Shenzhen, China and virtual, Nov. 2022. 6. N. Pholsen, Y. Ota, and S. Iwamoto, “Design of a Quantum-Dot Single-Photon Source on a Silicon Nitride Waveguide for Efficient Photon Generation by Resonant Excitation”, 28th International Semiconductor Laser Conference (ISLC 2022), Matsue, Japan, Oct. 2022. 7. Y. Maeda, T. Aihara, T. Fujii, T. Hiraki, K. Takeda, T. Tsuchizawa, H. Sugiyama, T. Sato, T. Segawa,Y. Ota, S. Iwamoto, Y. Arakawa, and S. Matsuo, “High-temperature Operation of Membrane DR Lasers Integrated with Si Waveguide by Micro-transfer Printing Method”, 28th International Semiconductor Laser Conference (ISLC 2022), Matsue, Japan, Oct. 2022. 8. S. Ji, T. Tajiri, X. F. Liu , H. Kiyama, A. Oiwa, J. Ritzmann, A. Ludwig, A. D. Wieck and S. Iwamoto “Polarization-Independent Enhancement of Optical Absorption in a GaAs Quantum Well Embedded in an Air-bridge Bull’s-eye Cavity”, 2022 International Conference on Solid State Devices and Materials (SSDM2022), A-10-03, Chiba, Japan Sept. 2022. 9. H. Yoshimi, T. Yamaguchi, S. Ishida, Y. Ota and S. Iwamoto “Demonstration of an Efficient Light Coupler to a Valley Photonic Crystal Waveguide Formed at a Bearded Interface”, 2022 International Conference on Solid State Devices and Materials (SSDM2022), A-10-01, Chiba, Japan Sept. 2022. 10. W. Lin, Y. Ota and S. Iwamoto, “A Method for Generating Spatiotemporal Polarization Topologies in Dichromatic Light Beams”, 27th Microoptics Conference (MOC2022), Jena, Germany Sept. 2022. 11. T. Yamaguchi , H. Yoshimi, M. Seki , M. Ohtsuka, N. Yokoyama, Y. Ota , M. Okano, and S. Iwamoto, “Optical characterization of valley photonic crystal waveguides fabricated with CMOS-compatible process”, 27th Microoptics Conference (MOC2022), Jena, Germany Sept. 2022. 12. Y. Maeda, T. Fujii, T. Aihara, T. Hiraki, K. Takeda, T. Tsuchizawa, H. Sugiyama, T. Sato, T. Segawa, Y. Ota, S. Iwamoto, Y. Arakawa, and S. Matsuo, “Micro-transfer-printed Membrane DR Lasers on Si Waveguide Modulated with 50-Gbit/s NRZ Signal”, 2022 European Conference on Optical Communication (ECOC 2022), Basel, Switzerlan, Sept. 2022. 13. (Invited) Y. Ota, Y. Arakawa and S. Iwamoto, “Semiconductor topological nanophotonics incorporating light emitters”, 12th International Conference on Metamaterials, Photonic Crystals and Plasmonics (META2022), Session 1A6, online (Spain), Jul. 2022. 14. T. Xiao, Z. Luo, K. Hiramatsu, A. Isozaki, T. Itoh, Z. Cheng, M. Nomura, S. Iwamoto and K. Goda, ” On-chip chiral-field-enhanced Raman optical activity for biosensing”, Conference on Lasers and Electro-Optics Pacific Rim 2022 (CLEO-OR 2022), CPDP-06, Sapporo, Japan Jul.2022. 15. S. Ji, T. Tajiri, X.F. Liu, H. Kiyama, A. Oiwa, J. Ritzmann, A. Ludwig, A. D. Wieck and S. Iwamoto, “Polarization-independent Light Emission from Air-bridge Bull’s-eye Cavities Containing a GaAs Quantum Well”, Conference on Lasers and Electro-Optics Pacific Rim 2022 (CLEO-PR 2022), CTuP8A-04, Sapporo, Japan Jul.2022. 16. (Invited) R. Nakamura, T. Nakama, A. Balcytis, T. Ozawa, Y. Ota, S. Iwamoto, H. Ito and T. Baba, ” Topological modes observed in Si photonics SSH integrated circuit”, Conference on Lasers and Electro-Optics Pacific Rim 2022 (CLEO-PR 2022), CThA8D-02, Sapporo, Japan Jul.2022. 17. Y. Ashida, K. Yamashita, T. Ueda, K. Wakabayashi, S. Iwamoto and S. Takahashi, “Microwave Hinge State in a Three-Dimensional Photonic Crystal Composed of Simple Cubic Lattices”, Conference on Lasers and Electro-Optics Pacific Rim 2022 (CLEO-PR 2022), CFA8H-03, Sapporo, Japan Jul.2022 18. S. Takahashi, T. Tajiri, Y. Arakawa, S. Iwamoto and W. L. Vos, “Transport of Circularly Polarized Light in Three Dimensional Chiral Photonic Crystals”, Conference on Lasers and Electro-Optics Pacific Rim 2022 (CLEO-PR 2022), CFP8J-06, Sapporo, Japan Jul.2022 19. K. Kuruma, B. Pngault, C. Chia, D.Renaud, P. Hoffmann, S. Iwamoto, C. Ronning and M. Lončar, “Optical Coupling between a Single Tin-vacancy Center and a Photonic Crystal Nanocavity in Diamond”, Conference on Lasers and Electro-Optics Pacific Rim 2022 (CLEO-PR 2022), CPDP-04, Sapporo, Japan Jul.2022 20. N. Pholsen, Y. Ota, R.Katsumi, Y. Arakawa and S. Iwamoto, “Design of a quantum-dot single-photon source on a silicon nitride waveguide for efficient and indistinguishable photon generation”, Conference on Lasers and Electro-Optics Pacific Rim 2022 (CLEO-PR 2022), P-CTu8-22, Sapporo, Japan Jul.2022. 21. S. Gao, Y. Ota, T. Liu and S. Iwamoto, “Design of an All-dielectric Magneto-optical Metasurface with Giant Faraday Effect and High Light Transmission”, Conference on Lasers and Electro-Optics Pacific Rim 2022 (CLEO-PR 2022), CFA8G-02, Sapporo, Japan Jul.2022. 22. (Invited) S. Iwamoto “Semiconductor-based topological nanophotonics”, 2022 Asia-Pacific Workshop on Fundamentals and Applications of Advanced Semiconductor Devices (AWAD 2022), Virtual, Jul. 2022 23. S. Iwamoto “Creating photonic topology in integrated photonics platforms”, The 6th A3 Metamaterials Forum, Seoul, Korea and Virtual, Jun. 2022. 24. (Invited) S. Iwamoto “Optical cavities in topological photonic crystals”, The NanoPhoton conference on ‘Fundamentals and applications of semiconductor nanocavities’ , Copenhagen, Denmark, June 25. (Invited) Y. Ota, Y. Arakawa, and S. Iwamoto, “Topological Nano/Micro/High-Power Lasers”, The 2022 CLEO Conference and Exhibition, JF2B.4, San jose, USA and Virtual, May 2022. 26. W. Lin, Y. Ota, Y. Arakawa, and S. Iwamoto, “Demonstration of on-Chip Optical Skyrmionic Beam Generators”, The 2022 CLEO Conference and Exhibition, SM2N.4, San jose, USA and Virtual, May 2022. 27. C. F. Fong, Y. Ota, Y. Arakawa, S. Iwamoto, and Yuichiro K. Kato, “Intrinsically Chiral Modes Near Exceptional Points in Modified H1 Photonic Crystal Cavity Modes”, The 2022 CLEO Conference and Exhibition, SM3H.7, San jose, USA and Virtual, May 2022. 28. T. Feng, Y. Ota, and S. Iwamoto, “Exceptional-Point Encirclement in an Integrated Non-Hermitian Optomechanical System”, The 2022 CLEO Conference and Exhibition, JTh3A.64, San jose, USA and Virtual, May 2022. 29. S. Iwamoto “Topological photonics in integrated photonic platforms”, Bulk-Edge/Boundary Correspondence 2022 (BE/BC2022) International workshop,Online (Zoom) & University of Tsukuba, Japan, 30. C. Zhang “Theoretical analysis on photonic analog of quantum spin Hall effect in a square-lattice based photonic topological insulator”, Bulk-Edge/Boundary Correspondence 2022 (BE/BC2022) International workshop,P3, Online (Zoom) & University of Tsukuba, Japan, Feb.2022 1. (invited) Y. Ota, Y. Arakawa and S. Iwamoto “Semiconductor Topological Nanophotonics”, IEEE International Electron Devices Meeting (IEDM), San Francisco, CA, USA & on-demand, Dec.2021 2. (invited) Y. Ota,S. Iwamoto and Y. Arakawa, “Transfer printing for hybrid-integrated nanophotonics on chip”,OPJ2021 – Optics & Photonics Japan 2021 Joint Symposia on Optics Program, 27pCJ5, Tokyo, Japan, Oct.2021 3. (invited) S. Iwamoto “Topological Nanophotonics Based on Semiconductor Photonic Crystals”, OPTICA Webinar, Nov. 2021 (online) [web] 4. W. Zhan, J. Kwoen, T. Imoto, S. Iwamoto, and Y. Arakawa, “E-band InAs/GaAs Tri-layer Quantum Dot Lasers with Low Threshold Current Densities”, The 11th International Symposium on Photonics and Electronics Convergence -Advanced Nanophotonics and Silicon Device Systems- (ISPEC2021), P-24, Tokyo, Japan, Dec.2021 (online virtual symposium) 5. R. Nakamura, A. Balčytis, H. Ito, T. Baba, T. Ozawa, Y. Ota, and S. Iwamoto, “Wavefunction Observation of Topological Bulk & Edge States in Si Photonics SSH Structure”, The 11th International Symposium on Photonics and Electronics Convergence -Advanced Nanophotonics and Silicon Device Systems- (ISPEC2021), P-06, Tokyo, Japan, Dec.2021 (online virtual symposium) 6. S. Iwamoto, “Ring-cavity Laser Based on Valley Photonic Crystal Slow-light Waveguide Structure”, The 11th International Symposium on Photonics and Electronics Convergence -Advanced Nanophotonics and Silicon Device Systems- (ISPEC2021), C-2, Tokyo, Japan, Dec.2021(online virtual symposium) 7. T. Liu, Y. Ota and S. Iwamoto, “One-side excited one-way surface modes in epsilon-near-zero magneto-optical waveguides”, EP2DS-24/MSS-20, M-PS-3-04,Toyama, Japan, Nov. 2021 (online format) 8. F. Tian, Y. Ota and S. Iwamoto, “Theoretical analysis of optically controlled parity-time symmetry in optomechanics”, EP2DS-24/MSS-20, M-PS-4-07, Toyama, Japan, Nov. 2021 (online format) 9. (Invited) Y. Ota, S. Iwamoto and Y. Arakawa, “Hybrid integrated light sources on silicon assembled by transfer printing”, 2021 IEEE Photonics Conference (IPC), Oct.2021 (online format) 10. (Invited) Y. Ota, Y. Arakawa, and S. Iwamoto, “Topological nanocavity lasers and topological high-power lasers “, DPG and DPG-Tagung (DPG Meeting) of the Condensed Matter Section (SKM21), Oct.2021 (online format) 11. T. Yamaguchi, H. Yoshimi, M. Seki, M. Ohtsuka, N. Yokoyama, Y. Ota, M. Okano and S. Iwamoto, “Fabrication of valley photonic crystals with CMOS-compatible process”, 26th Microoptics Conference (MOC), G-3, Sept. 2021 (online format) 12. C. Zhang , H. Yoshimi , Y. Ota and S. Iwamoto, “Two-dimensional topological photonic crystals with helical edge states below the light line”, 26th Microoptics Conference (MOC), G-2, Sept. 2021 (online format) 13. R. Miyazaki, K. Kuruma, H. Yoshimi, R. Katsumi, T. Yamaguchi, Y. Ota, Y. Arakawa and S. Iwamoto, “Lasing from a valley photonic crystal ring resonator with a bearded interface”, 2021 International Conference on Solid State Devices and Materials (SSDM) E-5-04, 6-9 Sept. 2021 (virtual conference) 14. Armandas Balčytis, Tomoki Ozawa, Yasutomo Ota, Satoshi Iwamoto, Jun Maeda, Toshihiko Baba, “Synthetic Dimension Photonics on a Si CMOS Platform”, Conference on Lasers and Electro-Optics (CLEO), Stu1F. 4, May 2021. (A Virtual Conference) 15. Kazuhiro Kuruma, Hironobu Yoshimi, Yasutomo Ota, Ryota Katsumi, Masahiro Kakuda, Marko Lončar, Yasuhiko Arakawa and Satoshi Iwamoto, “Single photon generation in a topological slow light waveguide”, Conference on Lasers and Electro-Optics (CLEO), FW4I.2, May 2021. (A Virtual Conference) 16. Wenbo Zhan, Jinkwan Kwoen, Takaya Imoto, Satoshi Iwamoto1 and Yasuhiko Arakawa, “InAs/GaAs Tri-layer Quantum Dot Lasers”, COMPOUND SEMICONDUCTOR WEEK 2021 (CSW-2021), TuD2-5, 11 May 2021. (online 1. N. Ishida, Y. Ota, W. Lin, Y. Arakawa, and S. Iwamoto, “Analysis of Threshold Gain Difference in a Topological Edge State Laser”, 10th International Symposium on Photonics and lectronics Convergence (ISPEC 2020), S-29, November 2020 (web symposium) 2. W. Zhan, J. –K. Kwoen, K. Watanabe, M. Kakuda, S. Iwamoto, and Y. Arakawa, “Growth of InAs/GaAs Trilayer Quantum Dots with Emission Wavelength beyond 1.4 µm”, 10th International Symposium on Photonics and lectronics Convergence (ISPEC 2020), S-27, November 2020 (web symposium) 3. S. Iwamoto, H. Yoshimi, T. Yamaguchi, Y. Ota and Y. Arakawa, “Slow-Light Waveguide in Semiconductor Valley Photonic Crystal”, 10th International Symposium on Photonics and lectronics Convergence (ISPEC 2020), E-3, November 2020 (web symposium) 4. S. Ji, T. Tajiri, H. Kiyama, A. Oiwa, and S. Iwamoto, “Design of Semiconductor Bull’s-eye Optical Cavity for High-efficiency Quantum Media Conversion using a Gate-defined Quantum Dot”, 2020 International Conference on Solid State Devices and Materials (SSDM2020), E-8-02, Toyama, Japan, September (2020) (web conference format) 5. (Keynote) Satoshi Iwamoto,”Topological waveguides and nanocavities based on semiconductor photonic crystals”, METANANO 2020 (V International Conference on Metamaterials and Nanophotonics), Tbilisi, Republic of Georgia, Sept. 2020 (web conference format) 6. H.Yoshimi, T.Yamaguchi, R.Katsumi, Y.Ota, Y.Arakawa, and S.Iwamoto, “Slow Light Waveguide Based on Topological Edge States in Valley Photonic Crystals”, Conference on Lasers and Electro-Optics (CLEO), STu3J.7, San Jose, California, USA, May 2020. (web conference format) 7. R.Katsumi, Y.Ota, T.Tajiri, M.Kakuda, H.Akiyama, S.Iwamoto, and Y.Arakawa, “Efficient single photon sources transfer-printed on Si with unidirectional light output”, Conference on Lasers and Electro-Optics (CLEO), FF2D.3, San Jose, California, USA, May 2020. (web conference format) 8. N. Ishida, Y. Ota, W. Lin, Y. Arakawa, and S. Iwamoto, “Investigation on a single-mode array laser based on a topological edge state”, International workshop “Variety and universality of bulk-edge correspondence in topological phases:From solid state physics to transdisciplinary concepts“, P 16, Tokyo, Japan, February (2020). 9. (Invited) S. Iwamoto, “Light Propagation in Semiconductor Valley Photonic Crystals”, International workshop “Variety and universality of bulk-edge correspondence in topological phases:From solid state physics to transdisciplinary concepts“, Tokyo, Japan, February (2020). 10. (Invited) S. Iwamoto, Y. Ota, T. Yamaguchi, H. Yoshimi, and Y. Arakawa, “Topological waveguides and nanoacvities using semiconductor photonic crystals”, SPIE Photonics West 2020, Paper 11274-46, The Moscone Center,San Francisco, USA, February (2020). 11. S. Iwamoto, “Semiconductor-based topological photonics”, The 1st SNU-UT Workshop on Nanophotonics, Seoul National University, Seoul, Korea, January (2020). 12. T. Yamaguchi, “Topologically-protected valley kink state in a slab-type photonic crystal waveguide”, SThe 1st SNU-UT Workshop on Nanophotonics, Seoul National University, Seoul, Korea, January 13. R. Katsumi, “Quantum-dot single-photon sources transfer-printed on a CMOS Si photonic chip“, The 1st SNU-UT Workshop on Nanophotonics, Seoul National University, Seoul, Korea, January (2020). 1. (Invited) Y. Ota, R. Katsumi, A. Osada, M. Kakuda, S. Iwamoto, and Y. Arakawa, “Hybrid-integrated silicon quantum photonics toward scalable photonic quantum information processing”, The 42nd PhotonIcs & Electromagnetics Research Symposium (PIERS) in Xiamen, Xiamen, China, December (2019). 2. W. Lin, Y. Ota, Y. Arakawa, and S. Iwamoto, “Higher-order Poincaré sphere beam generation via a micro ring resonator”, International Symposium on Hybrid Quantum Systems 2019 (HQS2019), Wed-A1-4, Matsue, Shimane, Japan, December (2019). 3. Y. Ota, F. Liu, R. Katsumi, K. Watanabe, K. Wakabayashi, Y. Arakawa, and S. Iwamoto, “Light Trapping in a Higher-Order Topological Corner State”, The 9th International Symposium on Photonics and Electronics Convergence-Advanced Nanophotonics and Silicon Device Systems-(ISPEC2019), P-28, Tokyo, Japan, November (2019). 4. W. Zhan, J. –K. Kwoen, K. Watanabe, S. Iwamoto, and Y. Arakawa, “Emission at 1.39 μm from InAs/GaAs Trilayer Quantum Dots”, The 9th International Symposium on Photonics and Electronics Convergence-Advanced Nanophotonics and Silicon Device Systems-(ISPEC2019), P-32, Tokyo, Japan, November (2019). 5. R. Katsumi, Y. Ota, A. Osada, T. Yamaguchi, T. Tajiri, M. Kakuda, S. Iwamoto, and Y. Arakawa, “Hybrid Integration of a Quantum-Dot Single-Photon Emitter on a CMOS-Processed Si Waveguide Using Transfer Printing”, The 9th International Symposium on Photonics and Electronics Convergence-Advanced Nanophotonics and Silicon Device Systems-(ISPEC2019), P-30, Tokyo, Japan, November (2019). 6. R. Katsumi, Y. Ota, A. Osada, T. Tajiri, T. Yamaguchi, M. Kakuda, S. Iwamoto, H. Akiyama, and Y. Arakawa, “SLocal tuning of transfer-printed quantum-dot single-photon sources on a CMOS silicon chip”, THE TWENTY-FOURTH MICROOPTICS CONFERENCE, G-6, Toyama, Japan, November 2019. 7. (Invited) S. Iwamoto, “Integrated topological photonics using semiconductor-based photonic nanostructures”, The International Symposium on Plasmonics and Nanophtonics (iSPN2019), Kobe, Japan, November 2019. 8. (Invited) S. Iwamoto, “Semiconductor-based topological photonics”, Partners for International Business Workshop, Enschede, Netherlands, October 2019. 9. (Invited) Y. Ota, R. Katsumi, A. Osada, M. Kakuda, S. Iwamoto, and Y. Arakawa, “Hybrid integration of quantum dot-nanocavity systems on silicon”, Frontiers in Optics + Laser Science APS/DLS, FM3D.4, Washington, DC, USA, September 2019. 10. (Invited) S. Iwamoto, “Generation of Structured Light Using Spin-orbit Interaction of Light in Photonic Nanostructures”, Optomagnonics 2019 at Cambridge, Cambridge, UK, September 2019. 11. W. Lin, Y. Ota, Y. Arakawa, and S. Iwamoto, “Optical Skyrmionic Beam Generation Using a Micro Cavity”, Optomagnonics 2019 at Cambridge, Cambridge, UK, September 2019. 12. (Invited)Y. Ota, R. Katsumi, A. Osada, M. Kakuda, S. Iwamoto, and Y. Arakawa, “Hybrid integration of quantum/classical light sources on Si using transfer printing”, 2019 International Conference on Solid State Devices and Matrials (SSDM2019), B-1-01, Nagoya, Japan, September 2019. 13. K. Kuruma, Y. Ota, S. Iwamoto and Y. Arakawa, “Scheme for Coherent Control of Vacuum Rabi Oscillations in a Quantum Dot-Cavity System using Geometric Phases”, 2019 International Conference on Solid State Devices and Matrials (SSDM2019), B-6-04, Nagoya, Japan, September 2019. 14. T. Tajiri, K. Kuruma, Y. Sakai, H. Kiyama, A. Oiwa, J. Ritzmann, A. Ludwig, A. D. Wieck, Y. Ota, Y. Arakawa, S. Iwamoto, “Fabrication and Optical Characterization of Photonic Crystal Nanocavities with Electrodes for Gate-Defined Quantum Dot”, 2019 International Conference on Solid State Devices and Matrials (SSDM2019), E-2-04, Nagoya, Japan, September 2019. 15. (Invited) S. Iwamoto, “Topological photonics using semiconductor photonic crystals”, The 4th A3 Metamaterials Forum 2019 (A3META2019), Sapporo, Japan, August 2019. 16. (Invited) Y. Ota, R. Katsumi, A. Osada, M. Kakuda, S. Iwamoto, Y. Arakawa, “Integrated quantum dot single photon sources on Si”, The 11th International Conference on Information Optics and Photonics(CIOP2019), Xi’an China, August 2019. 17. (Invited) S. Iwamoto, “Semiconductor topological photonic crystal nanocavities”, International Workshop on New Trends in Topological Insulators 2019 & Variety and Universality of Bulk-edge Correspondence in Topological Phases (NTTI2019 and BEC2019), IP-09, Hiroshima, Japan, July 2019. 18. Y. Ota, F. Liu, R. Katsumi, K. Watanabe, K. Wakabayashi, Y. Arakawa, and S. Iwamoto, “Observation of a topological corner state in a two-dimensional photonic crystal in the optical regime”, International Workshop on New Trends in Topological Insulators 2019 & Variety and Universality of Bulk-edge Correspondence in Topological Phases (NTTI2019 and BEC2019), PP-26, Hiroshima, Japan, July 2019. 19. H. Yoshimi, T. Yamaguchi, Y. Ota, Y. Arakawa, and S. Iwamoto, “Numerical Analysis on Edge States at Zigzag and Bearded Interfaces in Valley Photonic Crystals”, International Workshop on New Trends in Topological Insulators 2019 & Variety and Universality of Bulk-edge Correspondence in Topological Phases (NTTI2019 and BEC2019), PP-27, Hiroshima, Japan, July 2019. 20. (Invited) S. Iwamoto, “Confinement of light in semiconductor using topological concept”, Canada-Japan Workshop on Hybrid Quantum Systems, Ottawa, Canada, June 2019. 21. (Invited) S. Iwamoto, “Topological Localized States in Semiconductor Photonic Crystals”, International Workshop TOPOLOGY, Tsukuba, Japan, June 2019. 22. I. Kim, Z. Sun, Y. Arakawa, and S. Iwamoto, “Multi-band valley-protected topological edge states in GaAs-based nanophononic crystals with complete phononic bandgaps”, Compound Semiconductor Week 2019, TuA1-8, Nara, Japan, May 2019. 23. S. Takahashi, S. Oono, Y. Hatsugai, Y. Arakawa, and S. Iwamoto, “Numerical Investigation of Topological Edge States in a GaAs-Based Three-Dimensional Chiral Photonic Crystal”, Compound Semiconductor Week 2019, MoP-D-5, Nara, Japan, May 2019. 24. Y. Kinuta, S. Takahashi, K. Yamashita, J. Tatebayashi, S. Iwamoto, and Y. Arakawa, “Chiral Cavity Mode in a GaAs-Based Three-Dimensional Photonic Crystal Fabricated by a Micro-Manipulation Method using an Optical Microscope”, Compound Semiconductor Week 2019, MoP-D-2, Nara, Japan, May 2019. 25. W. Zhan, S. Ishida, J. Kwoen, S. Iwamoto, and Y. Arakawa, “1.6 µm Emission from InAs QDs in Metamorphic InGaAs Matrix”, Compound Semiconductor Week 2019, MoP-A-8, Nara, Japan, May 2019. 26. W. Lin, Y. Ota, Y. Arakawa, and S. Iwamoto, “An On-chip Full Poincaré Beam Emitter Based on an Optical Micro-ring Cavity”, Conference on Lasers and Electro-Optics (CLEO),SW4J.4, San Jose, California, USA, May 2019. 27. R. Katsumi, Y. Ota, A. Osada, T. Yamaguchi, T. Tajiri, M. Kakuda, S. Iwamoto, and Y. Arakawa, “Quantum-dot single-photon source on a CMOS-processed silicon waveguide”, Conference on Lasers and Electro-Optics (CLEO),FM1M.2, San Jose, California, USA, May 2019. 28. Y. Ota, R. Katsumi, K. Watanabe, F. Liu, K. Wakabayashi, S. Iwamoto, and Y. Arakawa, “Nanocavity based on a topological corner state in a two-dimensional photonic crystal”, Conference on Lasers and Electro-Optics (CLEO),SW4J.1, San Jose, California, USA, May 2019. 29. (Invited) A. Osada, Y. Ota, R. Katsumi, M. Kakuda, S. Iwamoto, and Y. Arakawa, “Strongly-coupled single quantum dot-cavity system on a silicon waveguide”, International Conference on Nano-photonics and Nano-optoelectronics 2019(ICNN2019),ICNN-4-04, Yokohama, Japan, April 2019. 30. I. Kim, Y. Arakawa, and S. Iwamoto, “Design of GaAs-based valley phononic crystals with multiple complete phononic bandgaps”, Photonics West 2019, 10927-61, San Francisco, CS, USA, February 2019. 31. (Invited) Y. Ota, S. Iwamoto and Y. Arakawa, “ Lasing in a topological nanocavity with quantum dot gain”, Photonics West 2019, 10939-8, San Francisco, CS, USA, February 2019. 1. T. Yamaguchi, Y. Ota, R. Katsumi, A. Osada, S. Ishida, Y. Arakawa, and S. Iwamoto, “Observation of light transmission in a GaAs slab valley photonic crystal waveguide with sharp bends”, International workshop ”Variety and universality of bulk-edge correspondence in topological phases: From solid state physics to transdisciplinary concepts” (BEC2018X), P 6, Tokyo, Japan, December 2. Z. Sun, I. Kim and S. Iwamoto, “Design of valley phononic crystal with piezoelectric material”, International workshop ”Variety and universality of bulk-edge correspondence in topological phases: From solid state physics to transdisciplinary concepts” (BEC2018X), P 15, Tokyo, Japan, December 2018. 3. W. Lin, Y. Ota, Y. Arakawa, and S. Iwamoto, “Topological light from optical micro-ring cavity”, International workshop ”Variety and universality of bulk-edge correspondence in topological phases: From solid state physics to transdisciplinary concepts” (BEC2018X), P 5, Tokyo, Japan, December 2018. 4. (Invited) S. Iwamoto, “Photonic crystal nanocavities by topological concept”, International workshop ”Variety and universality of bulk-edge correspondence in topological phases: From solid state physics to transdisciplinary concepts” (BEC2018X), Tokyo, Japan, December 2018. 5. C. F. Fong, Y. Ota, S. Iwamoto, and Y. Arakawa, “Scheme for Conversion between Electronic Spin and Photonic Orbital Angular Momentum using a Photonic Crystal with an Embedded Quantum Dot”, The Excitonics and Polaritonics International Conference, P01, Singapore, Singapore, December 2018. 6. S. Iwamoto, T. Yamaguchi, Y. Ota, and Y. Arakawa, “Light Propagation in Semiconductor Valley Photonic Crystal Slab”, 8th international symposium on photonics and electronics convergence (ISPEC 2018), E-3, Tokyo, Japan, December 2018. 7. Y. Ota, R. Katsumi, K. Watanabe, S. Iwamoto and Y. Arakawa, “Demonstration of a Topological Photonic Crystal Nanocavity Laser with Quantum Dot Gain”, 8th international symposium on photonics and electronics convergence (ISPEC 2018), P-26, Tokyo, Japan, December 2018. 8. A. Osada, Y. Ota, R. Katsumi, T. Yamaguchi, M. Kakuda, S. Iwamoto, and Y. Arakawa, “On-Chip Excitation of Single Quantum Dots using a Silicon Waveguide”, 8th international symposium on photonics and electronics convergence (ISPEC 2018), P-29, Tokyo, Japan, December 2018. 9. W. Zhan, J. –K. Kwoen, K. Watanabe, S. Iwamoto, and Y. Arakawa, “Growth of InAs/GaAs Bilayer Quantum Dots for Long-Wavelength Emission”, 8th international symposium on photonics and electronics convergence (ISPEC 2018), P-30, Tokyo, Japan, December 2018. 10. T. Yamaguchi, Y. Ota, R. Katsumi, S. Ishida, A. Osada, Y. Arakawa, and S. Iwamoto, “Observation of Light Propagation through Sharp Bends in a Slab-type Valley Photonic Crystal Waveguide”, 8th international symposium on photonics and electronics convergence (ISPEC 2018), P-32, Tokyo, Japan, December 2018. 11. H. Yoshikawa, J. –K. Kwoen, T. Doe, M. Izumi, S. Iwamoto, Y. Arakawa, “Characteristics of a Quantum Dot Infrared Photodetector on On-Axis Si (100) Substrate”, 8th international symposium on photonics and electronics convergence (ISPEC 2018), P-33, Tokyo, Japan, December 2018. 12. A. Tamada, Y. Ota, K. Kurama, K. Watanabe, S. Iwamoto, and Y. Arakawa, “Single Plasmon Generation in a Plasmonic Microring Resonator Embedding Self-Assembled Quantum Dots”, 8th international symposium on photonics and electronics convergence (ISPEC 2018), P-36, Tokyo, Japan, December 2018. 13. W. Vos, T. Tajiri, S. Takahashi, C. A. Harteveld, D. A. Grishina, S. Iwamoto and Y. Arakawa, “Reflectivity of Finite 3D GaAs Photonic Band Gap Crystals”, MRS Fall Meeting, EP07.04.06, Boston, USA, November 2018. 14. S. Takahashi, W. Vos, T. Tajiri, S. Iwamoto and Y. Arakawa, “Optical Properties of Direct Versus Inverse 3D Chiral Photonic Crystals”, MRS Fall Meeting, EP07.06.01, Boston, USA, November 2018. 15. (Invited)S. Iwamoto, Y. Ota, K. Kuruma, T. Tajiri, S. Takahashi, R. Katsumi, M. Kakuda, K. Watanabe and Y. Arakawa, “Tailored Disorders in Photonic Crystals for Laser and Cavity QED Applications”, MRS Fall Meeting, EP07.10.01, Boston, USA, November 2018. 16. (Invited) S. Iwamoto, T. Yamaguchi, Y. Ota, and Y. Arakawa, “Valley-Protected Edge State in Semiconductor Photonic Crystal Slab”, Workshop on Innovative Nanoscale Devices and Systems (WINDS2018), D-2, Hawaii, USA, November 2018. 17. A. Tamada, Y. Ota, K. Kuruma, K. Watanabe, S. Iwamoto, and Y. Arakawa, “Observation of single plasmon generation in a self-assembled InAs/GaAs quantum dot embedded in a transfer-printed plasmonic microring resonator”, The 23rd MICROOPTICS CONFERENCE (MOC2018), D-2, Taipei, Taiwan, October 2018. 18. T. Yamaguchi, R. Katsumi, A. Osada, Y. Ota, S. Ishida, S. Iwamoto and Y. Arakawa, “Observation of topologically protected light propagation in a slab-type valley photonic crystal waveguide”, The 23rd MICROOPTICS CONFERENCE (MOC2018), C-2, Taipei, Taiwan, October 2018. 19. (Invited) S. Iwamoto, “Topological Interface States in Semiconductor Photonic Crystals”, France-Japan Bilateral Workshop on Hybrid Quantum Systems, Paris, France, October 2018. 20. K. Kuruma, Y. Ota, M. Kakuda, S. Iwamoto, and Y. Arakawa, “Observation of Strong Coupling between a Single Quantum Dot and an L4/3 Photonic Crystal Nanocavity”, 2018 International Conference on Solid State Devices and Materials (SSDM2018), M-1-03, Tokyo, Japan, September 2018. 21. (Invited) S. Iwamoto, Y. Ota, T. Yamaguchi, and Y. Arakawa, “Topological edge states in semiconductor-based photonic crystals”, China-Japan International Workshop on Quantum Technologies (QTech 2018), Hefei, China, August 2018. 22. (Invited)S. Iwamoto, Y. Ota, R. Katsumi. K. Watanabe, and Y. Arakawa, “Topological Localized State in Photonic Crystal Nanobeam”, Progress In Electromagnetics Research Symposium (PIERS) 2018, Toyama, Japan, August 2018. 23. (Invited) Y. Ota, R. Katsumi, A. Osada, M. Kakuda, S. Iwamoto, and Y. Arakawa, “Chip-integrated Quantum-dot Single Photon Sources Fabricated by Transfer Printing”, Progress In Electromagnetics Research Symposium (PIERS) 2018, Toyama, Japan, August 2018. 24. (Invited) S. Iwamoto, “Advances in quantum dot cavity quantum electrodynamics using photonic crystal nanocavities”, Pacific Rim Conference on Lasers and Electro-Optics (CLEO-PR) 2018,Workshop 5: Photonic Quantum Computing, Hong Kong, July 2018. 25. R. Katsumi, Y. Ota, M. Kakuda, S. Iwamoto, and Y. Arakawa, “Integration of multiple quantum-dot single-photon sources on a photonic waveguide by transfer printing”, 10th Biannual Conference on Quantum Dots (QD 2018), Tronto, Canada, June 2018. 26. Y. Ota, R. Katsumi, K. Watanabe, S. Iwamoto, and Y. Arakawa, “Lasing in a topological photonic crystal nanocavity”, Conference on Lasers and Electro-Optics(CLEO 2018), STh3A.4, San Jose Convention Center, San Jose, California, USA , May 2018. 27. A. Osada, Y. Ota, R. Katsumi, K. Watanabe, S. Iwamoto, and Y. Arakawa, “Quantum-dot nanolasers on Si photonic circuits”, Conference on Lasers and Electro-Optics(CLEO 2018), SF1A.7., San Jose Convention Center, San Jose, California, USA , May 2018. 28. R. Katsumi, Y. Ota, M. Kakuda, S. Iwamoto, and Y. Arakawa, “Quantum dot single photon sources transfer-printed on wire waveguides”, Conference on Lasers and Electro-Optics(CLEO 2018), FM1H.5., San Jose Convention Center, San Jose, California, USA , May 2018. 29. T. Ishida, S. Takahashi, T. Tajiri, K. Watanabe, Y. Ota, S. Iwamoto, and Y. Arakawa, “Three-dimensional photonic crystal nanocavity fabricated by a micro-manipulation technique under optical microscope observation”, International Conference on Nano-photonics and Nano-optoelectronics 2018 (ICNN2018), ICNN8-4, Pacifico Yokohama, Yokohama, Japan, April 2018. 30. Y. Ota, S. Iwamoto, and Y. Arakawa, “Analysis on Giant Light Scattering near a Dirac Point in a Photonic Crystal”, International Conference on Nano-photonics and Nano-optoelectronics 2018 (ICNN2018), ICNN8-3, Pacifico Yokohama, Yokohama, Japan, April 2018. 31. A. Osada, Y. Ota, R. Katsumi, K. Watanabe, S. Iwamoto, and Y. Arakawa, “Quantum-Dot Nanolaser Integrated on a Silicon Waveguide Buried in Silicon Dioxide by Transfer Printing”, International Conference on Nano-photonics and Nano-optoelectronics 2018 (ICNN2018), ICNN4-2, Pacifico Yokohama, Yokohama, Japan, April 2018. 32. H. Yoshikawa, J. Kwoen, T. Doe, M. Izumi, S. Iwamoto, and Y. Arakawa, “Evaluation of inter-sublevel transition of InAs/GaAs quantum dot structures on on-axis Si (100) substrate by photocurrent measurement”, International Conference on Nano-photonics and Nano-optoelectronics 2018 (ICNN2018), ICNN2-2, Pacifico Yokohama, Yokohama, Japan, April 2018. 33. R. Katsumi, Y. Ota, M. Kakuda, S. Iwamoto, and Y. Arakawa, “Transfer-printed Quantum-dot Single Photon Sources for Efficient Waveguide Coupling”, International Conference on Nano-photonics and Nano-optoelectronics 2018 (ICNN2018), ICNN1-2, Pacifico Yokohama, Yokohama , Japan, April 2018. 34. (Invited) Y. Arakawa, S. Iwamoto, J. Tatebayashi, Y. Ota, M. Holmes, and M. Arita, “Quantum dots for advanced light sources”, Utokyo-ANU Workshop on Quantum Control and Electronic Material and Devices,February 28-March1, Tokyo, Japan, February 2018. 35. (Invited) Y. Ota, S. Iwamoto, Y. Arakawa, “A thresholdless quantum-dot photonic-crystal nanocavity laser“, SPIE Photonics West 2018, Novel In-Plane Semiconductor Lasers XVII, paper# 10553-31, The Moscone Center, San Francisco, USA, January 2018. 36. (Invited) S. Iwamoto, T. Tajiri, S. Takahashi, Y. Ota and Y. Arakawa, “Three-dimensional functional photonic crystals made by micromanipulation”, Physics@Veldhoven, FT5.1, ‘NH Conference Center Koningshof, Veldhoven, Netherlands, January 2018. 37. (Invited) S. Iwamoto, I. Kim, Y. Ota, R. Katsumi, and Y. Arakawa, “Topological Localized States in Quasi-1D Photonic and Phononic Crystals”, International workshop “Variety and universality of bulk-edge correspondence in topological phases: From solid state physics to transdisciplinary concepts” (BEC2018) , Tsukuba Univ., Ibaraki, Japan, January 2018. 38. S. Takahashi, S. Oono, S. Iwamoto, Y. Hatsugai, and Y. Arakawa, “Optical Weyl points and topological edge states in semiconductor chiral photonic crystals” International workshop “Variety and universality of bulk-edge correspondence in topological phases: From solid state physics to transdisciplinary concepts” (BEC2018), Tsukuba Univ., Ibaraki, Japan, January 2018. 39. I. Kim, S. Iwamoto, and Y. Arakawa, “Topologically Protected Elastic Waves in One-Dimensional Periodic Structures of Continuous Media”, International workshop ”Variety and universality of bulk-edge correspondence in topological phases: From solid state physics to transdisciplinary concepts” (BEC2018), P-18, Tsukuba, Ibaraki, Japan, January 2018. 1. K. Watanabe, S. Iwamoto, and Y. Arakawa, “Photoluminescence improvements of InAs-GaAs quantum-dot multiple layers by introducing GaAsP layers, “The 7th International Symposium on Photonics and Electronics Convergence (ISPEC2017), P-25, ENEOS Hall, The University of Tokyo, Tokyo, Japan, December 2017. 2. Y. Ota, K. Watanabe , M. Kakuda, S. Iwamoto and Y. Arakawa, “Demonstration of Thresholdless Lasing in a Nanolaser with Quantum Dot Gain”, The 7th International Symposium on Photonics and Electronics Convergence (ISPEC2017), P-17, ENEOS Hall, The University of Tokyo, Tokyo, Japan, December 2017. 3. K. Kuruma, Y. Ota, M. Kakuda, S. Iwamoto, and Y. Arakawa, “Carrier Dynamics in a Quantum Dot-Nanocavity System Resolved via Vacuum Rabi Oscillations”, The 7th International Symposium on Photonics and Electronics Convergence (ISPEC2017), P-8, ENEOS Hall, The University of Tokyo, Tokyo, Japan, December 2017. 4. R. Katsumi, Y. Ota, M. Kakuda, T. Miyazawa, K. Takemoto, S. Iwamoto, and Y. Arakawa, “Observation of optical coupling in a quantum dot-nanocavity-waveguide coupled system fabricated by transfer printing”< The 7th International Symposium on Photonics and Electronics Convergence (ISPEC2017), P-43, ENEOS Hall, The University of Tokyo, Tokyo, Japan, December 2017. 5. M. Kakuda, Y. Ota, K. Kuruma, K. Watanabe, S. Iwamoto and Y. Arakawa, “Improved optical properties of low-density InAs-GaAs quantum dots after the optimization of partial capping and In-flush process”, The 7th International Symposium on Photonics and Electronics Convergence (ISPEC2017), P-27, ENEOS Hall, The University of Tokyo, Tokyo, Japan, December 2017. 6. B. Jang, T. Tsuchizawa, H. Nishi, T. Nakamura, S. Iwamoto, and Y. Arakawa, “Hybrid distributed feedback quantum dot laser with laterally coupled grating”, The 7th International Symposium on Photonics and Electronics Convergence (ISPEC2017), P-15, ENEOS Hall, The University of Tokyo, Tokyo, Japan, December 2017. 7. S. Ishida, S. Kako, K. Oda, S. Iwamoto, and Y. Arakawa, “Enhancement of Biaxial Tensile Strain using Suspended Cross-shaped Microstructures for N-dope Germanium”, The 7th International Symposium on Photonics and Electronics Convergence (ISPEC2017), P-10, ENEOS Hall, The University of Tokyo, Tokyo, Japan, December 2017. 8. Q. H. Vo, Y. Ota, K. Watanabe, T. Kageyama, S. Iwamoto, and Y. Arakawa, “Observation of cavity mode emission from photonic crystal nanocavity with quantum dot active region embedded by MBE regrowth”, The 7th International Symposium on Photonics and Electronics Convergence (ISPEC2017), P-14, ENEOS Hall, The University of Tokyo, Tokyo, Japan, December 2017. 9. Q. H. Vo, Y. Ota, K. Watanabe, T. Kageyama, S. Iwamoto, and Y. Arakawa, “A photonic crystal nanocavity with a quantum dot active region embedded by MBE regrowth”, 22nd Microoptics Conference (MOC’17), E-2, The University of Tokyo, Tokyo, Japan, November 2017. 10. I. Kim, S. Iwamoto, and Y. Arakawa, “Imaging of Topologically Protected Elastic Mode in Silica 1D Phononic Crystal via Photoelastic Effect”, 22nd Microoptics Conference (MOC’17), G-5, The University of Tokyo, Tokyo, Japan, November 2017. 11. I. Kim, S. Iwamoto, and Y. Arakawa, “Observation of Topological Interface State of Elastic Wave in 1D Phononic Crystal”, International Symposium on Hybrid Quantum Systems 2017 (HQS2017), Miyagi-Zao Royal Hotel, Miyagi, Japan, September 2017. 12. T. Tajiri, S. Takahashi, Y. Ota, K. Watanabe, S. Iwamoto and Y. Arakawa, “Nanocavity laser and photonic waveguides integrated in three-dimensional photonic crystals”, International Symposium on Hybrid Quantum Systems 2017 (HQS2017), P16, Miyagi-Zao Royal Hotel, Miyagi, Japan, September 2017. 13. S. Takahashi, T. Tajiri, Y. Ota, J. Tatebayashi, S. Iwamoto, and Y. Arakawa, “Circularly Polarized Spontaneous Emission from Quantum Dots in Three-Dimensional Semiconductor Chiral Photonic Crystals”, International Symposium on Hybrid Quantum Systems 2017 (HSQ2017), TU-A2-1, Miyagi-Zao Royal Hotel, Miyagi, Japan, September 2017. 14. W. Lin, Y. Ota, S. Iwamoto, and Y. Arakawa, “A Numerical Investigation on the Directional Emission from a Quantum Dot Ensemble Embedded in an Asymmetric Optical Waveguide”, International Symposium on Hybrid Quantum Systems 2017 (HSQ2017), P17, Miyagi-Zao Royal Hotel, Miyagi, Japan, September 2017. 15. M. Kakuda, Y. Ota, K. Kuruma, K. Watanabe, S. Iwamoto and Y. Arakawa, “Improved optical properties of low density InAs-GaAs quantum dots by controlling partial capping process”, 2017 International Conference on Solid State Devices and Materials, M-5-02, Sendai International Center, Sendai, Miyagi, Japan, September 2017. 16. R. Katsumi, Y. Ota, K. Kuruma, A. Tamada, M. Kakuda, T. Miyazawa, K. Takemoto, S. Iwamoto and Y. Arakawa, “Fabrication of quantum dot-nanocavity-waveguide coupled systems via transfer printing method”, The 11st International Nano-Optoelectronics Workshop (iNOW2017), WeP8, Institute for Electronics and Information Technology in Tianjin of Tsinghua University, Qian’an, China, August 17. Y. Ota, R. Moriya, N. Yabuki, M. Arai, M. Kakuda, S. Iwamoto, T. Machida and Y. Arakawa, “Coupling atomically-thin black phosphorus to a photonic crystal nanocavity”< The 24th General Congress of International Commission for Optics (ICO-24), Th2E-07, Keio Plaza Hotel, Tokyo, Japan, August 2017. 18. K. Kuruma, Y. Ota, M. Kakuda,S. Iwamoto and Y. Arakawa, “Dephasing in a Quantum Dot-Nanocavity System Resolved via Time-domain Vacuum Rabi Oscillation”, The 24th General Congress of International Commission for Optics (ICO-24), Keio Plaza Hotel, Tokyo, Japan, August 2017. 19. S. Takahashi, S. Oono, S. Iwamoto, Y. Hatsugai, and Y. Arakawa, “Topological Edge States by Resolving Weyl Points in Semiconductor Chiral Woodpile Photonic Crystals”, The 24th General Congress of International Commission for Optics (ICO-24), F1E-02, Keio Plaza Hotel, Tokyo, Japan, August 2017. 20. S. Iwamoto and Y. Arakawa, “Design of Slab-Type Valley Photonic Crystals with Triangular Air Holes”, The 24th General Congress of International Commission for Optics (ICO-24), F1E-03, Keio Plaza Hotel, Tokyo, Japan, August 2017. 21. I. Kim, S. Iwamoto, and Y. Arakawa, “Observation of topological interface state of elastic wave in a silica 1D phononic crystal”, The 8th International Conference on Metamaterials, Photonic Crystals and Plasmonics, Session 3A23-5, Incheo, Korean, July 2017. 22. S. Oono, S. Takahashi, S. Iwamoto, Y. Hatsugai, and Y. Arakawa, “Topological edge modes of light in all dielectric chiral woodpile structures stacked with pi-4 in-plane rotation”, 18th International Conference on Physics of Light-Matter Coupling in Nanostructures (PLMCN18), P5-240, Neubaukirche, Wurzburg, Germany, July 2017. 23. S. Takahashi, S. Oono, S. Iwamoto, Y. Hatsugai, and Y. Arakawa, “Optical Weyl Points below the Light Line in Semiconductor Chiral Woodpile Photonic Crystals”, Conference on Lasers and Electro-Optics(CLEO 2017), JTu5A.42, San Jose Convention Center, San Jose, California, USA, May 2017. 24. T. Tajiri, S. Takahashi, Y. Ota, K. Watanabe, S. Iwamoto and Y. Arakawa, “Guiding of laser light from a nanocavity in a three-dimensional photonic crystal”, Conference on Lasers and Electro-Optics(CLEO 2017), San Jose Convention Center, San Jose, California, USA, May 2017. 25. Y. Ota, M. Kakuda, K. Watanabe, S. Iwamoto and Y. Arakawa, “Thresholdless lasing with quantum dot gain”, Conference on Lasers and Electro-Optics(CLEO 2017), STh4N.3, San Jose Convention Center, San Jose, California, USA , May 2017. 26. R. Katsumi, Y. Ota, K. Kuruma, A. Tamada, M. Kakuda, T. Miyazawa, K. Takemoto, S. Iwamoto and Y. Arakawa, “Quantum dot-nanocavity-waveguide coupled systems fabricated by transfer printing”, International Conference on Nano-photonics and Nano-optoelectronics 2017 (ICNN2017), ICNN1-2, Pacifico Yokohama, Yokohama, Japan, April 2017. 27. S. Iwamoto, Y. Ota, Y.Arakawa, “A Scheme for Generating Optical Vortex from a Quantum Dot using Degenerate Photonic Crystal Nanocavity Modes”, International Conference on Nano-photonics and Nano-optoelectronics 2017 (ICNN2017), ICNN1-4, Pacifico Yokohama, Yokohama , Japan, April 2017. 28. W. Lin, Y. Ota, S. Iwamoto, and Y. Arakawa, “Spin-dependent Directional Emission from a Quantum Dot Ensemble Embedded in an Asymmetric Optical Waveguide”, International Conference on Nano-photonics and Nano-optoelectronics 2017 (ICNN2017), ICNN3-2, p24, Pacifico Yokohama, Yokohama , Japan, April 2017. 1. M. Kakuda, Y. Ota, K. Kuruma, K. Watanabe, S. Iwamoto and Y. Arakawa, “Improving optical properties of low density InAs/GaAs quantum dots by controlling partial capping temperature”< The 6th International Symposium on Photonics and Electronics Convergence (ISPEC 2016) P-40, the University of Tokyo, Tokyo, Japan, November 2016. 2. Y. Ota, D. Takamiya, K. Watanabe , M. Kakuda, S. Iwamoto and Y. Arakawa, 2Large Spontaneous Emission Coupling Factor in a Quantum Dot Nanolaser Driven by Cavity Resonant Excitation”, The 6th International Symposium on Photonics and Electronics Convergence (ISPEC2016), P-32, ENEOS Hall, The University of Tokyo, Tokyo, Japan, November 2016. 3. J. Tatebayashi, Y. Ota, S. Ishida, S. Iwamoto and Y. Arakawa, “Nanowire-quantum dot lasers grown on AlGaAs/GaAs distributed Bragg reflectors”, The 6th symposium on Photonics and Electronics Convergence,P-41, Komaba, Japan, November 2016. 4. S. Iwamoto, Y. Ota, S. Takahashi, T. Tajiri, K. Kuruma, and Y. Arakawa, “Engineering Light-Matter Interactions using Photonic Crystals toward Future Photonics and Electronics Convergence Systems”, The 6th International Symposium on Photonics and Electronics Convergence -Advanced Nanophotonics and Silicon Device Systems- (ISPEC2016), The University of Tokyo, Tokyo, Japan, November 5. S. Ishida, S. Kako, K. Oda, T. Ido, S. Iwamoto, and Y. Arakawa, “Enhancement and control of biaxial tensile strain of suspended germanium cross-shaped microstructures under low temperature”, The 6th International Symposium on Photonics and Electronics Convergence-Advanced Nanophotonics and Silicon Device Systems, P-46, Komaba Research Campus, the University of Tokyo, Tokyo, Japan, November 2016. 6. (Invited) S. Iwamoto, Y. Ota, S. Takahashi, T. Tajiri, K. Kuruma, M. Kakuda and Y. Arakawa, “Quantum-Dot Cavity Quantum Electrodynamics using Photonic Crystals”, German-Japanese Meeting on the Science of Hybrid Quantum Systems, dbb Forum, Berlin, Germany, November 2016. 7. S. Ishida, S. Kako, K. Oda, T. Ido, S. Iwamoto, and Y. Arakawa. “Temperature dependence of the biaxial tensile strain in suspended Ge cross-shaped microstructures”, 29th International Microprocesses and Nanotechnology Conference,10P-7-31, ANA Crowne Plaza Kyoto, Kyoto, Japan, November 2016. 8. (Invited) S. Iwamoto, S. Takahashi, I. Kim, T. Tajiri, Y. Ota, and Y. Arakawa, “Control of Light Polarization using Photonic and Phononic Crystals”, Asia Communications and Photonics Conference (ACP) ACP2016, AS2F.2, Shangri-La HoteWuhan, China, November 2016. 9. J. Kwoen, J. Lee, T. Kageyama, K. Watanabe, S. Iwamoto, and Y. Arakawa, “InAs/GaAs Quantum Dots Grown Directly on Unpatterned Si(100) On-Axis Substrates”, The 6th International Symposium on Photonics and Electronics Convergence-Advanced Nanophotonics and Silicon Device Systems-(ISPEC2016), P38, ENEOS Hall,Meguro, Tokyo, Japan, October 2016. 10. I. Kim, S. Iwamoto, Y. Arakawa, “Observation of enhanced photoelastic modulation using silica phononic crystal cavity”, 21st Microoptics Conference, David Brower Center, Berkeley, CA, USA, October 2016. 11. A.Tamada, Y.Ota, K.Kuruma, J.Ho, K.Watanabe, S.Iwamoto, Y.Arakawa, “Lasing in a plasmonic microring resonator containing Quantum Dots”, International Conference on Solid State Devices and Materials, C-4-04, Tsukuba International Congress Center, Tsukuba, Japan, September 2016. 12. J. Kwoen, K. Watanabe, S. Iwamoto, and Y. Arakawa, “InAs Quantum Dots Grown Directly on Unpatterned Si(100) On-Axis Substrates”, 19th International Conference on Molecular-Beam Epitaxy, Th‐C8, La Corum, Montpelier, France, September 2016. 13. (Invited) S. Iwamoto, S. Takahashi, T. Tajiri, Y. Ota, and Y. Arakawa, 2Control of Quantum Dot Light Emission by Chiral Photonic Crystal Structures”, Progress In Electromagnetics Research Symposium (PIERS), Shanghai International Convention Center, Shanghai, China, August 2016. 14. J. Tatebayashi, S. Kako, J. Ho, Y. Ota, S. Iwamoto and Y. Arakawa, “Growth of InGaAs/GaAs nanowire-quantum dots on AlGaAs/GaAs distributed Bragg reflectors for laser applications”, The 18th International Conference on Crystal Growth and Epitaxy, Mo1-G03-3, Nagoya Convention Center, Nagoya, Japan, August 2016. 15. K. Kuruma, Y. Ota, M. Kakuda,S. Iwamoto and Y. Arakawa, “Time-resolved photoluminescence of a single quantum dot-nanocavity system in strong coupling regime”, International Nano-Optoelectronics Workshop, 5, p 118-119,Technical University of Munich and Wuerzburg Residence, Munich and Wuerzburg,, Germany , August 2016. 16. B. Jang, K. Tanabe, S. Kako, S. Iwamoto, T. Tsuchizawa, H. Nishi, N. Hatori, M. Noguchi, T. Nakamura, K. Takemasa, M. Sugawara, and Y. Arakawa, “Fabrication of a hybrid silicon evanescent laser with quantum dot gain”, International Nano-Optolectronics Workshop, Technical University of Munich and Wuerzburg Residence, Munich and Wuerzburg,, Germany , August 2016. 17. (Invited)Y. Arakawa, Jin Fa Ho, J. Tatebayashi, Y. Ota, S. Iwamoto, B. Jang, and K. Tanabe, “Recent advances in qua ntum dot lasers”, iNOW2016 International Nano Optoelectronics Workshop, Munich and Wuerzburg, Germany July 2016. 18. T. Tajiri, S. Takahashi, S. Iwamoto, and Y. Arakawa, “Large Complete Photonic Band Gap between High Order Bands in a Three-Dimensional Photonic Crystal with Space Group No. 230”, The 12th International Symposium on Photonic and Electromagnetic Crystal structures, Poster session B, University of York, UK, July 2016. 19. (Invited) S. Iwamoto, Y. Ota, S. Takahashi, T. Tajiri, K. Kuruma, M. Kakuda, and Y. Arakawa, “Quantum-Dot Cavity Quantum Electrodynamics using 2D and 3D Photonic Crystal Structures”, The 12th International Symposium on Photonic and Electromagnetic Crystal Structures (PECS). University of York, York, UK, July 2016. 20. D. A. Grishina, T. Tajiri, J. Hofste, O. S. Ojambati, E. Yuce, J. Perez-Vizcaino, S. Iwamoto, and W. L. Vos, “Cavity in a silicon inverse woodpile 3D photoniv band gap crystal”, The 12th International Symposium on Photonic and Electromagnetic Crystal Structures (PECS). University of York, York, UK, July 2016. 21. B. Jang, K. Tanabe, S. Kako, S. Iwamoto, T. Tsuchizawa, H. Nishi, N. Hatori, M. Noguchi, T. Nakamura, K. Takemasa, M. Sugawara, and Y. Arakawa, “Demonstration of a hybrid silicon evanescent quantum dot laser”, 21st Optoelectronics and Communications Conference / International Conference on Photonics in Switching 2016, PD2-2, TOKI MESSE Niigata Convention Center Niigata, Japan, July 22. J. Ho, J. Tatebayashi, S. Sergent, C. F. Fong, S. Iwamoto and Y. Arakawa, “Demonstration of a plasmonic laser using quantum dot gain medium”, Compound Semiconductor Week 2016, TuB3-3, Toyama International Conference Center, Toyama, Japan, June 2016. 23. T. Kageyama, Q. H. Vo, K. Watanabe, K. Takemasa, M. Sugawara, S. Iwamoto and Y. Arakawa, “Large modulation bandwidth (13.1 GHz) of 1.3 ?m-range quantum dot lasers with high dot density and thin barrier layer”, the 2016 Compound Semiconductor Week (CSW2016), MoC3-4, Toyama International Conference Center, Toyama, Toyama, Japan, June 2016. 24. Y. H. Jhang, R. Mochida, K. Tanabe, K. Takemasa, M. Sugawara, S. Iwamoto, and Y. Arakawa, “Direct modulation of InAs/GaAs quantum dot Lasers on silicon at 60 ℃”, 2016 Compound Semiconductor Week, MoC3-3, Toyama International Conference Center, Toyama, Japan, June2016. 25. (Invited) S. Iwamoto, S. Takahashi, T. Tajiri, Y. Ota, and Y. Arakawa, “Chiral Three-Dimensional Photonic Crystals for Controlling Light-Matter Interactions”, CIQM Frontiers in Quantum Materials and Devices Workshop, RIKEN, Wako, Saitama, Japan, June 2016. 26. S. Takahashi, Y. Ota, T. Tajiri, J. Tatebayashi, S. Iwamoto, and Y. Arakawa, “Chiral Cavity Mode Emission from Quantum Dots in a Three-Dimensional Photonic Crystal”, 17th International Conference on Physics of Light-Matter Coupling in Nanostructures (PLMCN17), We8, Todaiji Temple Cultural Center, Nara, Japan, March 2016. 27. Y. Ota, D. Takamiya, K. Watanabe, M. Kakuda, S. Iwamoto, and Y. Arakawa, “Enhanced spontaneous emission coupling factor in a nanolaser by cavity resonant excitation”, 17th International Conference on Physics of Light-Matter Coupling in Nanostructures (PLMCN17), We4, Todaiji Temple Cultural Center, Nara, Japan, March 2016. 28. K. Kamide, S. Iwamoto, and Y. Arakawa, “Self-Pulsation in Coupled Cavity QED Systems”, 17th International Conference on Physics of Light-Matter Coupling in Nanostructures (PLMCN17), We6, Todaiji Temple Cultural Center, Nara, Japan, March 2016. 29. (Invited) S. Iwamoto, Y. Ota, S. Takahashi, K. Kurum, and Y. Arakawa, “Quantum dot cavity quantum electrodynamics with photonic crystals, “SPIE Photonic West 2016, Paper 9757-21, San Francisco, CA, USA, February 2016. 30. (Invited) Y. Ota, K. Kuruma, M. Kakuda, S. Iwamoto and Y. Arakawa, “Cavity quantum electrodynamics using semiconductor quantum dots embedded in photonic crystal nanocavities”, The CEMS International Symposium on Dynamics in Artificial Quantum Systems (DAQS2016), We07, ENEOS hall, Tokyo, Japan, January 2016. 1. (Invited) K. Oda, T. Okumura, J. Kasai, S. Kako, S. Iwamoto, and Y. Arakawa, “Monolithic integrated Ge light emitters fabricated by epitaxial lateral overgrowth”, The Collaborative Conference on Crystal Growth, Energy Materials Nanotechnology, B35, p.76, Hong Kong, China, December 2015. 2. J. Ho, J. Tatebayashi, S. Sergent, C. F. Fong, S. Iwamoto, Y. Arakawa , “Nanowire Plasmonic Laser with Quantum Dots as the Gain Medium”, The 5th International Symposium on Photonics and Electronics Convergence, P-12, Tokyo, Japan, December 2015. 3. H. Jhang, K. Tanabe, B. Y. Jang, S. Iwamoto, and Y. Arakawa, “1.3-μm InAs/GaAs Quantum Dot Lasers Wafer-Bonded onto Silicon Substrates with Improved Bonding Strength”, The 5th International symposium on photonics and electronics convergence (ISPEC) , Tokyo, Japan, December 2015. 4. J. Tatebayashi, S. Kako, J. Ho, Y. Ota, S. Iwamoto and Y. Arakawa, “Room-temperature Lasing in a Single GaAs Nanowire with InGaAs/GaAs Quantum Dots”, The 5th International Symposium on Photonics and Electronics Convergence, P-19, Tokyo, Japan, December 2015. 5. M. Kakuda, K. Kuruma, J. Kwoen, Y. Ota, K. Watanabe, S. Iwamoto and Y. Arakawa, “Effect of capping layer growth conditions on the surface morphology of InAs/GaAs quantum dot wafers”, The 5th International Symposium on Photonics and Electronics Convergence (ISPEC 2015) P-56, the University of Tokyo, Tokyo, Japan, December 2015. 6. K. Oda, T. Okumura, J. Kasai, S. Kako, S. Iwamoto, and Y. Arakawa, “Crystallinity improvement of Ge waveguides fabricated by epitaxial lateral overgrowth and chemical mechanical polishing”, The 5th International Symposium on Photonics and Electronics Convergence, P-7, p.49, Tokyo, Japan, December 2015. 7. (Invited) S. Iwamoto, “Quantum physics in microoptics”, 20th MICROOPTICS CONFERENCE (MOC’15), TU3, FUKUOKA INTERNATIONAL CONGRESS CENTER, Fukuoka, Japan, October 2015. 8. T. Kageyama, Q.H. Vo, K. Watanabe, S. Iwamoto, Y. Arakawa, “Suppression of Vertical Alignment in Multilayer Quantum Dot Structures: The Roles of Dot Density and Spacer Thickness”, 31st North American Molecular Beam Epitaxy Conference, Mo-07, 16, Iberostar Paraiso Beach, Mayan Riviera, Quintana Roo, Mexico, October 2015. 9. S. Ishida, S. Kako, K. Oda, T. Ido, S. Iwamoto, and Y. Arakawa, “Suspended Ge cross-shaped microstructures for enhancing biaxial tensile strain”, International Conference on Solid State Devices and Materials, A-5-2, Sapporo Convention Center, Sapporo, Hokkaido, Japan, September 2015. 10. T. Tajiri, S. Takahashi, J. Tatebayashi, S. Iwamoto, and Y. Arakawa, “Plate-Insertion Stacking Method for Three-Dimensional Photonic Crystal Fabrication”, 2015 International Conference on SolidState Devices and Materials, A-1-5, Sapporo Convention Center, Hokkaido, Japan, September 2015. 11. K. Oda, T. Okumura, J. Kasai, S. Kako, S. Iwamoto, and Y. Arakawa, “Crystallinity improvement of Ge waveguides fabricated by epitaxial lateral overgrowth and chemical mechanical polishing”, 2015 International Conference on Solid State Devices and Materials (SSDM2015), A-5-3, Sapporo Convention Center, Hokkaido, Japan, September 2015. 12. S. Takahashi, Y. Ota, T. Tajiri, J. Tatebayashi, S. Iwamoto, Y. Arakawa, “Spontaneous emission of circularly polarized light from quantum dots in three-dimensional chiral photonic crystals”, Energy Materials Nanotechnology Spain Meeting, C03, San Sebastian, Spain, September 2015. 13. S. Takahashi, T. Tajiri, Y. Ota, J. Tatebayashi, S. Iwamoto, Y. Arakawa, “Enhanced Optical Activity in Three-Dimensional Chiral Photonic Crystals”, International Nano Optoelectronics Workshop, ThP3, Tokyo, Japan, August 2015. 14. J. Ho, J. Tatebayashi, S. Sergent, C. F. Fong, S. Iwamoto, Y. Arakawa,”Stimulated emission from plasmonic modes in GaAs nanowires containing quantum dots”, International Nano-Optoelectronics Workshop, TuP12, Tokyo, Japan, August 2015. 15. (Invited) Y. Arakawa, S. Iwamoto, M. Holmes Jun Tatebayashi, M. Arita, K. Choi, and J. Ho, “Advances in Nanowire Quantum Dots International Nano Optoelectronics Workshop, Tokyo, Japan, August 16. J. Ho, J. Tatebayashi, S. Sergent, C. F. Fong, S. Iwamoto, Y. Arakawa,”Observation of stimulated emission from GaAs plasmonic nanowires containing InGaAs quantum dots”, 17th International Conference on Modulated Semiconductor Structures, Tu-B4-3, Sendai, Japan, July 2015. 17. S. Takahashi, Y. Ota, T. Tajiri, J. Tatebayashi, S. Iwamoto, Y. Arakawa, “Controlled Radiative Life Time of Circularly Polarized Emission from Quantum Dots by Three-Dimensional Chiral Photonic Crystals”, 17th International Conference on Modulated Semiconductor Structures, Tu-B1-2, Sendai, Miyagi, Japan, July 2015. 18. J. Kwoen, K.i Watanabe, S. Iwamoto, and Y. Arakawa, “Growth of S-K InAs Quantum Dots on Side Facets of GaAs Nanowire on Si (100) Substrate”, 17th International Conference on Modulated Semiconductor Structures, Tu-B4-5, Sendai International Center, Sendai, Miyagi, Japan, July 2015. 19. C. F. Fong, Y. Ota, E. Harbord, S. Iwamoto, and Y. Arakawa, “Dynamic Nuclear Spin Polarization by P-shell Carriers in Single Quantum Dots at Zero External Magnetic Field”, 17th International Conference on Modulated Semiconductor Structures, Sendai, Miyagi, Japan, July 2015. 20. T. Kageyama, K. Watanabe, Q. H. Vo, K. Takemasa, M. Sugawara, S. Iwamoto and Y. Arakawa, “Strain-Compensated InAs/GaAs Quantum Dot Lasers Grown by MBE”, 2015 Compound Semiconductor Week, 42nd International Symposium on Compound Semiconductors, We2O7.1, p.113, University of California Santa Barbara, Santa Barbara, USA, July 2015. 21. (Invited) S. Iwamoto, Y. Ota, S. Takahashi, and Y. Arakawa, ”Advances in Quantum Dot Cavity Quantum Electrodynamics using Photonic Crystal Structures”, Progress In Electromagnetics Research Symposium (PIERS), FocusSession.SC3: Solid-state Quantum Photonics,Top Hotel, Prague, Czech Republic, July 2015. 22. Q. H. Vo, K. Watanabe, T. Kageyama, S. Iwamoto, and Y. Arakawa, “Self-assembled formation of GaAsP nano-apertures above InAs/GaAs quantum dots by the thermal diffusion of phosphorus”, Compound Semiconductor Week, Tu3GN4.1, University of California Santa Barbara, Santa Barbara, California, the United States, June 2015. 23. S. Takahashi, T. Tajiri, Y. Ota, J. Tatebayashi, S. Iwamoto, Y. Arakawa, “Circularly Polarized Light Emission of Quantum Dots at the Band Edge of Three-Dimensional Chiral Photonic Crystals”, CLEO: 2015, FF1C.2, San Jose, US, May 2015. 24. Y. Ota, R. Ohta, N. Kumagai, S. Iwamoto, Y. Arakawa, “Single Emitter Vacuum Rabi Splitting Measured Through Direct Free Space Spontaneous Emission”, The Conference on Lasers and Electro-Optics and The Quantum Electronics and Laser Science Conference (CLEO/QELS 2015), FF1B.5, SanJose convention center, San Jose, California, USA, May 2015. 25. Y. H. Jhang, K. Tanabe, S. Iwamoto, Y. Arakawa, “1.3-μm InAs/GaAs Quantum Dot Lasers on Silicon-on-Insulator Substrates by Metal-Stripe Bonding”, Conference on Lasers and Electro-Optics (CLEO): Science and Innovations, SW3F.4, San Jose, CA, USA, May 2015. 26. M. Kuroki, S. Kako, S. Ishida, K. Oda, T. Ido, S. Iwamoto, and Y. Arakawa, “Germanium Photonic Crystal Nanobeam Cavity with Q > 1,300”, Conference on Lasers and Electro-Optics, San Jose Convention Center, San Jose, California, USA, May 2015. 27. (Invited)J. Tatebayashi, S. Kako, J. Ho, Y. Ota, S. Iwamoto and Y. Arakawa, “Room-temperature lasing in GaAs nanowires embedding multi-stacked InGaAs/GaAs quantum dots”, Conference on Lasers and Electro-Optics:2015, SM2F.1, San Jose, California, USA , May 2015. 1. S. Iwamoto and Y. Arakawa, “Control of light emission using photonic crystals”, IITH-Japan Symposium in Nanotechnology and Nanoscience, Indian Institute of Technology, Hyderabad, India, December 2. S. Iwamoto, K. Tanabe, and Y. Arakawa, “Developments of innovative light sources for photonic electronic convergence systems”, The 4th International Symposium on Photonics and Electronics Convergence – Advanced Nanophotonics and Silicon Device Systems (ISPEC 2014) F-4, the University of Tokyo, Tokyo, Japan, November 2014. 3. T. Rae, K. Tanabe, S. Iwamoto, and Y. Arakawa, “Investigation of Band-Filling Effect at Low Temperatures in InAs/GaAs Quantum Dot Lasers with p-Type Doping”, ISPEC 2014, P-41, p.87, The University of Tokyo, Tokyo, Japan, November 2014. 4. J. Ho, J. Tatebayashi, S. Sergent, C. F. Fong, S. Iwamoto, and Y. Arakawa, “High temperature plasmonic lasing using GaAs-AlGaAs core-shell nanowires dispersed on a silver thin film”, The 4th International Symposium on Photonics and Electronics Convergence – Advanced Nanophotonics and Silicon Device Systems (ISPEC 2014 ) P-11 , the University of Tokyo, Tokyo, Japan, November 2014. 5. Y. H. Hsiao, W. Wang, S. Iwamoto, and Y. Arakawa, “Spontaneous and Stimulated Raman Scattering in Silica-Cladded Silicon Hole-Shape-Modulated Photonic Crystal Waveguides ”, The 4th International Symposium on Photonics and Electronics Convergence – Advanced Nanophotonics and Silicon Device Systems (ISPEC 2014) P-13 , the University of Tokyo, Tokyo, Japan, November 2014. 6. M. Kuroki, S. Kako, S. Ishida, K. Oda, T. Ido, S. Iwamoto and Y. Arakawa, “Fabrication and optical characterization of germanium L3-type photonic crystal nanobeam cavity”, The 4th International Symposium on Photonics and Electronics Convergence – Advanced Nanophotonics and Silicon Device Systems (ISPEC 2014) P-24, the University of Tokyo, Tokyo, Japan, November 2014. 7. Y. H. Jhang, K. Tanabe, S. Iwamoto, and Y. Arakawa, “Fabrication of 1.3-μm InAs/GaAs quantum dot lasers metal-stripe-bonded onto silicon-on-insulator substrates”, The 4th International Symposium on Photonics and Electronics Convergence – Advanced Nanophotonics and Silicon Device Systems (ISPEC 2014) P-27, the University of Tokyo, Tokyo, Japan, November 2014. 8. K. Tanabe, N. Hatori, M. Ishizaka, S. Kako, S. Iwamoto, T. Nakamura and Y. Arakawa,“Towards the Realization of QD Evanescent Hybrid Si Lasers”, The 4th International Symposium on Photonics and Electronics Convergence – Advanced Nanophotonics and Silicon Device Systems (ISPEC 2014) P-44, the University of Tokyo, Tokyo, Japan, November 2014. 9. J. Kwoen, K. Watanabe, S. Iwamoto and Y. Arakawa, “MBE Growth of S-K InAs Quantum Dots on GaAs Nanowire on Si (100) Substrate”, The 4th International Symposium on Photonics and Electronics Convergence – Advanced Nanophotonics and Silicon Device Systems (ISPEC 2014) P-55, the University of Tokyo, Tokyo, Japan, November 2014. 10. M. Kakuda, J. Kwoen, Y. Ota, K. Watanabe, S. Iwamoto and Y. Arakawa, “Low density InAs/GaAs quantum dots grown with high temperature partial capping under As2 irradiation”, The 4th International Symposium on Photonics and Electronics Convergence – Advanced Nanophotonics and Silicon Device Systems (ISPEC 2014) P-56, the University of Tokyo, Tokyo, Japan, November 2014. 11. J. Tatebayashi, S. Kako, J. Ho, Y. Ota, S. Iwamoto, and Y. Arakawa, “Lasing oscillation in a single GaAs nanowire cavity with multi-stacked InGaAs/GaAs quantum dots”, The 4th International Symposium on Photonics and Electronics Convergence – Advanced Nanophotonics and Silicon Device Systems (ISPEC 2014) P-61, the University of Tokyo, Tokyo, Japan, November 2014. 12. (Invited) Y. Ota, K. Watanabe, S. Iwamoto and Y. Arakawa, “Nonlinear optics in photonic crystal nanocavity quantum dot lasers”, SPIE Photonics Asia, 9277-19, Beijing, China, October 2014. 13. J. Ho, J. Tatebayashi, S. Sergent, C. F. Fong, S. Iwamoto and Y. Arakawa, “Demonstration of GaAs based Plasmonic Laser”, International Conference on Solid State Device and Materials 2014, B-3-1, Tsukuba, Japan, September 2014. 14. J. Tatebayashi, S. Kako, J. F. Ho, S. Iwamoto, and Y. Arakawa, “Lasing oscillation in multi-stacked InGaAs/GaAs quantum dots with a single GaAs nanowire cavity”, The 46th International Conference on Solid State Devices and Materials 2014, P-6-2, Tsukuba, Japan, September 2014. 15. K. Kamide, S. Iwamoto, and Y. Arakawa, “Impact of dark excitons on fast single‐photon emissions in quantum dot‐nanocavity systems”, The 12th International Conference on Nonlinear Optics and Excitation Kinetics in Semiconductors (NOEKS12), Tu3.2, p.48, Bremen, Germany, September 2014. 16. J. Kwoen, M. Kakuda, Y. Ota, K. Watanabe, S. Iwamoto and Y. Arakawa, “Proposal of a two-temperature growth technique for enhanced uniformity of InAs/GaAs quantum dots”, The International Conference on Molecular Beam Epitaxy (MBE 2014), Flagstaff, Arizona, USA, September 2014. 17. (Invited) Y. Ota, S. Iwamoto and Y. Arakawa, “Quantum dot cavity quantum electrodynamics using a photonic crystal nanocavity with high Q and small V”, Progress In Electromagnetics Research Symposium, 1P2b, Langham Place Guangzhou, Guangzhou, China, September 2014. 18. (Invited)Y. Arakawa, Y. Ota, K. Kamide, and S. Iwamoto, “Quantum Dot Laser in the Cavity Quantum Electrodynamical Regime”, 24th IEEE International Semiconductor Laser Conference, WP1, Meliá Palas Atenea, Palma de Mallorca, Spain, September 2014. 19. (Invited) S. Iwamoto, Y. Ota, and Y. Arakawa, “Frequency Conversion in Quantum-Dot Photonic-Crystal Nanocavity Laser”, The 6th IEEE International Nanoelectronics Conference, Sapporo, Hokkaido, Japan, July 2014. 20. (Invited) S. Iwamoto, Y. Ota, H. Takagi, D. Takamiya, and Y. Arakawa, “Cavity Quantum Electrodynamics in Quantum Dot-Photonic Crystal Nanocavity Coupled System with Large g/k”, CLEO 2014 (the 2014 Conference on Lasers and Electro-Optics), SM3M.4, San Jose, CA, USA, June 2014. 21. Y. Ota, K. Watanabe, S. Iwamoto and Y. Arakawa, “Measurement of the second order coherence of a nanolaser through its intra-cavity second harmonic generation”, The Conference on Lasers and Electro-Optics and The Quantum Electronics and Laser Science Conference (CLEO/QELS 2014), SW1G.2, San Jose, California, USA, June 2014. 22. (Invited) S. Iwamoto and Y. Arakawa, “Tailoring Optical Properties of Materials by Engineering the “Environment””, The 16th Kavli Future Symposium: Nanomaterials Science in Asian Perspective, Seoul National University, Seoul, Korea, June 2014. 23. T. Tajiri, S. Takahashi, Y. Ota, J. Tatebayashi, S. Iwamoto and Y. Arakawa, “High-Q Three-Dimensional Photonic Crystal Nanocavity with a -Layered Diamond Structure”, The 11th International Symposium on Photonic and Electromagnetic Crystal Structures, P-35, Fudan University, Shanghai, China, May 2014. 24. S. Takahashi, T. Tajiri, Y. Ota, J. Tatebayashi, S. Iwamoto, and Y. Arakawa, “Circular Dichroism in a Three-Dimensional Semiconductor Chiral Photonic Crystal”, The 11th International Symposium on Photonic and Electromagnetic Crystal Structures, O-18, Fudan University, Shanghai, China, May 2014. 25. (Invited) Y. Ota, K. Watanabe, S. Iwamoto and Y. Arakawa, “Self-frequency conversion in photonic crystal nanocavity quantum dot lasers”, SPIE Photonics West Conference 2014, 9002-33, California, USA, February 2014. 1. K. Oda, T. Okumura, K. Tani, T. Ido, S. Kako, S. Iwamoto, and Y. Arakawa, “Properties of Ge Waveguides Fabricated by Low Temperature Selective Epitaxial Growth and Rapid Thermal Annealing”, The 3rd International Symposium on Photonics and Electronics Convergence, Tokyo, Japan, November 2013. 2. J. Fu, A. Tandaechanurat, S. Iwamoto and Y. Arakawa, “High-Q nanocavity design with vertically mirror-symmetric three-dimensional woodpile photonic crystal”, The 3nd International Symposium on Photonics and Electronics Convergence -Advanced Nanophotonics and Silicon Device Systems, Tokyo, Japan, November 2013. 3. K. Kamide, S. Kako, S. Iwamoto, and Y. Arakawa, “Conditions for Polariton Condensation and Photon Lasing in Quantum Dot Systems”, The 3rd International Symposium on Photonics and Electronics Convergence -Advanced Nanophotonics and Silicon Device Systems-, P-16, Komaba Research Campus, Univ. Tokyo, Tokyo, Japan, November 2013. 4. R. Ohta, Y. Ota, S. Iwamoto, and Y. Arakawa, “Frequency Shift in Q-factor Control of Photonic Crystal Nanobeam Cavity”, International Symposium on Photonics and Electronics Convergence,P35, Tokyo, Japan, November 2013. 5. S. Takahashi, A. Tandaechanurat, R. Igusa, Y. Ota, J. Tatebayashi, S. Iwamoto, and Y. Arakawa, “Manipulation of circular polarization in a three-dimensional chiral photonic crystal”, The 3rd International Symposium on Photonics and Electronics Convergence, Tokyo, Japan, November 2013. 6. Y. Ota, D. Takamiya, R. Ohta, N. Kumagai, S. Ishida, S. Iwamoto and Y. Arakawa, “Strong coupling between a single semiconductor quantum dot and a high-Q H0 photonic crystal nanocavity”, The 3rd International Symposium on Photonics and Electronics Convergence, P-34, Ito International Research Center, Tokyo, Japan, November 2013. 7. S. Iwamoto, S. Nakayama, S. Ishida and Y. Arakawa, “Silicon Light Emitting Diodes with Photonic Crystal Nanocavities”, The 3rd International Symposium on Photonics and Electronics Convergence, F-9, Tokyo, Japan, November 2013. 8. R. Ohta, Y. Ota, S. Iwamoto, and Y. Arakawa, “Wavelength shift in MEMS-integrated photonic crystal nanobeam cavity ~Effects of adjacent waveguide width~”, International Symposium on Nanoscale Transport and Technology, PWe15, NTT BRL Atsugi Kanagawa Japan, November 2013. 9. (Invited) Y. Ota, K. Watanabe, S. Iwamoto and Y. Arakawa, “Nonlinear photonic crystal nanocavities containing quantum dots”, 2013 EMN open access week, B12, Homeland Hotel, Chengdu, China, October 2013. 10. J. F. Ho, S. Iwamoto, Y. Arakawa, “Design of efficient surface plasmon polariton modulator using Graphene”, MicroOptics Conference, Tokyo, Japan, October 2013. 11. T. Tajiri, S. Takahashi, A. Tandaechanurat, S. Iwamoto and Y. Arakawa, “Design of a three-dimensional photonic crystal nanocavity based on a -layered diamond structure”, International Conference on Solid State Devices and Materials, K-5-5, Hilton Fukuoka Sea Hawk, Fukuoka, Japan, September 2013. 12. T. Yamamoto, Y. Ota, S. Ishida, N. Kumagai, S. Iwamoto, Y. Arakawa, “Observation of enhanced exciton decay rate of single InAs quantum dots in nanoscale metal-semiconductor-metal plasmonic structures”, International Conference on Solid State Devices and Materials, K-5-2, Hilton Fukuoka Sea Hawk, Fukuoka, Japan, September 2013. 13. D. Takamiya, Y. Ota, R. Ohta, H. Takagi, N. Kumagai, S. Ishida, S. Iwamoto, Y. Arakawa, “Large Vacuum Rabi Splitting in an H0 Photonic Crystal Nanocavity-Quantum Dot System”, The 10th Conference on Lasers and Electro-Optics Pacific Rim (CLEO-PR 2013), MI2-2, ,Kyoto International Conference Center,Kyoto Kyoto, Japan, July 2013. 14. (Invited) S. Iwamoto, Y. Ota, H. Takagi, N. Kumagai and Y. Arakawa, “Nonlinear Photonics in Single Quantum Dot-Photonic Crystal Nanocavity Couples Systems”, The 10th Conference on Lasers and Electro-Optics Pacific Rim (CLEO-PR 2013), MI2-1, Kyoto International Conference Center, Kyoto, Kyoto, Japan, July 2013. 15. Y. Ota, K. Watanabe, S. Iwamoto and Y. Arakawa, “Self-frequency Summing in Photonic Crystal Nanocavity Quantum Dot Lasers”, The 10th Conference on Lasers and Electro-Optics Pacific Rim (CLEO-PR 2013), MI2-7, Kyoto International Conference Center, Kyoto, Kyoto, Japan, July 2013. 16. J. Kwoen, K. Watanabe, Y. Ota, S. Iwamoto, and Y. Arakawa , “InAs Quantum Dot Embedded GaAs Nanowire Structure on Silicon for Single Photon Emission”, 16th International Conference on Modulated Semiconductor Structures(MSS-16), TuP36, Wroclaw University of Technology, Wroclaw, Poland, July 2013. 17. S. Takahashi, A. Tandaechanurat, R. Igusa, Y. Ota, J. Tatebayashi, S. Iwamoto and Y. Arakawa, “Manipulation of circular polarization in a three-dimensional chiral photonic crystal”, 16th International Conference on Modulated Semiconductor Structures (MSS-16), TuP27, Wroclaw University of Technology, Wroclaw, Poland, July 2013. 18. Y. H. Hsiao, S. Iwamoto, and Y. Arakawa, “Design of Slow-Light Grating Waveguides for Silicon Raman Amplifier”, The 10th Conference on Lasers and Electro-Optics Pacific Rim (CLEO-PR 2013),WI2-5,Kyoto International Conference Center, Kyoto, Japan, July 2013. 19. Y. Ota, K. Watanabe, S. Iwamoto and Y. Arakawa , “Multi-color visible light generation by self-frequency doubling in photonic crystal nanocavity quantum dot lasers”, The Conference on Lasers and Electro-Optics and The Quantum Electronics and Laser Science Conference (CLEO/QELS 2013), CThF2, San Jose convention center, San Jose, California, USA, June 2013. 20. J. Kwoen, K. Watanabe, Y. Ota, S. Iwamoto, and Y. Arakawa, “High-quality InAs quantum dot embedded in GaAs nanowire structures on silicon substrate”, The 40th International Symposium on Compound Semiconductors, TuA4-4, Kobe Convention Center, Kobe, Hyogo, Japan, May 2013. 21. S. Sergent, M. Arita, S. Kako, K. Tanabe, S. Iwamoto, and Y. Arakawa, “High-Q (>6900) AlN 1D ladder-structure photonic crystal nanocavity fabricated by layer transfer”, The 40th International Symposium on Compound Semiconductors, TuA1-1, Kobe Convention Center, Kobe, Hyogo, Japan, May 2013. 22. J. Tatebayashi, Y. Ota, K. Tanabe, M. Nishioka, S. Iwamoto and Y. Arakawa, “Formation of highly-uniform multi-stacked InGaAs/GaAs quantum-dots-in-nanowires for photovoltaic applications”, The 40th International Symposium on Compound Semiconductors, WeB1-1, Kobe Convention Center, Kobe, Hyogo, Japan, May 2013. 23. (Invited) S. Iwamoto, H. Takagi, R. Ohta, Y. Ota, and Y. Arakawa, “Control of Quantum Dot-Photonic Crystal Nanocavity Coupled Systems”, ICNP/AOM 2013, The 7th International Conference on Nanophotonics(ICNP)/The 3rd Conference on Advanced In Optoelectronics and Micro/Nano Optics (AOM) , (IN)PD009, 香港理工大学,香港, May 2013. 1. J. Fu, A. Tandaechanurat, S. Iwamoto and Y. Arakawa, ”Design of large-bandwidth single-mode operation waveguides in silicon woodpile structure using two guided modes”, The 2nd International Symposium on Photonics and Electronics Convergence -Advanced Nanophotonics and Silicon Device Systems- Tokyo, Japan, December 2012. (2012). 2. D. Cao, A. Tandaechanurat, S. Nakayama, S. Ishida, S. Iwamoto, and Y. Arakawa, “Lasing oscillation in silicon-based three-dimensional photonic crystal nanocavity embedding InAs quantum dots”, The 2nd international symposium on photonic and electroics convergence advanced nanophotonics and silicon device system ISPEC 2012, Tokyo, Japan P-33, December 2012. (2012). 3. J. F. Ho, S. Sergent, A. Enderlin, S. Iwamoto and Y. Arakawa, “Modification of Epitaxial GaAs Quantum Dot Emission by Gold Nanodisk Chain Waveguides”, The 2nd International Symposium on Photonics and Electronics Convergence, Tokyo, Japan, December 2012. 4. S. Iwamoto, S. Nakayama, D. Cao, A. Tandaechanurat, S. Ishida and Y. Arakawa, “Silicon-based Nano Light Sources Using Photonic Crystals Structures”, The 2nd International Symposium on Photonics and Electronics Convergence-Advanced Nanophotonics and Silicon Device Systems-, F-4, Tokyo, Japan, December 2012. (2012). 5. S. Iwamoto, S. Nakayama, S. Ishida, and Y. Arakawa, “Silicon Light Emitting Diodes with Photonic Crystal Structures”, Joint Workshop on Advances in Nanophotonics, Wuerzburg, Germany, November 6. S. Sergent, M. Arita, S. Kako, K. Tanabe, S. Iwamoto, and Y. Arakawa, “New method for the fabrication of high-Q (>6300) 1D photonic crystal nanobeam cavities in GaN/AlN quantum dot stacks grown on SiC”, International Workshop on Nitride Semiconductors, OD2-2, p 318, Sapporo, Japan, October 2012. 7. (Invited) M. Nomura, S. Iwamoto, and Y. Arakawa, “Single quantum dot-photonic crystal nanocavity laser”, 3rd International Conference on Photonics 2012, L-D2-AM2-4, Penang, Malaysia, October 8. (Invited) S. Iwamoto, “Information-Processing Photonics for Yottabyte-Scale Information Era ~Necessity for Developments and its Technology Roadmap~”, INTERNATIONAL SYMPOSIUM ON OPTICAL MEMORY 2012, Mo-E-03, Tokyo, Japan, October 2012. (2012). 9. M. Arita, S. Kako, S. Iwamoto, Y. Arakawa, “Fabrication of AlGaN two-dimensional photonic crystal nanocavities by selective thermal decomposition of GaN”, International Workshop on Nitride Semiconductors 2012, TuP-OD-34, p.261, Sapporo, Japan, October 2012. 10. (Invited) S. Iwamoto, M. Nomura, A. Tandaechanurat, D. Cao, and Y. Arakawa, “2D and 3D Photonic Crystal Nanocavity Lasers with Quantum Dot Gain”, IEEE PHOTONICS CONFERENCE 2012 (IPC 2012), Burlingame, CA, USA, September 2012. 11. Y. H. Hsiao, S. Iwamoto, and Y. Arakawa, “Design of silicon photonic crystal waveguides for high gain Raman amplification using two symmetric TE-like slow-light modes”, 2012 International Conference on Solid State Devices and Materials, 534-535, Kyoto Japan, September 2012. 12. R. Ohta, Y. Ota, H. Takagi, N. Kumagai, K. Tanabe, S. Ishida, S. Iwamoto and Y. Arakawa, “Electro-mechanical control of Q factor of photonic crystal nanobeam cavities”, 2012 International Conference on Solid State Devices and Materials, A-5-3, Kyoto Japan, September 2012. 13. Y. Ota, K. Watanabe, S. Iwamoto and Y. Arakawa, “Intra-cavity frequency doubling in photonic crystal nanocavity quantum dot lasers”, 2012 IEEE Photonics Conference, WW 3, California, USA, September 2012. 14. N. Kumagai, S. Ohkouchi, K. Watanabe, S. Iwamoto, and Y. Arakawa, “Rim structures on non-elongated InAs quantum dots grown by partial cap and annealing process at low temperature”, The 17th International Conference on Molecular Beam Epitaxy (MBE2012), Tup-34, Nara, Japan, September 2012. 15. S. Ohkouchi, N. Kumagai, K. Watanabe, S. Iwamoto, and Y. Arakawa, “Shape evolution of low density InAs quantum dots in the partial capping process by using As2 source”, 17th International Conference on Molecular Beam Epitaxy (MBE 2012), TuA3-2, Nara, Japan, September 2012. 16. J Kwoen, K. Watanabe, S. Iwamoto, Y. Arakawa, “Non-VLS growth of GaAs nanowire by a Ga pre-deposition technique”, The 17th International Conference on Molecular Beam Epitaxy, (MBE 2012), WeA-2-6, Nara Prefectural New Public Hall, Nara, Japan, September 2012. 17. E. Harbord, Y. Ota, M. Shirane, Y. Igarashi, N. Kumagai, S. Ohkouchi, S.Iwamoto, S. Yorozu, Y. Arakawa, “Spin pumping InAs/GaAs QDs: controlling linear and circular polarization”, 2012 International Conference on Solid State Devices and Materials, K-8-2, Kyoto Japan, September 2012. 18. M. Nomura, S. Iwamoto, and Y. Arakawa, “Quantum dot-photonic crystal nanocavity laser”, DYCE International Workshop, Thu-6, Hokkaido, Japan, August 2012. 19. A. Enderlin, Y. Ota R. Ohta, N. Kumagai, S. Ishida S. Iwamoto, Y. Arakawa, “Efficient Light Extraction from a Quantum Dot in a Photonic Crystal Nanobeam Cavity through a Waveguide”, 10th International Symposium on Photonic and Electromagnetic Crystal Structures,PECS-X-013,Santa Fe, New Mexico, USA, June 2012. 20. H. Takagi, Y. Ota, K. Watanabe, S. Ishida, S. Iwamoto, Y. Arakawa, “Optical Stark Shift of a Quantum Dot in a Nanocavity: Towards Attojoule Switching in the Telecom Band”, PECS-X, 0121, Santa Fe, USA, June 2012. 21. S. Sergent, M. Arita, S. Kako, S. Iwamoto, and Y. Arakawa, “UV-Range High-Q (Q > 5000) AlN Photonic Crystal Nanobeam Cavities Embedding GaN Quantum Dots”, 10th International Symposium on Photonic and Electromagnetic Crystal Structures,PECS-X-013, pp 258-259, Santa Fe, New Mexico, USA, June 2012. 22. J. Tatebayashi, D. Karunathillake, Y. Ota, S. Ishida, M. Nishioka, S. Iwamoto, and Y. Arakawa, “Formation and optical properties of multi-stack InAs/GaAs quantum dots embedded in GaAs nanowires grown by selective metalorganic chemical vapor deposition”, 16th International Conference on Metal Organic Vapor Phase Epitaxy, MoB3-2 , Busan, Korea, May 2012. 23. H. Takagi, Y. Ota, N. Kumagai, S. Ishida, S. Iwamoto, Y. Arakawa, “Nanocavity-enhanced Optical Stark Shift in a Single Quantum Dot under Extremely Low Excitation Power”, CLEO 2012, QTu3D.5, San Jose, USA, May 2012. 24. J. Tatebayashi, Y. Ota, D. Karunathillake, S. Ishida, M. Nishioka, S. Iwamoto, and Y. Arakawa, “Site-controlled InAs/GaAs quantum-dot-in-nanowires for non-classical photon emitters”, 7th International Conference on Quantum Dots (QD 2012), 10-3, Santa Fe, USA, May 2012. 25. J. Tatebayashi, Y. Ota, S. Ishida, M. Nishioka, S. Iwamoto and Y. Arakawa, “Formation and optical properties of multi-stack InAs/GaAs quantum dots embedded in GaAs nanowires grown by selective metalorganic chemical vapor deposition”, 2012 Materials Research Society Spring Meeting, AA11.2. San Francisco, USA, April 2012. 26. A. Tandaechanurat, N. Hauke, T. Zabel, T. Reichert, H. Takagi, M. Kaniber, S. Iwamoto, D. Bougeard, J. J. Finley, G. Abstreiter, and Y. Arakawa , “Silicon-based three-dimensional photonic crystal with enhanced spontaneous emission”, SPIE Photonics West 2012 Conference, 8266-28, California, USA, January 2012. 27. (Invited) Y. Ota, S. Iwamoto, N. Kumagai and Y. Arakawa, “Spontaneous two photon emission from a single quantum dot coupled to photonic crystal nanocavity”, SPIE Photonics West 2012 Conference, 8266-28, California, USA, January 2012. 1. S. Nakayama, S. Iwamoto, S. Kako, S. Ishida and Y. Arakawa, ”Silicon LED with one-dimensional photonic crystal nanocavity”, 1st International Symposium on Photonics and Electronics Convergence -Advanced Nanophotonics and Silicon Device Systems (ISPEC 2011) P-44, the University of Tokyo, Tokyo, November 2011. 2. (Invited) S. Iwamoto and Y. Arakawa, “Enhanced light emission from silicon with photonic nanostructures”, International Photonics and OptoElectronics Meetings, Wuhan, China, November 2011. 3. S. Iwamoto, S. Nakayama, S. Ishida, and Y. Arakawa, “Enhancement of Light Emission from Silicon by Photonic Crystals Structures”, The 1st International Symposium on Photonics and Electronics Convergence, E-2, Tokyo, Japan, November 2011. 4. D. S. Cao, A. Tandaechanurat, S. Nakayama, N. Hauke, T. Zabel, S. Ishida, S. Iwamoto, J. J. Finley, G. Abstreiter, and Y. Arakawa, “Fabrication of High-Q (~22,000) Silicon-Based Three-Dimensional Photonic Crystal Nanocavity and its Lasing Oscillation with InAs Quantum-Dot Gain”, The 1st international symposium on photonics and electronics convergence advanced nanophotonics and silicon device systems, Tokyo, November 2011. 5. N. Kumagai, S. Ohkouchi, K. Watanabe, S. Iwamoto, and Y. Arakawa, “MBE growth of high quality InAs quantum dots for applications in quantum information technology”,1st Annual World Congress of Nano-S&T, Track 2-5, Molecular Nanoelectronic Devices, 274, Dalian, China, October 2011. 6. Y. Ota, S. Iwamoto, N. Kumagai, Y. Arakawa, “Enhanced spontaneous two photon emission from a single quantum dot in a photonic crystal nanocavity”, The 3rd Japan-German Woqkshop on Quantum Dot Nanolasers, B-3, Tokyo, October 2011. 7. T. Nakaoka, Y. Tamura,T. Miyazawa,K. Watanabe,Y. Ota,S. Iwamoto,Y. Arakawa, “Wavelength tunable single-photon source with a side gate”, 2011 International Conference on Solid State Devices and Materials (SSDM 2011), J-6-4, Nagoya, , September 2011. 8. D. S. Cao, A. Tandaechanurat, S. Nakayama, N. Hauke, T. Zabel, S. Ishid, S. Iwamoto, J. J. Finley, G. Abstreiter, and Y. Arakawa, “Fabrication of High-Q Silicon-Based Three-Dimensional Photonic Crystal Nanocavity and its Lasing Oscillation with InAs Quantum-Dot Gain”, the 8th international confernce on Group IV photonic, London, UK, September 2011. 9. J. K. Kwoen, N. Kumagai, K. Watanabe, S. Ohkouchi, S. Iwamoto, and Y. Arakawa, “Suppression of surface nanocrystal nucleation in growth of GaAs nanowire on Si(111) by molecular beam epitaxy”, 2011 International Conference on Solid State Devices and Materials(SSDM 2011), P-13-6, Nagoya, Japan, September 2011. 10. S. Nakayama, S. Iwamoto, S. Kako, S. Ishida and Y. Arakawa, “Demonstration of silicon nanocavity LED with enhanced luminescence”, 2012 International conference on Solid State Devices and Materials (SSDM 2011) I-8-2, Nagoya, Japan, September 2011. 11. H. Takagi, Y. Ota, S. Iwamoto, Y. Arakawa, “Proposal of method for coherent single-photon generation from coupled quantum dot-nanocavity systems with single excitation laser”, International Nano-Optoelectronics Workshop, Poser Session3, P12, St. Petersburg, Russia and Wuerzburg, Germany, August 2011. 12. K. Tanabe, S. Iwamoto and Y. Arakawa, “Proposal and design of III-V/Si hybrid lasers with current injection across conductive wafer-bonded heterointerfaces”, 9th Pacific Rim Conference on Lasers and Electro-Optics (CLEO Pacific Rim), 2220-CT-1, Sydney, Australia, August 2011. 13. (Invited) Y. Arakawa, A. Tandaechanurat, and S. Iwamoto, “Quantum dot and high-Q 3D photonic crystal nanocavity coupled system for manipulating exciton-photon interaction” SPIE Optics + Photonics 2011, 8095-33, California, USA, August 2011. 14. A. Tandaechanurat, Y. Ota, N. Kumagai, S. Ishida, S. Iwamoto, and Y. Arakawa, “Observation of Purcell effect in a 3D photonic crystal nanocavity with a single quantum dot”, The 2011 IQEC/CLEO Pacific Rim Conference (IQEC/CLEO PR 2011), 967, Sydney, Australia, August 2011. 15. R. Ohta, Y. Ota, M. Nomura, N. Kumagai, S. Ishida, S. Iwamoto, and Y. Arakawa, “Fabrication of high-Q photonic crystal nanobeam cavities and demonstration of strong coupling with a single quantum dot”, 2011 Internal Nano Optoelectronics Workshop , July 2011. 16. S. Sergent, M. Arita, S. Kako, S. Iwamoto, and Y. Arakawa, “Design and fabrication of group-III nitride 1D photonic crystal nanobeam cavities” 9th International Conference on Nitride Semiconductors, C4.5, Glasgow, United Kingdom, July 2011. 17. Y. Ota, S. Iwamoto, N. Kumagai, Y. Arakawa, “Observation of spontaneous two photon emission from a single quantum dot in a high Q photonic crystal nanocavity”, The 15th conference on Modulated Semiconductor Structures (MSS15), Tu-3 6, Florida, USA , July 2011. 18. (Invited) Y. Arakawa, S. Iwamoto, M. Nomura, A. Tandaechanurat, Y. Ota, “Semiconductor Nanophotonics based on Cavity QED”, Utokyo-Shinhua Pekin, China , June 2011. 19. Y. Arakawa, Y. Ota, M. Nomura, A. Tandaechanurat, and S. Iwamoto(Invited), “Lasing oscillation in quantum dots and photonic crystal nanocavity coupled systems”, 38th International Symposium on Compound Semiconductors Berlin, Germany, May 2011. 20. K. Konishi, M. Kuwata-Gonokami, M. Nomura, N. Kumagai, S. Iwamoto and Y. Arakawa, “Circularly-Polarized Photoluminescence from Semiconductor Chiral Photonic Nanostructures”, NANOMETA2011, Tirol, Austria, January 2011. 1. Y. Ota, R. Ohta, N. Kumagai, S. Ohkouchi, S. Ishida, M. Shirane, Y. Igarashi, M. Nomura, S. Iwamoto, S. Yorozu and Y. Arakawa, “Vacuum Rabi Splitting with Single Quantum Dot in 1D and 2D Photonic Crystal Nanocavities”, University of Wurzburg – University of Tokyo Joint Workshop on Advances in Nanophotonics and Spintronics, Wurzburg, Germany, October 2010. 2. S. Iwamoto, S. Nakayama. S. Ishida, and Y. Arakawa, “Enhanced Light Emission from Silicon with Photonic Crystal Structures”, Joint Workshop on Advances in Nanophotonics and Spintronics, 2-3, Wurzburg, Germany, October 2010. 3. M. Nomura, Y. Ota, N. Kumagai, S. Iwamoto, and Y. Arakawa, “Lasing in a strongly coupled single quantum dot-nanocavity system and going one step further”, Joint Workshop on Advances in Nanophotonics and Spintronics, 2-3, Wurzburg, Germany, October 2010. 4. Y. Ota, R. Ohta, N. Kumagai, S. Ohkouchi, S. Ishida, M. Shirane, Y. Igarashi, M. Nomura, S. Iwamoto, S. Yorozu and Y. Arakawa, “Vacuum Rabi Splitting with Single Quantum Dot in 1D and 2D Photonic Crystal Nanocavities”, University of Wurzburg – University of Tokyo Joint Workshop on Advances in Nanophotonics and Spintronics, Wurzburg, Germany, October 2010. 5. S. Nakayama, S. Iwamoto, S. Ishida and Y. Arakawa, “Demonstration of a silicon photonic crystal slab LED with efficient electroluminescence”, Solid State Devices and Materials (SSDM) [Oral] D-4-3 (2010) Tokyo, Japan, September 2010. 6. S. Nakayama, S. Iwamoto, S. Ishida and Y. Arakawa, “Enhanced electroluminescence of silicon by utilizing photonic crystal structures”, Photonic and Electromagnetic Crystal Structures (PECS) [Poster] 163 (2010) Granada, Spain, September 2010. 7. A. Tandaechanurat, S. Ishida, D. Guimard, M. Nomura, S. Iwamoto, and Y. Arakawa, “Continuous-Wave Lasing in a Three-Dimensional Photonic Crystal Nanocavity with Quantum Dots”, Ninth International Conference on Photonic and Electromagnetic Crystal Structures (PECS IX), 154, Granada, Spain, September 2010. 8. K. Tanabe, M. Nishioka, D. Guimard, S. Iwamoto, Y. Arakawa, “Direct-bonded AlGaAs/Si hybrid dual-junction solar cells”, 5th World Conference on Photovoltaic Energy Conversion (WCPEC-5) ,1DV.3.100, Valencia,Spain, September 2010. 9. (Invited) M. Nomura, Y. Ota, N. Kumagai, S. Iwamoto, and Y. Arakawa, “Ultrasmall zero-cell photonic crystal nanocavity laser with quantum dot gain”, 9th International Conference on Photonic and Electromagnetic Crystal Structures, I-9, Granada, Spain, September 2010. 10. Y. Ota, N. Kumagai, S. Ohkouchi, S. Ishida, M. Shirane, Y. Igarashi, M. Nomura, S. Iwamoto, S. Yorozu and Y. Arakawa, “Strong Coupling between a Single Quantum Dot and a Photonic Crystal Heterostructure Cavity”, Photonic and Electromagnetic Crystal Structures Ⅸ, 126, Granada, Spain, September 2010. 11. R. Ohta, Y. Ota, M. Nomura, N. Kumagai,S. Ishida, S. Iwamoto, and Y. Arakawa, “Demonstration of the strong coupling in a quantum dot-nanobeam cavity system”, 9th International Conference on Photonic and Electromagnetic Crystal Structures, Granada, Spain, September 2010. 12. Y. Igarashi, M. Shirane, Y. Ota, M. Nomura, N. Kumagai, S. Ohkouchi, A. Kirihara, S. Ishida, S. Iwamoto, S. Yorozu, and Y. Arakawa, “Spin-relaxation Dynamics of Excited Trion States in an InAs Quantum Dot”, International Conference on Solid State Devices and Materials, F-3-2, the University of Tokyo (Hongo Campus), Tokyo, Japan, September 2010. 13. S. Ohkouchi, N. Kumagai, M. Shirane, Y. Igarashi, M. Nomura, Y. Ota, S. Yorozu, S. Iwamoto, and Y. Arakawa, “New Method to Isolate and Distribute Photoluminescence Emissions from InAs Quantum Dots over a Wide-Wavelength Range”, 16th International Conference on Molecular Beam Epitaxy (MBE 2010), P1-34, Berlin, Germany, August 2010. 14. S. Nakayama, S. Iwamoto, S. Ishida and Y. Arakawa, “Efficient silicon LED by utilizing photonic crystal structures”, 2010 International Nano-Optoelectronics Workshop (iNOW) [Poster] 4-P9,Beijing and Changchun, China, August 2010. 15. A. Tandaechanurat, S. Ishida, D. Guimard, D. Bordel, M. Nomura, S. Iwamoto, and Y. Arakawa, “Lasing characteristics of a quantum-dot-3D-photonic-crystal-nanocavity coupled system: Interaction between fully confined electrons and photons”, The 30th International Conference on the Physics of Semiconductors (ICPS 2010), MoA2-3, Seoul, Korea, July 2010. 16. A. Tandaechanurat, S. Ishida, D. Guimard, D. Bordel, M. Nomura, S. Iwamoto, and Y. Arakawa, “Achievement of a high quality factor (~ 38,500) in three-dimensional photonic crystal microcavity for ultimate microlasers”, The 37th International Symposium on Compound Semiconductors (ISCS 2010), ThC2-2, Kagawa, Japan, June2010. 17. K. Tanabe, D. Guimard, D. Bordel, S. Iwamoto, Y. Arakawa, “Fabrication of Electrically Pumped InAs/GaAs Quantum Dot Lasers on Si Substrates by Au-Mediated Wafer Bonding”, 37th International Symposium on Compound Semiconductors (ISCS),WeE3-1, Takamatsu, Japan, June 2010. 18. A. Tandaechanurat, S. Ishida, D. Guimard, D. Bordel, M. Nomura, S. Iwamoto, and Y. Arakawa, “Achievement of a high quality factor (~ 38,500) in three-dimensional photonic crystal microcavity for ultimate microlasers”, The 37th International Symposium on Compound Semiconductors (ISCS 2010), ThC2-2, Kagawa, Japan, June 2010. 19. K. Tanabe, D. Guimard, D. Bordel, S. Iwamoto, Y. Arakawa, “Fabrication of Electrically Pumped InAs/GaAs Quantum Dot Lasers on Si Substrates by Au-Mediated Wafer Bonding”, 37th International Symposium on Compound Semiconductors (ISCS),WeE3-1, Takamatsu, Japan, June 2010. 20. (Invited) M. Nomura, S. Iwamoto, and Y. Arakawa, “Single quantum dot laser using photonic crystal nanocavity”, 22nd International Conference on Indium Phosphide and Related Materials, FrA2-4, Kagawa, Japan, June 2010. 21. M. Nomura, K. Tanabe, S. Iwamoto, and Y. Arakawa, “Design of a High-Q H0 Photonic Crystal Nanocavity for cavity QED”, The 37th International Symposium on Compound Semiconductors, FrD3-6, Kagawa, Japan, June 2010. 22. A. Tandaechanurat, S. Ishida, D. Guimard, D. Bordel, M. Nomura, S. Iwamoto, and Y. Arakawa, “Lasing oscillation in a three-dimensional photonic crystal nanocavity with quantum dots”, The Conference on Lasers and Electro-Optics and The Quantum Electronics and Laser Science Conference (CLEO/QELS 2010), CWK2, California, USA , May 2010. 23. N. Kumagai, S. Ohkouchi, M. Shirane, Y. Igarashi, M. Nomura, Y. Ota, S. Yorozu, S. Iwamoto, and Y. Arakawa, “Effects of growth temperature of partial GaAs cap on InAs quantum dots in In-flush process for single dot spectroscopy”, The 37th International symposium on Compound Semiconductor (ISCS 2010), TuC3-3, Takamatsu, Japan, May 2010. 24. (Invited) Y. Arakawa, M. Nomura, A. Tandaechanurat, S. Iwamoto, S. Ishida, N. Kumagai, “Light-matter coupling in self-assembled quantum dots with 2D/3D photonic crystal nanocavity”, Quantum Dot 2010, Nottingham, UK, April 2010. 25. K. Tanabe, D. Guimard, D. Bordel, S. Iwamoto, Y. Arakawa, “Room temperature operation of 1.3 um InAs/GaAs quantum dot lasers wafer-bonded onto Si substrates”, 6th International Conference on Quantum Dots (QD 2010), P-28,Nottingham, UK, April 2010. 26. (Invited) M. Nomura, S. Iwamoto, and Y. Arakawa, “Single quantum dot photonic crystal nanocavity lasers”, Frontiers in Nanoscale Science and Technology FNST 2011, Th-1, RIKEN, Saitama , Japan, January 2011. 1. (Invited) S. Iwamoto, A. Tandaechanurat, and Y. Arakawa, 2-3D photonic crystal laser with high Q nanocavity”, The 9th Sweden- Japan Workshop on Quantum Nanophysics and Nanoelectronics (QNANO), Tokyo, Japan, December 2009. 2. (Invited) Y. Arakawa, S.Iwamoto, M.Nomura, “Single artificial atom laser with photonic crystal nanocavity”, The 9th Sweden- Japan Workshop on Quantum Nanophysics and Nanoelectronics (QNANO), Tokyo, Japan, December 2009. 3. Y. Ota, N. Kumagai, S. Ohkouchi, M. Shirane, M. Nomura, S. Ishida, S. Iwamoto, S. Yorozu and Y. Arakawa, “Semiconductor features in quantum-dot-based cavity quantum electrodynamics”, International Symposium on Quantum Nanophotincs and Nanoelectronics, FrE-5, Tokyo, Japan, November 2009. 4. A. Tandaechanurat, S. Ishida, D. Guimard, D. Bordel, M. Nomura, S. Iwamoto, and Y. Arakawa, “Achievement of a high quality factor (>10,000) in three-dimensional photonic crystal nanocavity by defect size control”, International Symposium on Quantum Nanophotonics and Nanoelectronics , FrE-7, 107, Tokyo, Japan, November 2009. 5. M. Nomura, N. Kumagai, S. Iwamoto, Y. Ota, and Y. Arakawa, “Lasing in single quantum dot-photonic crystal nanocavity coupled systems”, International Symposium on Quantum Nanophotonics and Nanoelectronics, FrF-5, Tokyo, Japan, November 2009. 6. M. Kitamura, S. Iwamoto, Y. Arakawa and L. Esaki, “Shelf Life of the Esaki Tunnel Diode – Tangible Shifts in the Peak Current from the Values Measured Half a Century Ago”, International Symposium on Quantum Nanophotonics and Nanoelectronics, ThD-9, pp69, Tokyo, Japan, November 2009. 7. K. Tanabe, M. Nomura, D. Guimard, S. Iwamoto, and Y. Arakawa, “Photonic Crystal Nanocavity Lasers with InAs/GaAs Quantum Dot Gain on Silicon Substrates: Room Temperature Continuous Wave Operation with Ultralow Threshold”, International Symposium on Quantum Nanophotonics and Nanoelectronics, Tokyo, Japan, November 2009. 8. N. Kumagai, S. Ohkouchi, S. Nakagawa, M. Shirane, Y. Igarashi,M. Nomura, Y. Ota, S. Iwamoto, and Y. Arakawa, “InAs quantum dots grown by low temperature capping for single dot spectroscopy”, International Symposium on Quantum Nanophotonics and Nanoelectronics, WeP-16, p32,Tokyo, Japan, November 2009. 9. Yuki Wakayama, Satoshi Iwamoto, Yasuhiko Arakawa, “Indirect optical modulation in mid-infrared quantum cascade lasers with coupled cavity structure”, International Symposium on Quantum Nanophotonics and Nanoelectronics、WeP-20, Tokyo, Japan, November 2009. 10. A. Tandaechanurat, S. Ishida, D. Guimard, D. Bordel, M. Nomura, S. Iwamoto, and Y. Arakawa, “Demonstration of quality factor over 10,000 in three-dimensional photonic crystal nanocavity by cavity size control”, International Conference on Solid State Devices and Materials (SSDM 2009), I-1-7, 32, Sendai, Japan, October 2009. 11. K. Tanabe, M. Nomura, D. Guimard, S. Iwamoto, and Y. Arakawa, “Photonic crystal nanocavity lasers with InAs quantum dots bonded onto silicon substrates”, 8th Pacific Rim Conference on Lasers and Electro-Optics (CLEO/Pacific Rim), MH2-4,Shanghai , China, September 2009. 12. (Invited) M. Nomura , S. Iwamoto, and Y. Arakawa, “Laser oscillation in single quantum dot-photonic crystal nanocavity coupled systems”, The 36th International Symposium on Compound Semiconductors, 8.1, University of California, Santa Barbara, September 2009. 13. (Invited) Y. Arakawa, S. Iwamoto, M. Nomura and Y. Ota, “Light-matter interaction in single quantum dot with photonic crystal nanocavity”, The 8th Pacific Rim Conference on Lasers and Electro-Optics, TuH2-2, Shanghai, China, September 2009. 14. Y. Ota, N. Kumagai, M. Shirane, M. Nomura, S. Ishida, S. Iwamoto, S. Yorozu and Y. Arakawa, “Observation of the Spectral Triplet in Strongly Coupled Quantum Dot-Nanocavity System”, 36th International Symposium on Compound Semiconductors, 19.6, Santa Barbara, U.S., September 2009. 15. (Invited) M. Nomura , S. Iwamoto, and Y. Arakawa, “Single Quantum Dot Laser with Photonic Crystal Nanocavity”, Integrated Photonics and Nanophotonics Research and Applications, IMF3, Hawaii, USA, July 2009. 16. M. Nomura, N. Kumagai, S. Iwamoto, Y. Ota, and Y. Arakawa, “Observation of unique photon statistics of single artificial atom laser”, 14th International Conference on Modulated Semiconductor Structure, M9c, Kobe, Japan, July 2009. 17. K. Tanabe, M. Nomura, D. Guimard, S. Iwamoto, and Y. Arakawa, “Fabrication and optical characterization of photonic crystal nanocavities with InAs quantum dots bonded on silicon substrates”, 14th International Conference on Modulated Semiconductor Structures (MSS-14), Tu-mP35, Kobe, July 2009. 18. S. Nakayama, S. Ishida, S. Iwamoto, D. Bordel, E. Augendre, L. Clavelier, ad Y. Arakawa, “Enhancement of photoluminescence from germanium by utilizing air-bridge type photonic crystal slab”, 14th International Conference on Modulated Semiconductor Structures (MSS-14), Tu-mP41, p155, Kobe, July 2009. 19. N. Kumagai, S. Ohkouchi, S. Nakagawa, M. Nomura, Y. Ota, M. Shirane, Y. Igarashi, S. Yorozu, S. Iwamoto, Y. Arakawa, “Suppression of indefinite peaks in InAs/ GaAs quantum dot spectrum by low temperature Indium-flush method”, 14th International Conference on Modulated Semiconductor Structures (MSS-14), Mo-mP22, pp34, Kobe, Japan, July 2009. 20. Y. Ota, M. Shirane, M. Nomura, N. Kumagai, S. Ishida, S. Iwamoto, S. Yorozu and Y. Arakawa, “Cavity quantum electrodynamics with quantum dot in H1-type photonic crystal nanocavity”, Frontiers in Nanoscale Science and Technology (FNST 2009), poster session A, Harvard University, Boston, USA , May 2009. 21. (Invited) M. Nomura , N. Kumagai, S. Iwamoto, Y. Ota, and Y. Arakawa, “Photonic Crystal Nanocavity Laser with Single Quantum Dot Gain”, Conference on Lasers and Electro-Optics (CLEO) CMP1, Baltimore, USA, May 2009 22. Y. Ota, M. Shirane, M. Nomura, N. Kumagai, S. Ishida, S. Iwamoto, S. Yorozu and Y. Arakawa, “Strong coupling between a single quantum dot and a H1 photonic crystal nanocavity”, Photonic and Electromagnetic Crystal Structures Ⅷ, B 136, Sydney, Australia, April 2009. 23. M. Nomura, S. Iwamoto, A. Tandaechanurat, Y. Ota, N. Kumagai, and Y. Arakawa, “Quantum Dot-based Photonic Bandedge Lasers”, Photonic and Electromagnetic Crystal Structures VIII, 115, Sydney, Australia, April 2009. 24. A. Tandaechanurat, S. Ishida, K. Aoki, D. Guimard, M. Nomura, S. Iwamoto, and Y. Arakawa, “Demonstration of high-Q (> 7,700) three-dimensional photonic crystal nanocavity”, The 8th International Photonic & Electromagnetic Crystal Structures Meeting (PECS VIII), 92, pp.120, Sydney, Australia, April 2009. 25. (Invited) Y. Arakawa, S. Iwamoto, K. Aoki, D. Guimard, M. Nomura, T. Aniwat, Y. Ohta, “Controlled photons emitted from quantum dots with 2D and 3D photonic crystal nanocavity”, 2nd Germany /Japan Workshop“Nanolaser based Optical Sensing” , Tokyo , February 2009. 1. M. Nomura, A. Tandaechanurat, S. Iwamoto, Y. Ota, N. Kumagai, and Y. Arakawa, “Direct Observation of Highly Efficient Coupling of Spontaneous Emission in Quantum Dot-Photonic Crystal Nanocavity Systems by Momentum Space Imaging”, IEEE Nanotechnology Materials and Devices Conference(NMDC 2008), MoC II-4, Kyoto, Japan, October 2008. 2. S. Nakayama, S. Ishida, S. Iwamoto, and Y. Arakawa, “Effect of cavity mode volume on enhancement of photoluminescece from silcion photonic nanocavities”, IEEE Nanotechnology Materials and Devices Conference 2008 (NMDC 2008), Kyoto, Japan , October 2008. 3. Y. Wakayama, S. Iwamoto, Y, Arakawa, “Microcylinder Quantum Cascade Lasers Coupled with Lateral Waveguides”, IEEE Nanotechnology Materials and Devices Conference(NMDC 2008), MoC I-4, pp.36, Kyoto, Japan, October 2008. 4. (Invited) S. Iwamoto and Y. Arakawa, “Advances in 2D/3D photonic Crystal Nanocavities with Quantum Dots for Nanophotonic Devices”, University of Tokyo and UC Santa Barbara Joint Workshop, p.19, Santa Barbara, USA, September 2008. 5. Y. Wakayama, S. Iwamoto, Y. Arakawa, “Microcylinder quantum cascade lasers coupled with lateral waveguides”, Bilateral Workshop on Nanoscale Systems, pp.60, Garching, Germany, July 2008. 6. S. Nakayama, S. Ishida, S. Iwamoto, and Y. Arakawa, “Enhanced photoluminescence from silicon photonic crystal nanocavities with different sizes of mode volume”, International Nano-Optoelectronic Workshop (iNOW 2008), 2, P19, Tokyo, Japan, August 2008. 7. Y. Wakayama, S. Iwamoto, Y. Arakawa, “Microcylinder Quantum Cascade Laser Coupled with an Optical Waveguide”, International Nano-Optoelectronic Workshop (iNOW 2008),Session 2, P.16, Tokyo, Japan, August 2008. 8. M. Nomura, Y. Ota, N. Kumagai, S. Iwamoto, and Y. Arakawa, “Achievement of ultra-low threshold excitation power (8 nW) in a nearly-single quantum dot nanocavity laser”, Conference on Lasers and Electro-Optics, CTuW3, San Jose, USA, May 2008. 9. Y. Ota, M. Nomura, N. Kumagai, K. Watanabe, S. Ishida, S. Iwamoto, M. Shirane, S. Kono, S. Yorozu and Y. Arakawa, “Efficient Excitation and Emission of Single Quantum Dot by Simultaneous Coupling to Two Different Photonic Crystal Nanocavity Modes”, CLEO/QELS 2008, CTuW2, San Jose, USA, May 2008. 10. Y. Wakayama, A. Tandaechanurat, S. Iwamoto, Y. Arakawa, “Design of Photonic Crystal Microcavities for Quantum Cascade Lasers”, Todai Week at Tsinghua University, Joint Workshop on Secure-Life Photonics, Beijing, China, May 2008. 11. M. Nomura, Y. Ota, N. Kumagai, S. Iwamoto, and Y. Arakawa, “Large vacuum Rabi splitting in single-assembled 1uantum dot-nanocavity system”, 8th International Conference on Physics of Light-Matter Coupling in Nanostructures (PLMCN 8), MoC-2, 12, Tokyo, Japan, April 2008. 12. S. Iwamoto, S. Nakayama, A. Gomyo, Y. Arakawa, “Control of Spontaneous Emission of Crystalline Silicon by Utilizing Photonic Crystal Nanocavities”, 8th International Conference on Physics of Light-Matter Coupling in Nanostructures (PLMCN 8), TuA-3, 20, Tokyo, Japan, April 2008. 13. S. Nakayama, S. Ishida, S. Iwamoto, and Y. Arakawa, “Strong photoluminescence from silicon with randomized photonic crystal pattern”, 8th International Conference on Physics of Light-Matter Coupling in Nanostructures (PLMCN 8), WeP-21, p.76, Tokyo, Japan, April 2008. 14. Y. Ota, M. Nomura, N. Kumagai, K. Watanabe, S. Ishida, S. Iwamoto, M. Shirane, S. Kono, S. Yorozu and Y. Arakawa, “Enhanced emission and absorption of single quantum dot by coupling to two different photonic crystal nanocavity modes”, 8th International Conference on Physics of Light-Matter Coupling in Nanostructures (PLMCN 8), WeP-3, Tokyo, Japan, April 2008. 15. H. Takagi, T. Nakaoka, K. Watanabe, N. Kumagai, S. Iwamoto, and Y. Arakawa, “Photocurrent spectroscopy of a single quantum dot at telecommunication wavelength of 1.3 mm”, Frontiers in Nanoscale Science and Technology (FNST 2008), poster40,University of Basel, Switzerland, January 2008. 16. (Invited) Y. Arakawa, S. Iwamoto, M. Nomura and K. Aoki, “Controlled Light Emission from Quantum Dots with 2D and 3D Photonic Crystal”, SPIE Photonics West, OPTO: Integrated Optoelectronic Devices, January 2008. 1. (Invited) S. Iwamoto, M. Nomura, and Y. Arakawa, “Photonic Crystal Nanocavity for Highly Efficient Lasers and Light Sources”, The 20th Annual Meeting of the IEEE Lasers and Electro-Optics Society (LEOS 2007), MB1, 13-14, Florida, USA, October 2007. 2. M. Nomura, S. Iwamoto, and Y. Arakawa, “Prerequisites of Nanocavities for Single Artificial Atom Laser”, The 34th International Symposium on Compound Semiconductors, ThC-P5, Kyoto, Japan, October 3. K. Aoki, D. Guimard, M. Nishioka, M. Nomura, S. Iwamoto, and Y. Arakawa, “Three-dimensional photonic crystal nanocavity with the highest Q-factor”, The 34th International Symposium on Compound Semiconductors, ThB I-1, Kyoto, Japan, October 2007. 4. (Invited) M. Nomura, S. Iwamoto, and Y. Arakawa, “Photonic crystal nanocavity lasers with quantum dots”, SPIE, Optics East 2007, 6779-15, Boston, USA, September 2007. 5. S. Iwamoto, A. Gomyo, and Y. Arakawa, “Resonant photoluminescence from crystalline Si with photonic crystal nanocavity structures”, 4th International Conference on Group IV Photonics, FC-3, Tokyo, Japan, September 2007. 6. A. Tandaechanurat, S. Iwamoto, M. Nomura, N. Kumagai and Y. Arakawa, “Demonstration of High-Q Photonic Crystal H1-defect Nanocavities after Closing of Photonic Bandgap”, The 7th Pacific Rim Conference On Lasers and Electro-Optics (CLEO/Pacific Rim2007), TuF4-7, Seoul, Korea, August 2007. 7. Y. Ota, M. Nomura, N. Kumagai, K. Watanabe, S. Ishida, S. Iwamoto, M. Shirane, S. Kono, S. Yorozu, and Y. Arakawa, “Fabrication and characterization of photonic crystal nanocavity with degenerated cavity modes for generating entangled photon pairs using quantum dots”, The 7th Pacific Rim Conference On Lasers and Electro-Optics (CLEO/Pacific Rim2007), TuF4-6, Seoul, Korea, August 2007. 8. Y. Ota, M. Nomura, N. Kumagai, K. Watanabe, S. Ishida, S. Iwamoto, M .Shirane, S. Kono, S. Yorozu, and Y. Arakawa, “Fabrication of photonic crystal nanocavity for generating entangled photon pairs using quantum dots”, International Nano-Optelectronic Workshop 2007, Poster session3 P76, Beijing, China, July 2007. 9. (Invited) K. Aoki, D. Guimard, M. Nishioka, T. Katsuyama, S. Iwamoto, and Y. Arakawa, “Arrayed 3D photonic crystals for optical communication wavelengths”, 9th International Conference on Transparent Optical Networks, We.D1.4, July 1-5, Rome, Italy, July 2007. 10. A. Tandaechanurat, S. Iwamoto, M. Nomura, N. Kumagai and Y. Arakawa, “ Demonstration of High-Q Photonic Crystal H1-defect Nanocavities after Closing of Photonic Bandgap”, International Nano-Optoelectronic Workshop, P55, Beijing, China , July 2007. 11. Y. Wakayama, A. Tandaechanurat, S. Iwamoto, Y. Arakawa, “Design of Surface-Emitting Photonic Crystal Microcavitis for Quantum Cascade Lasers”, International Nano-Optoelectronic Workshop, P21, Beijing, Chaina, July 2007. 12. M. Nomura, S. Iwamoto, N. Kumagai, and Y. Arakawa, “Ultralow threshold photonic crystal nanocavity laser”, 13th International Conference on Modulated Semiconductor Structures, p. 133, Genova, Italy, July 2007. 13. H. Takagi, T. Nakaoka, K. Watanabe, N. Kumagai, S. Iwamoto, and Y. Arakawa, “Very narrow fine-structure splittings in self-assembled quantum dots investigated by photocurrent spectroscopy”, 13th International Conference on Modulated Semiconductor Structures, WE-PM68, p363, Genova, Italy, July 2007. 14. S. Iwamoto, A. Gomyo, Y. Arakawa, “Enhanced photoluminescence from silicon photonic crystal nanocavities”, International Symposium on Photonic & Electromagnetic Crystal Structures (PECS) VII, A-68, Monterey, CA, USA April 2007. 15. M. Nomura, S. Iwamoto, N. Kumagai, and Y. Arakawa, “Temporal coherence of a photonic crystal microcavity laser”, Photonic and Electromagnetic Crystal Structures VII, B-25, Monterey, California, USA April 2007. 16. M. Arita, S. Kako, S. Iwamoto, S. Ishida, and Y. Arakawa, “A Novel Fabrication Method for III-Nitride Air-Bridge Photonic Crystals,” International Symposium on Photonic and Electromagnetic Crystal Structures (PECS07), Monterey, CA, USA, April 2007. 17. (Invited) K. Aoki, S. Iwamoto, and Y. Arakawa, “Microassembly of rod-connected diamond structures at optical wavelengths,” Progress in Electromagnetics Research Symposium (PIERS2007), 3P1, Beijing, China, March 2007. 1. N. Li, M. Arita, S. Kako, M. Kitamura, S. Iwamoto, and Y. Arakawa, “Fabrication and optical characterization of III-nitride air-bridge photonic crystal with GaN quatum dots”, The 6th International Symposium on Blue Lasers and Light Emitting Diodes (ISBLLED 2006), E1.02, p.135, Montpellier, France, May 2006. 2. (Invited) K. Aoki, S. Iwamoto, and Y. Arakawa, “Fabrication of rod-connected diamond structures at optical wavelengths by micromanipulation”, ICTON 2006, Tu.C2.2, 106, Nottingham, UK, June, 2006. 3. M. Nomura, S. Iwamoto, T. Yang, S. Ishida, and Y. Arakawa, “Selective excitation of a single quantum dot in a photonic crystal nanocavity by using cavity resonance”, 28th International Conference on the Physics of Semiconductors, WeA1e6, Vienna, Austria, July 2006. 4. M. Nomura, S. Iwamoto, N. Kumagai, K. Watanabe, S. Ishida, and Y. Arakawa, “Photonic crystal nanocavity continuous-wave laser operation at room temperature”, International Conference on Solid State Devices and Materials, B-1-3, Yokohama, September 2006. 5. M. Arita, S. Ishida, S. Kako, S. Iwamoto, and Y. Arakawa, “AlN Photonic crystal nanocavity demonstrating high quality factor (> 2400)”, International Workshop on Nitride Semiconductors 2006 (IWN2006) Tu-LN-3, TuP-LN-E3, Foucised Session -Physics and Engineering of Light Extraction-, p. 103, Kyoto, Japan, October 2006. 1. (Invited) Y. Arakawa and S. Iwamoto, “Photonic Crystal with Quantum dots and/or MEMS”, 2004 206th Meeting on The Electrochemical Society,Honolulu, Hawaii, USA, October 2004. 2. S. Iwamoto, M. Tokushima, A. Gomyo, H. Yamada, A. Higo, H. Toshiyoshi, H. Fujita, and Y. Arakawa, “Optical Switching in Photonic Crystal Waveguide Controlled by Micro Electro Mechanical System,” Pacific Rim Conference on Lasers and Electro-Optics 2005, CTuE3-4, Tokyo, Japan, July 2005. 3. M. Kitamura, S. Iwamoto, and Y. Arakawa, “Enhanced light emission of an organic semiconductor based two-dimensional photonic crystal with a nanocavity,” Pacific Rim Conference on Lasers and Electro-Optics 2005, CThE1-3, Tokyo, Japan, July 2005. 4. (Invited) S. Iwamoto and Y. Arakawa, “Advances in Photonic Crystals with MEMS and Quantum Dots,” 14th International Laser Physics Workshop, 1-4-5, Kyoto, Japan, July 2005. 5. S. Iwamoto, M. Tokushima, A. Gomyo, H. Yamada, S. Ishida, H. Higo, H. Toshiyoshi, and Y. Arakawa, “Fabrication of MEMS-integrated photonic crystal waveguide and demonstration of its switching operation,” PECS-VI: International Symposium on Photonic and Electromagnetic Crystal Structures, Aghia Pelaghia, Crete, Greece, June 2005. 6. M. Kitamura, S. Iwamoto, and Y. Arakawa, “Organic Semiconductor Based Two-Dimensional Photonic Crystal with a Single Defect,” 2005 Conference on Lasers and Electro-Optics Quantum Electronics (CLEO2005), CMEE3, Baltimore, Maryland, USA, May 2005. 7. S. Balasubramanian, P. Yu, S. Iwamoto, and K. Kuroda, “Adaptive optical coherence-domain reflectometry using InGaAs quantum wells,” Conference on Lasers and Electro-Optics (CLEO2005), pp.1617- 1619, Baltimore, USA, May 2005. 8. A. Higo, S. Iwamoto, M. Ishida, Y. Arakawa, H. Fujita, A. Gomyo, M. Tokushima, H. Yamada, and H. Toshiyoshi, “Design and Fabrication of MEMS Optical Modulators Integrated with PhC Waveguides,” IEEE/LEOS Int. Conf. on Optical MEMS and Their Applications, oral G4, Oulu, Finland, August 2005. 9. M. Gel, S. Ishida, S. Iwamoto, Y. Arakawa, H. Fujita, “Nano Contact Formation in a Simple MEMS Device for The conductance Measurements of Nano Objects,” Conference on Micro Electro Mechanical Systems, TPb2, pp.431-434, Miami Beach, Florida USA , January 2005. 10. S. Iwamoto, J. Tatebayashi, T. Fukuda, T. Nakaoka. S. Ishida, and Y. Arakawa, “1.55-micron light emission from InAs QDs embedded in a high-Q photonic crystal microcavity, “Photonic West 2005, 5733-63, San Jose, USA, January 2005. 1. (Invited) Y. Arakawa and S. Iwamoto, “Photonic Crystal with Quantum dots and/or MEMS”, 2004 206th Meeting on The Electrochemical Society,Honolulu, Hawaii, USA, October 2004. 2. M. Kitamura, S. Iwamoto, and Y. Arakawa, “Enhanced Luminance Efficiency from Organic Light-Emitting Diodes with 2D Photonic Crystal,” Int. Conf. Solid State Devices and Materials, A-4-1, p160-161, Tokyo, Japan , September 2004. 3. S. Iwamoto, J. Tatebayashi, T. Fukuda, T. Nakaoka, S. Ishida, and Y. Arakawa, “Observation of light emission at ~1.5 um from InAs quantum dots in photonic crystal microcavity,” Int. Conf. Solid State Devices and Materials, H-7-4, p. 918 , Tokyo, Japan, September 2004. 4. S. Kako, K. Hoshino, S. Iwamoto, S. Ishida, and Y. Arakawa , “Single dot spectroscopy of GaN/AlN self-assembled quantum dots,” 27th International Conference on the Physics of Semiconductors, F3.004,p.50, Flagstaff, USA, July 2004. 5. A. Higo, S. Iwamoto, S. Ishida, H. Fujita, Y. Arakawa, H. Toshiyoshi, H. Yamada, A. Gomyo and M. Tokushima, “Micromechanical Structures for Photonic Crystal Waveguide Switches,” Asisa-Pacific Conf. on Transducers and micro-Nano Technology (APCOT 2004), Sapporo, Japan, July 2004. 6. M. Kudo, T. Nakaoka, S. Iwamoto, and Y. Arakawa, “Self-Assembled InAsSb Quantum Dots Grown on GaAs Substrates by Molecular-Beam Epitaxy,” 16th International conference on InP and related materials,Kagoshima, Japan, May 2004. 7. T. Ide, T. Baba, J. Tatebayashi, S. Iwamoto, T. Nakaoka and Y. Arakawa, “InAs quantum dot microdisk laser at room temperature,” Conf. Laser and Electro-Optics, no. CThB3, San Francisco, USA, May 2004. 8. (Invited) Y. Arakawa and S. Iwamoto, “Photonic Crystals with Quantum Dots and/or Micro Electro-Mechanical Systems for Advanced Nanophotonic Devices,” International Symposium on Photonic and Electromagnetic Crystal Structures V (PECS-V), Th-K3, p.177, Kyoto ,Japan March 2004. 9. S. Iwamoto, H. Yamada, A. Gomyo, M.Shirane, and Y. Arakawa, ”Control of light propagation and localization in a photonic crystal slab by using a micromechanical actuator,” Photonic West 2004, 5360-27, San Jose , USA, January 2004. 1. S. Iwamoto, H. Yamada, A. Gomyo, M. Shirane,and Y. Arakawa , “Photonic Crystal Modulators Controlled by Micro Electro-Mechanical Systems -Proposal and Experiments-,” The 5th Pacific Rim Conference on Lasers and Electro-Optics, Taipei, Taiwan,December 2003. 2. S. Iwamoto and Y. Arakawa, “Photonic Crystals with Advanced Micro- and Nano-Structures,” The 5th International Workshop on Future Information Processing Technology, 2-5, Miyazaki, Japan, November 3. S. Iwamoto, H. Yamada, A. Gomyo, M. Shirane, and Y. Arakawa, “Photonic Crystal Slab Waveguide Controlled by a Micro-Mechanical Actuator,” 16th Annual Meeting of the IEEE lasers and Electro-Optics Society, WE3, Tuscon, Arizona, USA, October 2003. 4. M. Ito, S. Iwamoto, and Y. Arakawa, “High-Q Resonant Modes in a Quasi-Three Dimensional Photonic Crystla Microcavity,” 16th Annual Meeting of the IEEE lasers and Electro-Optics Society, TuM6, Tuscon, Arizona, USA , October 2003. 5. M. Ito, S. Iwamoto, and Y. Arakawa, “Enhancement of Cavity-Q in a Quasi-Three Dimensional Photonic Crystal,” International Conference on Solid State Devices and Materials 2003, F-7-4, Tokyo, Japan, October 2003. 6. S. Iwamoto, J. Tatebayashi, S. Kako, S. Ishida, and Y. Arakawa, “Numerical Analysis of DFB Lasing Action in Photonic Crystals with Quantum Dots,” The 11th International Conference on Modulated Semiconductor Structures, C-3, pp.170-171, Nara, Japan, July 2003. 7. M. Kudo, T. Mishima, S. Iwamoto, T. Nakaoka, and Y. Arakawa, “Long Wavelength Luminescence from GaSb Quantum Dots Grown on GaAs Substrates,” The 11th International Cnference on Modulated Semiconductor Structures, Nara, Japan, July 2003.
{"url":"http://www.iwamoto.iis.u-tokyo.ac.jp/publication/408-2/","timestamp":"2024-11-08T15:41:09Z","content_type":"text/html","content_length":"156862","record_id":"<urn:uuid:2fd9c6e4-5876-4ba9-a065-fb5118e18f16>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00691.warc.gz"}
resolution of the phase information from the fft 1 year ago ●11 replies● latest reply 1 year ago 902 views I understand that the frequency resolution for an N point fft sampled at Fs is Fs/N. However, I'm confused regarding the phase information. For instance, if I sample a signal at 1MHz for 256 samples, the frequency resolution will be ~3.9kHz per bin. But say I want to use the FFT to determine phase delay of a given signal, and I wanted a 1 deg resolution on the phase. I'm struggling to understand the relationship between the phase information obtained from the FFT and the FFT parameters. Thank you. [ - ] Reply by ●August 2, 2023 Think of phase resolution and amplitude resolution as being related. Neither has anything to do with the sample rate. Both have to do with the number of effective bits, which has to do with both the number of effective bits in any ADC(s) present as well as any processing gain that has been realized by the time you wish to measure amplitude or phase. Phase is just an angle calculated from the I and Q components of an analytic signal, so you can determine the effective phase resolution from the number of bits. e.g., if you only have one bit each of I and Q, you have 90 degrees of phase resolution. As you increase the number of bits in each, the phase resolution will increase. I hope that helps a little bit. [ - ] Reply by ●August 2, 2023 Thank you for your resonse. Perhaps I didn't ask the right question, or I don't understand your response fully- so please bear with me. I want determine the phase shift of a sinusoidal current at a known frequency measured as the voltage drop across a resistor. I figured I could take the FFT of the samples and obtain the phase information from it--but then I couldn't understand how the phase info would be affected by sample rate, etc..So, I don't have I and Q samples, I have voltage samples of a sinusoid. Is your response still applicable in this scenario? --if so, could you please elaborate some more? For instance, I don't get how phase information is not related to time (ie, sample rate in some way), when phase information is by its very nature the postion of the signal at a point in relative to the signal's cycle or period. Intuitively, if I sampled faster, I would expect to be able to resolve the phase difference between two signals of the same frequency --in the time domain at least. For instance, I think cross correlation would work well to determine this phase difference (between a refernce signal an the phase-sifted signall), and the time step would be related to the sample rate, correct? Why would this then not translate to the frequency damain as well when in fact, I think correlation (and convolution) can be obtained via a Fourier Xform....I'm confused! [ - ] Reply by ●August 2, 2023 If you were intending to take an FFT, the phase information in any particular FFT output bin, for example one with a lot of energy for an input sinusoid, will be the angle between the I and Q samples in that bin. FFTs are generally complex-valued at the output, even if the input was real-valued. The angle represented in the output will be the difference in phase between the input signal and the FFT reference functions, essentially the FFT window if it isn't zero-padded. As you mention, the FFT essentially does a cross-correlation between the input signal and the reference (basis) function. FFT sinusoidal basis functions start at zero phase at the beginning of the FFT window, so an input sinusoid matching a bin frequency will indicate phase difference with the basis function by the phase of the FFT output bin at that frequency. [ - ] Reply by ●August 2, 2023 You are not wrong. If I understand your question you have a signal and want to know at a given time what is that signal's phase, correct? In most books this is effectively related "symbol time" which is to say the points in time where you want to note what the phase is of the signal. If you wanted 1 degree of phase resolution on every cycle for a given wave, your sampling would have to be 360x the frequency after converting the waveform to baseband. So for a 1MHz signal, 360MHz of sampling after you mixed down to DC. Whether or not you will be able to resolve the phase difference accurately does mean you need enough bits of signal to represent the angle. This is used in QPSK to encode 2 bits, so each bit pair represents a 90 degree phase shifts offset by 45 degrees. QPSK however will send several cycles at the phase of interest. (homework problem on why they do this :-)) You could similarly encode 3 bits with 45 degree phase shifts, etc. But the higher values will require higher oversampling to resolve the phase data. I came up with what I think of as a fairly novel way to finesse this based on some ideas that Marty Graham shared with me years ago but haven't yet done any field trials to see if I can actually demodulate it over the air. [ - ] Reply by ●August 2, 2023 If you are after sinusoid phase from FFT then you got correct replies from Slartibartfast. This phase can show delay within one cycle and as such I believe the number of samples per cycle will have some impact on phase accuracy. If you are after absolute delay then it is something else. The sinusoid will wrap up after 1 cycle and phase will not show absolute delay. You need to do time domain correlation with a reference. For this purpose a sinusoid is not good for delay measurement as its correlation has slow gradual peak. [ - ] Reply by ●August 2, 2023 You are in essence asking a nonsensical question. I hope the following explanation helps. You didn't specify a real or a complex signal, so I will illustrate with the real signal, which is the more complicated case. The concepts are the same. Model your signal with $$ x[n] = M \cos( \alpha n + \phi ) $$ Using the angle addition formulas (with complex you get just a product). $$ x[n] = M \left[ \cos( \alpha n ) \cos( \phi ) - \sin( \alpha n ) \sin( \phi ) \right] $$ Multiply out. $$ x[n] = \left[ M \cos( \phi ) \right] \cos( \alpha n ) - \left[ M \sin( \phi ) \right] \sin( \alpha n ) $$ Recognize the form: $$ x[n] = A \cos( \alpha n ) + B \sin( \alpha n ) $$ If we can solve for A and B, then we find M and phi from those. $$ M = \sqrt{ A^2 + B^2 } $$ $$ \phi = \text{atan2} (-B,A) $$ To solve this, we can recognize the sine and cosine functions as basis vectors, and solve using Linear Algebra. Call the x[n] vector plain x, the sine S, and the cosine C. $$ x = A C + B S $$ Dot these with the basis vectors and you get. $$ x \cdot C = A \left( C \cdot C \right) + B \left( S \cdot C \right) $$ $$ x \cdot S = A \left( C \cdot S \right) + B \left( S \cdot S \right) $$ This gives you two equations, two unknowns. The nice thing about the trig functions is that if the interval is a whole number of cycles, the mixed dot product term will be zero and you get: $$ x \cdot C = A \left( C \cdot C \right) $$ $$ x \cdot S = B \left( S \cdot S \right) $$ When you align your DFT frame on a whole number of cycles, this is precisely the DFT calculation for a given bin. The first equation is the real part, and the second the imaginary. From these you can calculate A and B directly. This is why everybody says the angle of the bin value represents the phase, and that is how you read the value. The more difficult calculation is when your signal is not a whole number of cycles within your DFT frame. That is what my blog articles are mostly about. I recommend you have a look at my overview article and read the related articles to your situation. "Overview of my Articles" Now, addressing the resolution issue. Calculating the frequency by the nearest bin is known as "Coarse Resolution". With a high SNR signal, the frequency can be determined to a small fraction of the bin width. This is called "Fine Resolution". For very small differences, each increment of error in your frequency represents a \(\pi \left(1-1/N \right) \) change in your calculated phase value. So, phase resolution is roughly one third as good as frequency resolution in the fine domain. Hope this helps. P.S. In a bin calculation, since the sine function is negated in the DFT definition you have: $$ \phi = \text{atan2} (Im,Re) $$ [ - ] Reply by ●August 2, 2023 To elaborate a little bit on what it seems you are trying to accomplish. For a pure complex tone, a shift in the phase causes tandem rotations of all the bins. If it is a whole integer frequency, all the other bins will be zero, so rotating them is moot. However, if there is "leakage", it will rotate in tandem as well. For a real valued tone this is only approximately true (as in not exactly, it will still be quite accurate) near the peak bin. The further from DC or Nyquist you frequency is, the better the approximation. I am assuming from the one degree resolution that you have a phase difference that is smaller than a cycle, otherwise what the others say is true. Suppose you have a stereo pair of microphones listening on a steady tone source and you are trying to calculate direction of arrival using the phase difference. In that case you don't even have to worry about the frequency or whether it is close to a bin or not. All the bins in the peak bin area will twist almost the exact same amount. The difference in the angle between the left channel bin and the corresponding right channel bin will give you the phase difference of the two arriving captures. Due to the exponential nature of the unit circle, you can divide one bin value by the other and the angle of the quotient will be your phase difference. If it is quite small, the atan2 can be replace with atan and you are probably good with the first few terms of the atan series. If you don't get what I am saying, I recommend my "A Recipe for a Basic Trigonometry Table" , particularly the source code in Appendix I. [ - ] Reply by ●August 2, 2023 Thank you for taking the time to explain all this. I still don't get it, but I will...it'll just take me some time. I'm not sure what is nonsensical about my question -- I wanted to understand how sampling rate affected the phase resolution of the FFT. Since I'm dealing with voltages and currents, I'm talking about real signals. Say I generate a voltage sinsoid at 1MHz, and I measure the voltage across a resistor to derive the current through the circuit, and I am interested in determining the phase shift between these two signals (I know the phase of voltage signal since I'm generating it and it is 0). My intuition was that the faster the sample rate, the better resolution I would have in determining the phase shift between these two signal --ie, if I sampled this "current" waveform at 360MHz, I'd get a 1 deg resolution (I think). So, I wanted to understand how this would "show" up in the FFT or what I would have to do (in terms of sampling, Fs and number of bits, and number of points of the FFT) to improve the phase resolution in the FFT. I didn't understand the first response I got from @Slartibartfast (and don't quite yet understand)_ that said that the sample rate doesn't factor in the phase info from the FFT, when it seemed to me like it did in the time domain. So, at the 1MHz bin, for example, I will have a phase,\(\phi\), from the FFT -- but how accurate is that phase and could it have been made better by sampling faster. That was the question I was trying to ask. I really do appreciate you guys taking the time to respond in such detail --even if I don't quite yet understand. I'm still digesting the responses, and appreciate the references you pointed me to dig deeper into this stuff. Thank you. [ - ] Reply by ●August 2, 2023 I'm sorry, by "meaningless" I meant "it doesn't matter", not "stupid". If you select a frame size that is a whole number of cycles in the frame, say seven. No matter what N is, as long as it is greater than 14, the bin values for the DFT will all be zero except at bin 7 and -7, aka N-7. For a noiseless signal, as you increase N by increasing the sampling frequency, this value will not change when you normalize it by 1/N, so it will not increase your resolution. In an actual (avoiding the word 'real') signal, increasing the number of samples will mitigate the effect of noise. Also, if it is not a pure tone, but has higher harmonics, increasing the sampling rate will keep higher harmonics from ending up in the same bin due to "aliasing". For tones with frequencies in between bins, the effect of increasing N will alter the bin values. That's were the \( (1-1/N) \) factor comes in. It is easier to see in the bin value formulas for a complex tone, than in a real tone. See "DFT Bin Value Formulas for Pure Complex Tones". (16), the whole number case, is devoid of N. In constrast, (24) shows that the angle of bin value will rotate according to \( \phi \) the same as (16) for a fixed frequency, but will also twist ever so slightly when N changes. The values of \( Z_k \) are already normalized by \( 1/N \) in the article. A real tone can be considered the sum of two complex tones because $$ \cos( \theta ) = \frac{1}{2} e^{i \theta} + \frac{1}{2} e^{-i \theta} $$ [ - ] Reply by ●August 2, 2023 Let us make it simple: -phase from FFT output is based on Re & Im amplitude so depends on bit resolution -angle resolution depends on how many samples per cycle are at input so depends on sampling rate and location of frequency bin. -you can be as accurate within one cycle only due to wrap up [ - ] Reply by ●August 2, 2023 Here a couple of answers of mine from the DSP Stack Exchange: How the sampling rate interacts with the other parameters is a frequent topic. Here is a search on my answers there:
{"url":"https://dsprelated.com/thread/16619/resolution-of-the-phase-information-from-the-fft","timestamp":"2024-11-12T17:02:08Z","content_type":"text/html","content_length":"60091","record_id":"<urn:uuid:c41c8eb2-51c2-4dd5-8faf-5bd3da580715>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00402.warc.gz"}
wu :: forums - Simple Combinatorics wu :: forums putnam exam (pure math) (Moderators: Icarus, towr, SMQ, Grimbal, william wu, Eigenray) « Previous topic | Next topic » Pages: 1 Reply Notify of replies Send Topic Print Author Topic: Simple Combinatorics (Read 692 times) anonymous Simple Combinatorics Guest « on: Jun 4^th, 2003, 3:54pm » Quote Modify Remove Let n be an integer greater than 1. In a school there are n^2-n+2 clubs and each club has exactly n members. Each pair of clubs has exactly one member in common. Count the total number of distinct members in all the clubs. Icarus Re: Simple Combinatorics wu::riddles Moderator « Reply #1 on: Jun 4^th, 2003, 6:13pm » Quote Modify If it's "given" that there is a unique answer, then it is easy to obtain: If there is one person who is a member of all the clubs, and all the others are members of a single club only, this satisfies the problem conditions. So, there must be (n^2-n+2)(n-1) + 1 = n^3 - 2n^2 + 3n -1 distinct members. Boldly going where even angels Of course the hard part is showing this number is unique. fear to tread. "Pi goes on and on and on ... And e is just as cursed. Gender: I wonder: Which is larger Posts: 4863 When their digits are reversed? " - Anonymous anonymous Re: Simple Combinatorics Guest « Reply #2 on: Jun 5^th, 2003, 5:03am » Quote Modify Remove Actually you can prove that there is an unique answer. Try to prove that (n-1)(n^2-n+2)+1 is the only solution. Icarus Re: Simple Combinatorics wu::riddles Moderator « Reply #3 on: Jun 5^th, 2003, 6:59pm » Quote Modify Now you shouldn't be changing the problem after the fact! You only asked for the number and I gave it. Everything else is just details! (Note that I didn't doubt that it was unique - instead I made use of the fact without proof.) Boldly going where even angels fear to tread. "Pi goes on and on and on ... Gender: And e is just as cursed. Posts: 4863 I wonder: Which is larger When their digits are reversed? " - Anonymous anonymous Re: Simple Combinatorics Guest « Reply #4 on: Jun 5^th, 2003, 7:51pm » Quote Modify Remove The original problem ask for this: "show that there is one student belongs to all of the clubs". I think this give away too much information, reducing the difficulty of the problem. Therefore I change the statement into a more open-ended statment "count the total number of distinct members in all the clubs". This way I can hide the important fact that "there is a student that belongs to all of the clubs", the solver is required to discover this own his/her own, thus increasing the difficulty of the problem. Also, when a problem ask to "evaluate", "find" or "count" a number, the problem not only ask for that number, the problem also ask for a complete proof that there are no other answer. For example Putnam 1995 B4 ask for the evaluation of a continued fraction. If the solver only give the answer, then only partial credits is awarded. But if the solver give the complete proof, then full credits is awarded. ecoist Re: Simple Combinatorics Senior « Reply #5 on: Mar 8^th, 2007, 10:56am » Quote Modify Here's a proof that Icarus's solution is the only solution. Let C be any one of the clubs. Then each of the remaining n^2-n+1 clubs have exactly one member in common with C. Hence the average number of clubs having the same member in common with C is (n^2-n+1)/n, which equals n-1+1/n. Thus, some member of C, say a, belongs to at least n other clubs. Assume, by way of contradiction, that a is not a member of all clubs. Then there is a club C* which does not have a as a member. Let C[i], for i=1,...,n be n clubs other than C that have a as a member. Then the n+1 sets, C\{a} and C'[i]=C[i]\{a}, are pairwise disjoint. Since C* must have a member in common with each of these n+1 sets, it follows that C* must contain at least n+1 members, contradiction. Hence a is a member of all clubs. 405 « Last Edit: Mar 8^th, 2007, 2:22pm by ecoist » Icarus Re: Simple Combinatorics wu::riddles Moderator « Reply #6 on: Mar 8^th, 2007, 4:37pm » Quote Modify Digging in the archives, I see. I had totally forgotten about this problem, and the fact that I had left it unfinished. Boldly going where even angels fear to tread. "Pi goes on and on and on ... Gender: And e is just as cursed. Posts: 4863 I wonder: Which is larger When their digits are reversed? " - Anonymous ecoist Re: Simple Combinatorics Senior « Reply #7 on: Mar 8^th, 2007, 8:11pm » Quote Modify I wondered why a complete solution for this problem went unposted for so long! I had thought that those problems I had posted that still remain unsolved were too trivial to entice someone to post a solution. Thanks to you, Icarus, I realize now that some problems are simply forgotten. Posts: 405 Pages: 1 Reply Notify of replies Send Topic Print « Previous topic | Next topic »
{"url":"https://www.ocf.berkeley.edu/~wwu/cgi-bin/yabb/YaBB.cgi?board=riddles_putnam;action=display;num=1054767247","timestamp":"2024-11-05T03:01:57Z","content_type":"text/html","content_length":"48372","record_id":"<urn:uuid:a20c8fbf-9bdb-4658-9306-e1a72f49ee00>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00737.warc.gz"}
Physics - Online Tutor, Practice Problems & Exam Prep Hey, guys. So we talked about resistances, resistors in circuits, and we've also talked about the relationship between the three variables in Ohm's law. So we're going to talk about energy in circuits and how quickly that energy changes. Let's go ahead and check it out. So remember when we talked about resistance, we said that resistance was an internal friction that occurs in conductors. As these charges move through this circuit, they encounter some resistance and they lose some energy due to, like, this friction. So the energy is lost because you have these charges moving through a potential difference, Δv. We have the equation for the amount of energy that charges lose through a potential difference. It's Δu, and it's equal to q times Δv. But a lot of times, we're more interested in how quickly this power - sorry, how quickly this energy gets changed. And remember, that's the definition of power. Power is defined as how quickly energy changes. So in a circuit, it's how quickly it's gained or lost. And we have an equation for that as well. That's just Δu divided by Δt, which is the change in energy versus how long it took to change. Right? So we can actually use this equation to come up with the power output in any circuit element. So when we're talking about circuits, this equation, p = vi, is going to be valid for anything. Batteries, resistors, we'll see other kinds of circuit elements later on. So how do we get to this equation, p = vi? All we have to do is take this Δu over Δt, and just substitute our equation for the charge lost through a resistor. So we have q times Δv, and once we substitute that, divided by Δt. And if you take a look here, this q divided by Δt is actually the definition of what current is. Remember, current is an amount of charge that's flowing through in a certain amount of time. So this really is just I. Remember that this is kind of like in the numerator here. Right? So this just becomes Δv times I, and then we just drop the Δ. Right? Because that's the voltage. Cool. So that's where that equation comes from. Now, when we're talking about specifically a resistor, we can actually use Ohm's law, which remember is v = Ir. That only is applicable to resistors to come up with two alternative forms of this equation. We know p is equal to v times I, but if we substitute this v in Ohm's law for this equation, we can come up with another expression, which is I^2r. And if we sort of substitute this I for this equation, manipulate it and solve for v and r, we can actually see that this equation is also equal to v^2 / r. So here are the three alternate forms of this equation. All three of them are equally valid. You can use any single one of them when you're talking about circuits. The only thing that's going to determine which one of these three you're going to use is just what variables you're given in a problem. That's basically it. Okay. So this is the power that's dissipated by a resistor. But where does that energy go? Remember, we said that this energy is internal friction, and friction always generates some form of heat. So what happens is that the power that's dissipated by these resistors is released in the form of heat, and in the extreme case, heat and light. So good examples of resistors are going to be light bulbs, flashlights, toasters, or hairdryers, any kind of thing that changes electrical energy into heat is a resistor. Alright? That's basically it, so let's go ahead and take a look at some examples. We've got a battery that operates at a voltage, and the battery's outputting 540 watts of energy. So how much current is the battery producing? So we've got p, which is the power output, and we've got I. And then all we have to do is figure out what the I is, and we have what the voltage is. Right? So if we're looking for the currents, we have to relate it to the power output of any circuit formula. We can't use the three equations because this is not a resistor. Remember, this is a battery, So we have to use p = vi, so that we just have to move the voltage over to the bottom. And we have p / v is equal to I. In other words, 540 / 9 is equal to 60, and that's going to be in amps. So that's basically our answer, 60 amps. Alright. Let's take a look at this bottom one here. A little bit more complicated. We've got a resistor now, so we're going to be able to use those three equations. It's attached to a battery, so it forms a simple circuit, and we're told the resistance and the currents of this circuit, and we need to figure out how much energy is released in the form of heat in 1 minute. So we're actually not solving for power in this equation, we're solving for the amount of energy, which is also equal to Δu. Both of these letters mean the same exact thing. So really, this is what we need to find. But how do we relate that back to power? Remember that this Δu, this power, is the definition of the change in energy over change in time. So we're going to relate it back to this power formula. Now we have a resistor that's attached to a battery, so we can use those three equations. And in this specific instance, we have what the resistance is and the current. So let's take a look at which one of these equations we can use. We've got p = vi through a resistor = I^2r = v^2 / r. Now we've got I and we've got I and r, so that means that we're not going to use this one or this one. We're just going to use the moreirect power equation. Right? So you could probably relate this using using Ohm's law, but that's just creating more work for yourself. Just go ahead and figure out what you have, and then just plug that into the appropriate equation. So we've got power is equal to we've got 60 milliamps. So that's 60 times 10 to the -3, and now we're going to square that. Don't forget that. Now, we have to multiply it by the resistance, which is equal to 30 kilo-ohms. So that's 30 times 10 to the 3. So this is actually going to be 108 watts of power that's dissipated. So now, we take this number here, and if we want to figure out what the amount of energy is released in 1 minute, then we have what this Δt is equal to. This is just 60 seconds. So we have what this Δt is. And we just need to figure out right here. So we've got Δu is equal to p times Δt. So in other words, it's 108 watts times the time, which is 60 seconds. So we got the amount of energy released in 1 minute due to this battery and this resistor is equal to 6,480, and that's in joules. Alright? Let me know if you guys have any questions, and we're going to take a look at a couple more practice problems. Thanks for watching.
{"url":"https://www.pearson.com/channels/physics/learn/patrick/resistors-and-dc-circuits/power-in-circuits?chapterId=8fc5c6a5","timestamp":"2024-11-12T02:14:10Z","content_type":"text/html","content_length":"504139","record_id":"<urn:uuid:1c861b57-3ad7-4c62-87ea-7f4388ee551d>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00733.warc.gz"}
Magnetic force on a current carrying wire Magnetic force on a current carrying wire • We know that current flowing in a conductor is nothing but the drift of free electron's from lower potential end of the conductor to the higher potential end • when a current carrying conductor is placed in a magnetic field ,magnetic forces are exerted on the moving charges with in the conductor • Equation -1 which gives force on a moving charge in a magnetic field can also be used for calculating the magnetic force exerted by magnetic field on a current carrying conductor (or wire) • Let us consider a straight conducting wire carrying current I is placed in a magnetic field B(x).Consider a small element dl of the qire as shown below in the figure • Drift velocity of electrons in a conductor and current I flowing in the conductor is given by I=neAv[d] Where A is the area of cross-section of the wire and n is the number of free electrons per unit volume • Magnetic force experienced by each electron in presence of magnetic field is F=e(v[d] X B) where e is the amount of charge on an electron • Total number of electron in length dl of the wire • Thus magnetic force on wire of length dl is dF=(nAdl)(ev[d] X B) if we denote length dl along the direction of current by the vector dl the above equation becomes dF=(nAev[d])(dl X B) or dF=I(dl X B) -- (12) where the quantity IdL is known as current element • If a straight wire of length l carrying current I is placed in a uniform magnetic field then force on wire would be equal to dF=I(L X B) -- (13) Direction of force • Direction of force is always perpendicular to the plane containing the current element IdL and magnetic field B • Direction of force when current element IdL and B are perpendicular to each other can also be find using either of the following rulesi) Fleming'e left hand rule:- If fore finger ,the middle finger and thumb of the left hand are stretched in such a way that the all are mutually perpendicular to each other then,if the fore finger points in the direction of the field (B) and middle finger points in the direction of current I ,the thumb will point in the direction of the force ii) Right hand palm Rule: Stretch the finger and thumb of the right hand so that they are perpendicular to each other .If the fingers point in the direction of current I and the palm in the direction of field B then the thumb will point in the direction of force
{"url":"https://physicscatalyst.com/magnetism/magnetic-force-on-current-carrying-wire.php","timestamp":"2024-11-07T06:39:18Z","content_type":"text/html","content_length":"68247","record_id":"<urn:uuid:1b61cb27-273f-462e-82c5-67389bbc5545>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00336.warc.gz"}
Understanding Real Numbers - Blue3 Academy Real numbers are the universal language of mathematics, covering all kinds of numberical values that you can imagine – from integers to fractions, irrational numbers, prime numbers, and more. Just picture a line that stretches on forever in both directions, every point on this line represents a real number. These includes numerical values like -1, 0, and 1, as well as larger values like 1000 and smaller values like 0.003 too. Real numbers include fractions like 0.5, π = 3.14159, or even square root 2 = 1.41421356237. Rational & Irrational Numbers Let’s break it down further. But first, watch this video to better understand the differences. Real numbers can be divided into two main categories: rational and irrational numbers. Rational numbers can be written as fractions a/b , where a and b are integers and b is not equal to 0 (e.g., 0.5, -0.32, 3.4242424). They either end or eventually, start to repeat their decimal On the other hand, irrational numbers are numbers whose decimals are non-terminating and non-repeating. This means that the decimals will go on forever without settling into a repeating pattern. Examples of include π = 3.14159, square root 3 = 2.71828182846, and square root 5 = 2.2360679775. Rational numbers can be further categorised into two subcategories: Integers and Fractions. Integers are whole numbers, which are not fractions, and they can be positive or negative. Fun Fact: Zero (0) is also considered a whole number! Other examples include 1, 2, -1, and -2. Of which, 1 and 2 are considered positive integers while -1 and -2 are considered negative integers. Natural numbers can also be further divided into three main types: the number one (1) itself, composite numbers, and prime numbers. Composite numbers have more than two factors and can be divided by numbers other than 1 and themselves. Examples include 4, 6, 12 and 20. Prime numbers, on the other hand, have exactly two factors (1 and itself) and cannot be divided evenly by any other number. Examples include 2, 3, 5 and 7. Fractions can be classified into two types based on their decimal representation: recurring decimals and terminating decimals. Recurring decimals have a digit or group of digits that repeat indefinitely, such as 0.3 (which is written as 0.333…). Terminating decimals, on the other hand, have a finite number of digits after the decimal point and do not repeat indefinitely. Examples include 0.125, 0.3, and 9.57. There are two types of fractions: recurring decimals and terminating decimals. If a decimal has a digit or group of digits that repeats (e.g., 0.3, 9.25), then it is considered a recurring decimal. On the other hand, terminating decimals have a finite number of decimal places. In other words, these end after a fixed number of digits after the decimal point. Examples of such decimals are 0.125, 0.3, 9.57. Solving Questions using the Order of Operation Framework Understanding the order of operations is important in maths to ensure precise calculations. To standardise this process, BODMAS (Brackets, Orders, Division and Multiplication, Addition and Subtraction) provides a clear framework for solving numerical expressions consistently, so as to navigate the complexities of mathematical computations. Application Questions Now that you have a firm grasp of real numbers and how to perform accurate calculations using the BODMAS framework, let’s put your skills to the test! Checkpoint! Answer the following questions. The table below shows the temperature of four cities at noon on a particular day in December. Singapore 27°C Shanghai 4.6°C Hong Kong 16.3°C Mongolia -21.6°C 1. Find the difference in temperature between Singapore and Mongolia. 2. Find the average temperature of the 4 cities. 3. The temperature in Saudi Arabia is midway between the temperature in Shanghai and Hong Kong. What is the temperature in Saudi Arabia? 4. If the temperature in Shanghai drops by 5.5°C one month later, find Shanghai’s temperature in one month’s time. Check your answers! Did you get it right? Want to Learn More? Understanding the fundamentals of real numbers and mastering the BODMAS framework are essential skills that you need to solve maths problems with precision and confidence. Still want more? Join our interactive live teaching sessions, where we delve into various concepts crucial for mastering the GCE O Level Elementary Mathematics exams. Stay updated on our upcoming sessions by following us on TikTok @blue3academy!
{"url":"https://blue3academy.com/understanding-real-numbers/","timestamp":"2024-11-10T14:28:21Z","content_type":"text/html","content_length":"305809","record_id":"<urn:uuid:2fcb7fcb-0940-4e5c-a536-e48ad7a952c1>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00026.warc.gz"}
What are the dimensions of 1 acre in metres? - Vypros.COM What are the dimensions of 1 acre in metres? What are the dimensions of 1 acre in metres? 4,046.86 square metres The acre is an old unit of measure that used to be defined as the area of land that a yoke of oxen could plow in one day. It was later defined as one furlong, 660 feet, by one chain, 66 feet. It is now defined as 4,046.86 square metres. What is the measurement of 1 acre? 43,560 square feet acre, unit of land measurement in the British Imperial and United States Customary systems, equal to 43,560 square feet, or 4,840 square yards. One acre is equivalent to 0.4047 hectare (4,047 square How many Metres is an acre UK? 4,047 square metres However, it is the UK where the unit is more commonly used. In the middle age in England, an acre of land was defined as an area that could be ploughed in one day by a yoke of oxen….About Acre. 1 Acre 4,840 square yards 1 Acre 4,047 square metres 1 Acre 0.4047 hectares How do you measure acres in meters? Used in the imperial system of units and the US system, the modern acre is equal to 4,840 square yards, 43,560 square feet, 4,047 square metres and 0.4047 hectares….About Acre. 1 Acre 4,840 square yards 1 Acre 4,047 square metres 1 Acre 0.4047 hectares How many lots make up an acre of land? It depends on the size of the lots. Depends on the size of a “lot”. An acre contains 43,560 square feet. As an example, say you are selling 60’X100′ lots. Divide 43,560 by 6000 and you come up with 7.26 “lots”. It depends on the size of the lot. There will be 4 quarter acre lots in an acre but 2 half acre lots. Enter some text. What is the formula for an acre of land? Multiplying length by width gives you the area, or the space inside the boundary of your property. Acreage is simply one measure of area. An easy way to remember the formula for area is A (area) = L (length) x W (width), which is the exact calculation done in this step. How much does an acre of land go for in your area? The acre is a unit of land area used in the imperial and US customary systems. It is traditionally defined as the area of one chain by one furlong (66 by 660 feet), which is exactly equal to 10 square chains, 1⁄ 640 of a square mile, or 43,560 square feet, and approximately 4,047 m 2, or about 40% of a hectare. What is the square footage of an acre of land? One acre is equivalent to 43,560 square feet, 4,047 square meters, 4,840 square yards, or 0.0015625 square miles. There are over 4,000 meters in an acre. An acre is defined as a unit of the land measurement that gives an area commonly used in the US customary and the imperial systems.
{"url":"https://vypros.com/what-are-the-dimensions-of-1-acre-in-metres/","timestamp":"2024-11-03T06:26:51Z","content_type":"text/html","content_length":"59140","record_id":"<urn:uuid:8da770bf-b3aa-4935-9318-7aa7df26c2ef>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00614.warc.gz"}
Splittable random numbers Thomas DuBuisson thomas.dubuisson at gmail.com Wed Nov 10 13:46:47 EST 2010 For anyone still interested, I've implemented something very much like what Gideon et al suggested to SPJ. It's in (but not exported from) DRBG's Crypto.Random.DRBG module [1]: split :: CryptoRandomGen g => g -> Either GenError (g,g) split g = do (ent,g') <- genBytes (genSeedLength `for` g) g new <- newGen ent return (g', new) Their words were: > The generator G is a pair comprising a crypto key G.k and an integer counter > (the “message”) G.c. That is embodied in the CryptoRandomGen, typically in the manner recommended by NIST SP 800-90 though more instances are possible. > The (next G) operation returns a pair: 1. a random > integer r obtained by encrypting G.c with G.k, and 2. a new generator G’ > such that G’.k = G.k and G’.c = G.c + 1. Specifically, this is the CTR DRBG. Someone noted "other cryptographic options exist" which I take to mean the hash or HMAC based generators could be acceptable. > The (split G) operation is > similar, returning the same G’, except that instead of returning a random > integer r it returns a third generator G’’ such that G’’.k = r and G’’.c = > 0. The exact starting value of the counter is arbitrary (NIST uses 1, I think), but the use of the output of one generator to seed the new generator is what's important. This is exactly what the above 'split' Some questions: 1) Do we want to go down this path and replace StdGen with something that has stronger theoretical backing for splits? Or perhaps we should just let things be and users looking for something stronger can use a hackage package? 2) What level of performance are we be looking for? 3) What deps do we consider acceptable for a replacement? 'random' has very few deps, DRBG has too many (for a core library). The proposed solution necessitates a crypto algorithm which is either slow (Crypto) or requries bytestring and maybe FFI+C (SHA, SHA2, cryptohash, AES). Also, Ian asked if we could split a non-cryptographic generator using a cryptographic one. I don't see why not (pretend the tuple is a fitting data structure): split2 :: RandomGen g, CrytpoRandomGen c => (g,c) -> ((g,c), (g,c)) split2 (g1,c1) = do (c2,c3) <- split c1 (int,c4) <- crandom c2 let g2 = newStdGen int return ( (g1, c3), (g2, c4) ) [1] http://hackage.haskell.org/packages/archive/DRBG/0.1.1/doc/html/src/Crypto-Random-DRBG.html#split > A suitable block cipher system might be 128-bit AES (Rijndael). > Unencumbered implementations exist in a variety of languages, and > performance is pretty good and will improve dramatically as hardware support > improves. I’d pick both crypto key size and the size of the result r to be > 128 bits, and employ a 64 bit counter c. Other crypto options exist. > From: Simon Peyton-Jones > Sent: Wednesday, November 03, 2010 3:11 AM > To: Burton Smith; Gideon Yuval (Gideon Yuval) > Cc: Tolga Acar; Simon Peyton-Jones > Subject: RE: Random number generation > Burton, Gideon, Tolga > Aha, that’s interesting. I’d never seen a random number generator based on > crypto, but it seems like an attractive idea. As I understand it, > successive calls to ‘next’ will give you > encrypt(0), encrypt(1), encrypt(2), encrypt(3),.... > Is this standard? Does it have provably good randomness properties, (cycle > length, what else?) like other RNGs? Or does it simply seem very plausible? > Can I send it round to the Haskell mailing list, in the hope that someone > will turn the idea into a library? (Ideally I’d like to make claims about > the randomness properties in doing so, hence my qns above.) > From: Gideon Yuval (Gideon Yuval) > Sent: Wednesday, November 03, 2010 7:15 AM > To: Simon Peyton-Jones; Burton Smith > Cc: Tolga Acar > Subject: RE: Random number generation > As long as the key, and the non-counting part of the counter, are kept” > secret”, anyone who can distinguish these pseudorandoms from real random, in > less than 2^128 steps, has a nice paper for crypto-2011 (this is known as > “provable security”) concerning a weakness in AES128. > One exception: real randoms have a birthday paradox; the pseudorandoms > suggested do not. If you care, you can: > (1) Limit the counter to 2^32 steps (paradox has 2^-64 probability) or > even 2^16 (2^-96), then rekey; or > (2) XOR 2 such encrypted counters, with different keys; or > (3) XOR 3 successive values for the same counter (just possibly cheaper; > top-of-head idea). > More hard-core: swap the position of key & message: encrypting a constant > “secret” with 1,2,3,4…. Gives pseudorandoms with no birthday paradox. > From: Tolga Acar > Sent: 03 November 2010 15:50 > To: Gideon Yuval (Gideon Yuval); Simon Peyton-Jones; Burton Smith > Subject: RE: Random number generation > Simon, > The general idea is not really that new in the crypto area with constraints > Gideon describes, of course. That is typically called a PRNG – Pseudo Random > Number Generator, or in another parlance, Deterministic Random Bit > Generators (DRBG). The DRBG constructions based on hash functions and block > ciphers are even standardized in NIST publication SP800-90 (even though I > may not recommend every one of them). > As for the construction below, that is based on the AES block cipher, that > essentially takes advantage of the PRP (Pseudo Random Permutation) property > of the AES block cipher, as each block cipher ought to be. So, as Gideon > outlines below, if you fix the key, the PRP gives you a random-looking (or, > in other terms, indistinguishable from random) output that no one without > the secret key and the “state” can generate or easily predict. Assuming an > ideal cipher (and AES is a close approximation to it), the probability is > believed to be the birthday paradox - no counterexample or a proof exists > without assumptions; so we stick to the birthday bound. > There ought to be papers on this somewhere. If not, that condensed > information is spread across many papers and is part of the crypto folklore, > I’d say. > From: Burton Smith > Sent: 03 November 2010 19:03 > To: Simon Peyton-Jones > Cc: Gideon Yuval (Gideon Yuval); Tolga Acar > Subject: RE: Random number generation > Just two points of further clarification: > 1. All PRNGs used in the technical computing space fail the birthday > paradox criterion (i.e. have full period), so we really need not worry about > this. Also, there are other mitigating factors, e.g. suppose we are using > the PRNG to generate pseudorandom residues mod n << 2^128; the paradox is > happily present there. > 2. The big innovation in this scheme is that the rekeying operation > done by split creates a new generator with independence guaranteed by > “provable security” in the sense Gideon mentioned – if someone can find > something nonrandom in the correlation between G’ and G’’, say, then it > amounts to a weakness in AES128 and is publishable. So it’s yet another > example of reducibility, common in our field: “if you can easily transform a > known/famous hard problem P into this other problem Q, Q must be hard”. > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://www.haskell.org/mailman/listinfo/libraries More information about the Libraries mailing list
{"url":"https://mail.haskell.org/pipermail/libraries/2010-November/015015.html","timestamp":"2024-11-10T14:58:31Z","content_type":"text/html","content_length":"12401","record_id":"<urn:uuid:8e032747-9c10-47c2-9ef1-1b53790b6a94>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00274.warc.gz"}
AIOU Course Code 1430-2 Solved Assignment Spring 2022 - AIOU Learning AIOU Course Code 1430-2 Solved Assignment Spring 2022 ASSIGNMENT No. 2 1. 1 (a) Describe the hypothesis testing process for population for dependent samples. The techniques for hypothesis testing depend on • the type of outcome variable being analyzed (continuous, dichotomous, discrete) • the number of comparison groups in the investigation • whether the comparison groups are independent (i.e., physically separate such as men versus women) or dependent (i.e., matched or paired such as pre- and post-assessments on the same In estimation we focused explicitly on techniques for one and two samples and discussed estimation for a specific parameter (e.g., the mean or proportion of a population), for differences (e.g., difference in means, the risk difference) and ratios (e.g., the relative risk and odds ratio). Here we will focus on procedures for one and two samples when the outcome is either continuous (and we focus on means) or dichotomous (and we focus on proportions). General Approach: A Simple Example The Centers for Disease Control (CDC) reported on trends in weight, height and body mass index from the 1960’s through 2002.^1 The general trend was that Americans were much heavier and slightly taller in 2002 as compared to 1960; both men and women gained approximately 24 pounds, on average, between 1960 and 2002. In 2002, the mean weight for men was reported at 191 pounds. Suppose that an investigator hypothesizes that weights are even higher in 2006 (i.e., that the trend continued over the subsequent 4 years). The research hypothesis is that the mean weight in men in 2006 is more than 191 pounds. The null hypothesis is that there is no change in weight, and therefore the mean weight is still 191 pounds in 2006. Null Hypothesis H[0]: μ= 191 (no change) Research Hypothesis H[1]: μ> 191 (investigator’s belief) In order to test the hypotheses, we select a random sample of American males in 2006 and measure their weights. Suppose we have resources available to recruit n=100 men into our sample. We weigh each participant and compute summary statistics on the sample data. Suppose in the sample we determine the following: Do the sample data support the null or research hypothesis? The sample mean of 197.1 is numerically higher than 191. However, is this difference more than would be expected by chance? In hypothesis testing, we assume that the null hypothesis holds until proven otherwise. We therefore need to determine the likelihood of observing a sample mean of 197.1 or higher when the true population mean is 191 (i.e., if the null hypothesis is true or under the null hypothesis). We can compute this probability using the Central Limit Theorem. Specifically, (Notice that we use the sample standard deviation in computing the Z score. This is generally an appropriate substitution as long as the sample size is large, n > 30. Thus, there is less than a 1% probability of observing a sample mean as large as 197.1 when the true population mean is 191. Do you think that the null hypothesis is likely true? Based on how unlikely it is to observe a sample mean of 197.1 under the null hypothesis (i.e., <1% probability), we might infer, from our data, that the null hypothesis is probably not true. Suppose that the sample data had turned out differently. Suppose that we instead observed the following in 2006: How likely it is to observe a sample mean of 192.1 or higher when the true population mean is 191 (i.e., if the null hypothesis is true)? We can again compute this probability using the Central Limit Theorem. Specifically, There is a 33.4% probability of observing a sample mean as large as 192.1 when the true population mean is 191. Do you think that the null hypothesis is likely true? Neither of the sample means that we obtained allows us to know with certainty whether the null hypothesis is true or not. However, our computations suggest that, if the null hypothesis were true, the probability of observing a sample mean >197.1 is less than 1%. In contrast, if the null hypothesis were true, the probability of observing a sample mean >192.1 is about 33%. We can’t know whether the null hypothesis is true, but the sample that provided a mean value of 197.1 provides much stronger evidence in favor of rejecting the null hypothesis, than the sample that provided a mean value of 192.1. Note that this does not mean that a sample mean of 192.1 indicates that the null hypothesis is true; it just doesn’t provide compelling evidence to reject it. In essence, hypothesis testing is a procedure to compute a probability that reflects the strength of the evidence (based on a given sample) for rejecting the null hypothesis. In hypothesis testing, we determine a threshold or cut-off point (called the critical value) to decide when to believe the null hypothesis and when to believe the research hypothesis. It is important to note that it is possible to observe any sample mean when the true population mean is true (in this example equal to 191), but some sample means are very unlikely. Based on the two samples above it would seem reasonable to believe the research hypothesis when x̄ = 197.1, but to believe the null hypothesis when x̄ =192.1. What we need is a threshold value such that if x̄ is above that threshold then we believe that H[1] is true and if x̄ is below that threshold then we believe that H[0] is true. The difficulty in determining a threshold for x̄ is that it depends on the scale of measurement. In this example, the threshold, sometimes called the critical value, might be 195 (i.e., if the sample mean is 195 or more then we believe that H[1] is true and if the sample mean is less than 195 then we believe that H[0] is true). Suppose we are interested in assessing an increase in blood pressure over time, the critical value will be different because blood pressures are measured in millimeters of mercury (mmHg) as opposed to in pounds. In the following we will explain how the critical value is determined and how we handle the issue of scale. First, to address the issue of scale in determining the critical value, we convert our sample data (in particular the sample mean) into a Z score. We know from the module on probability that the center of the Z distribution is zero and extreme values are those that exceed 2 or fall below -2. Z scores above 2 and below -2 represent approximately 5% of all Z values. If the observed sample mean is close to the mean specified in H[0] (here m =191), then Z will be close to zero. If the observed sample mean is much larger than the mean specified in H[0], then Z will be large. In hypothesis testing, we select a critical value from the Z distribution. This is done by first determining what is called the level of significance, denoted α (“alpha”). What we are doing here is drawing a line at extreme values. The level of significance is the probability that we reject the null hypothesis (in favor of the alternative) when it is actually true and is also called the Type I error rate. α = Level of significance = P(Type I error) = P(Reject H[0] | H[0] is true). Because α is a probability, it ranges between 0 and 1. The most commonly used value in the medical literature for α is 0.05, or 5%. Thus, if an investigator selects α=0.05, then they are allowing a 5% probability of incorrectly rejecting the null hypothesis in favor of the alternative when the null is in fact true. Depending on the circumstances, one might choose to use a level of significance of 1% or 10%. For example, if an investigator wanted to reject the null only if there were even stronger evidence than that ensured with α=0.05, they could choose a =0.01as their level of significance. The typical values for α are 0.01, 0.05 and 0.10, with α=0.05 the most commonly used value. (b) An innovator felt that its new electric motor derive would capture 48% of the regional market within 1 year. There are 5000 users of motor derives in the region. After sampling 10% of these users a year later, the company found that 43% of them were using the new derives. At 1% level of significance, should we conclude that the company failed to reach the market share goal? 1. 2 The University of Bookstore is facing significant competition from off-campus book stores and they are considering targeting a specific class in order to retain student business. The bookstore randomly selected 150 freshmen and 175 sophomores. They found that 46% of the freshmen and 40% of the sophomores purchase all of their textbooks at University Bookstore. (a) At 1% level of significance, is there a significant difference in the proportions of freshman and sophomores who purchase entirely at the University Bookstore? (b) At 5% level of significance, is there a significant difference in the proportions of freshman and sophomores who purchase entirely at the University Bookstore? 1. 3 A coal-fired power plant is considering two different systems for pollution abatement. The first system has reduced the emission of pollutants to acceptable levels 68% of the times as determined from 200 air samples. The second, more expensive system has reduced the emissions of pollutants to acceptable levels 76% of the times as determined from 250 air samples. If the expensive system is significantly more effective than the inexpensive system in reducing pollutants to acceptable levels, then the management of the power plant will install the expensive system. (a) Which system will be installed if management uses a significant level of 1% in making its decision? (b) Which system will be installed if management uses a significant level of 5% in making its decision? 1. 4 Zippy Cola is studying the effect of its latest advertising campaign. People chosen at random were called and asked how many cans of Zippy Cola they had bought in the past week and how many Zippy Cola advertisements they had either read or seen in the past week. X(number of ads) 3 7 4 2 0 4 1 2 Y(cans purchased) 11 18 9 4 7 6 3 8 • Develop the estimating equation. (b) Calculate sample coefficient of determination and sample coefficient of correlation. 1993. 5 Yamaha Motorcycles began producing three models of mopeds in 1993. For the three years 1993 through 1995, sales (in dollars) are as follows: Model Average Annual Price Units Sold (0000) I 139 155 149 3.7 4.1 7.6 II 169 189 189 2.3 4.6 8.1 III 199 2.5 219 1.6 2.1 3.4 • Calculate the weighted average of relative’s price indices using the prices and quantities from 1995 as the base and weights. P1 P2 P0 Q1 Q2 Q0 P1 / P0 *100 P2 / P0 * 100 Base Value P0Q0 I 139 155 149 3.7 4.1 7.6 93.28 104.26 1132.4 105630.2 II 169 189 189 2.3 4.6 8.1 89.41 100 1530.9 136877.7 III 199 2.5 219 1.6 2.1 3.4 90.86 1.14 744.6 67654.3 Sum 273.55 3407.9 WAR = 932231.04 / 3407.9 WAR = 273.55 (b) Calculate the weighted average of relative’s price indices using total dollar values for each year as the weights and 1995 as the base period. P1 P2 P0 Q1 Q2 Q0 P1 / P0 *100 P2 / P0 * 100 Base Value P0Q0 I 139 155 149 3.7 4.1 7.6 93.28 104.26 1132.4 105630.2 II 169 189 189 2.3 4.6 8.1 89.41 100 1530.9 136877.7 III 199 2.5 219 1.6 2.1 3.4 90.86 1.14 744.6 67654.3 Sum 273.55 205.4 3407.9 WAR = 699982.66 / 3407.9 WAR = 205.4
{"url":"https://aioulearning.com/aiou-course-code-1430-2-solved-assignment-spring-2022/","timestamp":"2024-11-09T09:57:29Z","content_type":"text/html","content_length":"171824","record_id":"<urn:uuid:008054d4-d228-48f6-b67a-ee86d5a651e7>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00304.warc.gz"}
Time-dependent linear response theory for excited states. This command can appear in the global context. Flag for whether the excited states are computed from Configutation Interaction Singles (CIS) or Tamm-Dancoff Approximation (TDA). □ The type is bool □ The default is false □ The value must be one of: ☆ true - Do CIS/TDA for excited states. ☆ false - Do Random-Phase Approximation (RPA)/Time-Dependent Density Functional Theory (TDDFT) for excited states. Convergence threshold for the iterative subspace solver, tested on on the infinity norm of the residual vectors. □ The type is real □ The default is 1e-6 □ The value must be positive Whether to use frozen-core approximation. □ The type is bool □ The default is false Flag for whether to construct the response matrix explicitly. □ The type is bool □ The default is false □ The value must be one of: ☆ true - Construct the response matrix explicitly and store it in memory. This option is only available for response calculations based on a Hartree-Fock wavefunction, and should be used only for testing purposes. ☆ false - Do not construct the response matrix explicitly, but compute the matrix-vector product instead. This will require iterative solvers. Threshold for removing linearly dependent vectors based on the square of singular values of the matrix. □ The type is real □ The default is 1e-10 □ The value must be positive Load the AO transition density matrix from file (in arma::arma_binary) format. □ The type is string □ There is no default value. Maximum number of iterations for the iterative subspace solver. □ The type is int □ The default is 128 □ The value must be positive Only calculate excited states with excitation energy (in eV) greater than the specified value. □ The type is real □ There is no default value. □ The value must be nonnegative Number of excited states to be calculated. □ The type is int □ The default is 3 □ The value must be nonnegative Specify the name of a set of results. This option is deprecated. □ The type is string □ There is no default value. Threshold for orthonormalization of the suspace vectors. □ The type is real □ The default is 1e-10 □ The value must be positive Print level. □ The type is int □ There is no default value. □ The value must be one of: ☆ -2 - No output ☆ -1 - Minimum output ☆ 0 - Output that doesn't scale with system size ☆ 1 - Output that scales linearly with system size ☆ 2 - (Debugging) output that scales quadratically with system size ☆ 3 - (Debugging) output that scales cubically with system size Save the AO transition density matrix to file (in arma::arma_binary) format. □ The type is string □ There is no default value. Selectively converge the excited state with which the excitations are from the specified hole (i.e. occupied) orbitals. □ The type is [int] □ There is no default value. Specify spin of the excited states. If no input value is specified, the program will assign default value based on the spin of the reference calculation: for restricted closed-shell reference state, the default spin for excited states will be singlet; otherwise, the default spin will be mixed. □ The type is string □ There is no default value. □ The value must be one of: ☆ singlet - Solve for Singlet excited states only. Only effective for closed-shell reference. This is the default for a closed-shell reference state. ☆ triplet - Solve for Triplet excited states only. Only effective for closed-shell reference. ☆ mixed - Solve for excited states of all POSSIBLE spin. This is the default for an unrestriced open-shell reference state. For a closed-shell reference, this will give both singlet and triplet excited states;
{"url":"https://software.entos.ai/qcore/documentation/commands/td/","timestamp":"2024-11-08T07:15:38Z","content_type":"text/html","content_length":"40218","record_id":"<urn:uuid:d5ea68e6-f164-432f-b772-46af34fe19b6>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00489.warc.gz"}
Best Description: Division Algorithm For Polynomials Proof - The Boat Yacht Best Description: Division Algorithm For Polynomials Proof Best Description: Division Algorithm For Polynomials Proof. All answers, details & pictures are below. Division Algorithm For Polynomials Proof Division Algorithm for Polynomials states that given two polynomials, a dividend polynomial, and a nonzero divisor polynomial, there is a unique quotient polynomial and a unique remainder polynomial. Here, are the Division Algorithm For Polynomials Proof. Let’s denote the divisor polynomial as P(x) and the divisor polynomial as D(x), where D(x) is nonzero. We want to find a quotient polynomial Q(x) and a remainder polynomial R(x), like this: • P(x) = D(x) * Q(x) + R(x) We can use polynomial long division to show the existence of quotient and remaining polynomials. 1. Take the highest order term of the dividing polynomial P(x) and the highest order term of the dividing polynomial D(x). Divide the highest order P(x) term by the highest order D(x) term to obtain the leading term of the quotient polynomial Q(x). 2. Multiply the divisor polynomial D(x) by the leading term of the quotient polynomial Q(x) to get a temporary polynomial T(x). 3. Subtract T(x) from P(x) to get a new polynomial P1(x). P1(x) is obtained by subtracting the terms of T(x) from the corresponding terms of P(x). 4. Repeat steps 1-3 with the new polynomial P1(x) as the numerator until the degree of P1(x) is less than the degree of D(x). 5. The final polynomial obtained when the degree of P1(x) is less than the order of D(x), is the remaining polynomial R(x). 6. The quotient polynomial Q(x) is the sum of all the quotient terms obtained in step 1 at each stage of the division. / Division Algorithm For Polynomials Proof By performing the polynomial long division, we can get the quotient polynomial Q(x) and the remainder polynomial R(x). Therefore, we have demonstrated the existence of Q(x) and R(x). To prove the uniqueness of the quotient and remainder polynomials, we assume that there are two sets of quotients and remainder polynomials Q1(x), R1(x) and Q2(x), R2(x) and satisfy the following: • P(x) = D(x) * Q1(x) + R1(x) • P(x) = D(x) * Q2(x) + R2(x) • Subtracting these two equations we get: • D(x) * (Q1(x) – Q2(x)) = R2(x) – R1(x) Since the degree of D(x) is not zero, the degree of the left side is at least the degree of D(x). But the degree of the right side is less than the degree of R2(x) – R1(x), D(x), because R2(x) and R1 (x) are the remainders from polynomial long division. This contradiction implies that the only probability is that the polynomials on both sides of the equation are equally zero. Therefore, we have: • Q1(x) – Q2(x) = 0 and R2(x) – R1(x) = 0 Hence, Q1(x) = Q2(x) and R1(x) = R2(x), which proves the uniqueness of the quotient and remainder polynomials. We proved the Division Algorithm for polynomials by demonstrating both the existence and uniqueness of quotient and remainder polynomials. Division Algorithm For Polynomials Proof: This article concludes our article on polynomial division with several tasks. There is also a solution for every task. Additionally, you will find a detailed video with polynomial division tasks. Polynomial division problem 1 • Calculate the polynomial part below division algorithm for polynomials proof Solution to task 1 The term with the highest exponent in the first polynomial is . To get a with the polynomial , we need to multiply it by , i.e. division algorithm for polynomials proof Division Algorithm For Polynomials Proof: We add to x^2the next term -4xand subtract the result of the previous multiplication from it. division algorithm for polynomials proof To -7x we add the next part of the first polynomial -21 and get. division algorithm for polynomials proof The term with the highest exponent is now –7x. We therefore need to multiply the second polynomial by –7, i.e . division algorithm for polynomials proof Now we subtract again. division algorithm for polynomials proof This brings us to the end of polynomial division. Polynomial division problem 2 Calculate the following polynomial division. division algorithm for polynomials proof Solution to task 2 The term with the highest exponent in the first polynomial is x^2. x – 8 To make this vanish with the second polynomial , we need to xmultiply the second polynomial by , i.e. division algorithm for polynomials proof We add to x^2the next term -3xand subtract the result of the previous multiplication from it. To 5x we add the next part of the first polynomial –40 and get. The term with the highest exponent is now 5x. We therefore need to multiply the second polynomial by 5, i.e. Now we subtract again. Division Algorithm For Polynomials Proof How do you verify the division algorithm for polynomials? In polynomial division, the degree of the divisor is greater than or equal to the divisor. Multiply the divisor polynomial and the quotient, add to the remainder, if any, to verify the result. Division Algorithm For Polynomials Proof video are below. This brings us to the end of polynomial division. https://studyflix.de/mathematik/polynomdivision-aufgaben-2430 Division Algorithm For Polynomials Proof topic is over. Leave a Comment
{"url":"http://theboatyacht.com/division-algorithm-for-polynomials-proof/","timestamp":"2024-11-10T10:39:59Z","content_type":"text/html","content_length":"180429","record_id":"<urn:uuid:cc3c6a50-017e-4a43-835a-f35299260404>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00798.warc.gz"}
[QSMS-BK21 Toric Geometry Seminar] Moduli Space of Special Lagrangians in the Complement of an Anticanonical Divisor • Date : 2022-04-11 10:00 ~ 17:00 • Place : 129-309 (SNU) • Speaker : 김용환 (SNU) • Title : Moduli Space of Special Lagrangians in the Complement of an Anticanonical Divisor • Abstract : We first briefly review the construction given by the SYZ conjecture and introduce wall-crossing phenomena to provide some background for the paper "Mirror Symmetry and T-Duality in the Complement of an Anticanonical Divisor" by Auroux. Then we investigate the behavior of special Lagrangians and their complexified moduli space. We end by showing that these special Lagrangians have nice properties that enable us to define a superpotential function for these Lagrangians, when equipped with an additional flat unitary connection.
{"url":"https://qsms.math.snu.ac.kr/index.php?mid=board_sjXR83&order_type=asc&listStyle=viewer&page=2&document_srl=2276&sort_index=regdate","timestamp":"2024-11-14T14:50:30Z","content_type":"text/html","content_length":"21083","record_id":"<urn:uuid:7ff985f0-a96e-418a-8b72-7fe0953a1831>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00155.warc.gz"}
John D. Cook Sometimes code is easier to understand than abstract math. A few days ago I was having a hard time conveying bias, consistency, and efficiency in a statistics class. Writing some pseudo-code on the board seemed to help clear things up. Loops and calls to random number generation routines are more tangible than discussions about random samples. Later I turned the pseudo-code into Python code (after all, Python is supposed to be “executable pseudo-code”) and fancied it up a bit. The following page gives some explanation, some plots of the output, and the source code. The difference between an unbiased estimator and a consistent estimator Biased estimators An unbiased estimator, very roughly speaking, is a statistic that gives the correct result on average. For a precise definition, see Wikipedia. Unbiasedness is an intuitively desirable property. In fact, it seems indispensable at first. In the colloquial sense, “bias” is practically synonymous with self-serving dishonesty. Who wants a self-serving, dishonest statistical estimate? But it’s important to remember that “bias” in statistical sense has a technical meaning that may not correspond to the colloquial meaning. Here’s the big problem with statistical bias: if U is an unbiased estimator of θ, f(U) is NOT an unbiased estimator of f(θ) in general. For example, standard deviation is the square root of variance, but the square root of an unbiased estimator for variance is not an unbiased estimator for standard deviation. This shows bias has nothing to do with accuracy, since the square root of an accurate estimation of variance is an accurate estimate of standard deviation. In fact, unbiased estimators can be terrible. The fact that unbiasedness is not preserved under transformations calls into question its usefulness. People seldom care directly about abstract statistical parameters directly. Instead they care about some calculation based on those parameters. An unbiased estimate of the parameters does not generally lead to an unbiased estimate of what people really want to estimate. Unbiased estimators can be absurd An estimator in statistics is a way of guessing a parameter based on data. An estimator is unbiased if over the long run, your guesses converge to the thing you’re estimating. Sounds eminently reasonable. But it might not be. Suppose you’re estimating something like the number of car accidents per week in Texas and you counted 308 the first week. What would you estimate is the probability of seeing no accidents the next If you use a Poisson model for the number of car accidents, a very common assumption for such data, there is a unique unbiased estimator. And this estimator would estimate the probability of no accidents during a week as 1. Worse, had you counted 307 accidents, the estimated probability would be −1! The estimator alternates between two ridiculous values, but in the long run these values average out to the true value. Exact in the limit, useless on the way there. A slightly biased estimator would be much more practical. See Michael Hardy’s article for more details: An_Illuminating_Counterexample.pdf
{"url":"https://www.johndcook.com/blog/tag/bias/","timestamp":"2024-11-13T12:54:16Z","content_type":"text/html","content_length":"51641","record_id":"<urn:uuid:437472b3-6b6d-4904-ad14-686b93f04b7b>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00422.warc.gz"}
First-passage kinetic Monte Carlo method We present an efficient method for Monte Carlo simulations of diffusion-reaction processes. Introduced by us in a previous paper, our algorithm skips the traditional small diffusion hops and propagates the diffusing particles over long distances through a sequence of superhops, one particle at a time. By partitioning the simulation space into nonoverlapping protecting domains each containing only one or two particles, the algorithm factorizes the N -body problem of collisions among multiple Brownian particles into a set of much simpler single-body and two-body problems. Efficient propagation of particles inside their protective domains is enabled through the use of time-dependent Green's functions (propagators) obtained as solutions for the first-passage statistics of random walks. The resulting Monte Carlo algorithm is event-driven and asynchronous; each Brownian particle propagates inside its own protective domain and on its own time clock. The algorithm reproduces the statistics of the underlying Monte Carlo model exactly. Extensive numerical examples demonstrate that for an important class of diffusion-reaction models the algorithm is efficient at low particle densities, where other existing algorithms slow down severely. ASJC Scopus subject areas • Statistical and Nonlinear Physics • Statistics and Probability • Condensed Matter Physics Dive into the research topics of 'First-passage kinetic Monte Carlo method'. Together they form a unique fingerprint.
{"url":"https://nyuscholars.nyu.edu/en/publications/first-passage-kinetic-monte-carlo-method","timestamp":"2024-11-04T10:44:44Z","content_type":"text/html","content_length":"53720","record_id":"<urn:uuid:4defad0b-20be-4dfd-a01d-72b309a4f17d>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00653.warc.gz"}
Seminars: Guram Bezhanishvili, Modal operators on rings of continuous functions: A generalization of Gelfand and Jonsson-Tarski dualities Abstract: Jonsson-Tarski duality in modal logic generalizes the celebrated Stone duality for Boolean algebras by modeling modal operators on Boolean algebras by means of continuous relations on Stone spaces. But the notion of continuous relation makes perfect sense for the class of compact Hausdorff spaces. Gelfand duality generalizes Stone duality to compact Hausdroff spaces by working with continuous functions instead of clopens. I will discuss how to define modal operators on rings of continuous functions by means of continuous relations on compact Hausdorff spaces. The resulting duality will on the one hand generalize Gelfand duality, and on the other Jonsson-Tarski duality. Language: English
{"url":"https://m.mathnet.ru/php/seminars.phtml?option_lang=eng&presentid=28380","timestamp":"2024-11-11T20:29:46Z","content_type":"text/html","content_length":"9011","record_id":"<urn:uuid:0c155516-f391-48e5-9663-11bd2945ed35>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00086.warc.gz"}
Re: Why Does the Universe Exist? Some Perspectives from Our Physics Project—Stephen Wolfram Writings On Fri, Aug 12, 2022 at 2:04 AM Telmo Menezes <te...@telmomenezes.net> > Hi Jason, > This is really interesting, thanks for sharing. Since Wolfram started > going in this direction, something that occurs to me is this: hypergraphs > are perhaps one of the most general mathematical constructs that can be > conceived of. Almost everything else can be seen as a special case of > hypergraphs. Like you say, with the update rules, we shouldn't be surprised > if they are equivalent to the UD. My scepticism is this: is anything being > gained in terms of explanatory power? Should we be surprised that such a > powerful representation can contain the rules of our reality? I do admit > that I have to study these ideas in more detail, and there is something > really compelling about hypergraphs + update rules. That is a good question. I am not familiar with them myself, but my understanding is they do not provide for any form of computation beyond what is turing computable, so in that sense, I don't know that they provide any additional explanatory power beyond the simple statement that all computations exist. A commenter on my site recently asked, what can we say about the "computer" that computes all these computations. My reply was: "There is no single one. There are infinite varieties of different TMs, and all can exist Platonically/Arithmetically. Gregory Chaitin discovered an equation whose structure models LISP computers. There are likewise other equations corresponding to the Java Virtual Machine, and the Commodore 64. All these Turing machines, and their execution traces of every computer program they can run, exist in math in the same sense that the Mandelbrot set or the decimal expansion of Pi exist in math. Despite the infinite variety of architectures for different Turing machines, their equivalence (in the Turing computability sense) makes the question of “Which Turing machine is running this universe?” impossible to answer, beyond saying, “all of them are.”" I think hypergraphs, then, would be just one more mathematical object we could add to the heap of Turing universal mathematical objects which could (and would, if Platonism is correct) underlie the computations of our > "As soon as one starts talking about “running programs” some people will > immediately ask “On what computer?” But a key intellectual point is that > computational processes can ultimately be defined completely abstractly, > without reference to anything like a physical computer. " My same reply also provided an explanation/argument, which is applicable to anyone who accepts simple truths concerning abstract objects have definite and objective true/false values, paired with a rejection of philosophical zombies. I think John rejects zombies, so he would have to reject objective truth to believe a physical computer is necessary to produce observers. Below is what I wrote: The way I like to think about it is this: If one is willing to believe that truth values for mathematical relations like “2 + 2 = 4” can exist and be true independently of the universe or someone writing it down, or a mathematician thinking about it, that is all you need. For if the truth values of certain simple relations have an independent existence, then so to do the truth values of far more complex equations. Let’s call the Diophantine equation that computes the Wave Function of the Hubble Volume of our universe “Equation X”. Now then, it becomes a question of pure arithmetic, whether it is true or false that: “In Equation X, does the universal state variable U, at time step T contain a pattern of electrons that encode to the string: ‘why does the existence of Universal Equations imply the existence of iterative search processes for solutions?'” If that question has a definitive objective truth, then it is the case that in the universe U, at time step T, in equation X, there is some person in that universe who had a conscious thought, and wrote it down and it got organized into a pattern of electrons which anyone who inspects this vast equation with its huge variables could see. Once you get to this point, the last and final step is to reject the possibility that the patterns found in these equations, which behave and act like they are conscious, and claim to be conscious, are philosophical zombies. In other words, to accept that they are conscious beings, just like those who exist in “physical” universes (assuming there is any possible distinction between a physical universe, and a physical universe computed by a Platonic or Arithmetic Turing Machine). > Oh boy, John Clark is not going to like this :) > Telmo. > Am Do, 11. Aug 2022, um 20:35, schrieb Jason Resch: > https://writings.stephenwolfram.com/2021/04/why-does-the-universe-exist-some-perspectives-from-our-physics-project/ > I found this fascinating. It appears to have many similarities with the > type of physical reality that emerges from then universal dovetailer, with > new ways of explaining it and some new insights. > Jason > -- > You received this message because you are subscribed to the Google Groups > "Everything List" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to everything-list+unsubscr...@googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/everything-list/CA%2BBCJUiFsW5z1nPmXdZUNS2_StB%2B_cZjP5tX6gTndExtfxJOvg%40mail.gmail.com > <https://groups.google.com/d/msgid/everything-list/CA%2BBCJUiFsW5z1nPmXdZUNS2_StB%2B_cZjP5tX6gTndExtfxJOvg%40mail.gmail.com?utm_medium=email&utm_source=footer> > . > -- > You received this message because you are subscribed to the Google Groups > "Everything List" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to everything-list+unsubscr...@googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/everything-list/3c907042-e54c-44e2-8969-6d02cd2db5b4%40www.fastmail.com > <https://groups.google.com/d/msgid/everything-list/3c907042-e54c-44e2-8969-6d02cd2db5b4%40www.fastmail.com?utm_medium=email&utm_source=footer> > . You received this message because you are subscribed to the Google Groups "Everything List" group. To unsubscribe from this group and stop receiving emails from it, send an email to everything-list+unsubscr...@googlegroups.com. To view this discussion on the web visit
{"url":"https://www.mail-archive.com/everything-list@googlegroups.com/msg93108.html","timestamp":"2024-11-11T08:40:30Z","content_type":"text/html","content_length":"18259","record_id":"<urn:uuid:7df3839b-6b13-402f-af6e-526aba5af439>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00061.warc.gz"}
Evaluating Pyramids Worksheet - 15 Worksheets.com Evaluating Pyramids Worksheet Description This worksheet introduces students to the concept of energy flow in an ecosystem, specifically detailing the energy values at each trophic level per square meter annually. Displaying an energy pyramid, it provides energy values for grass, grasshoppers, and squirrels, representing producers, primary consumers, and secondary consumers, respectively. Students are tasked with determining the amount of energy transferred between these levels, using the given energy values. Through a set of questions, they are directed to deduce the energy consumed by grasshoppers from the grass and the percentage of energy passed on to the squirrels. To tackle this worksheet effectively, students should begin by closely observing the provided energy values for each trophic level. By comparing the energy at one level to the next, they can calculate the percentage of energy transferred. For instance, to find out how much energy grasshoppers derive from grass, they’d refer to the respective energy values. Subsequently, they can compute the energy percentage passed on to squirrels by comparing the energy values of grasshoppers and squirrels. The primary objective of this worksheet is to teach students about the dynamics of energy transfer within an ecosystem, emphasizing the principle that significant energy is lost at each trophic level. It strives to instill an understanding of how only a fraction of the consumed energy is utilized and passed on to the next level. By providing numerical values, the worksheet allows students to quantitatively analyze and visualize these energy dynamics, fostering a deeper grasp of ecological principles. Through hands-on calculations and labeling tasks, students not only learn theoretical concepts but also hone their analytical skills. You are given a pyramid that includes plants (grass), grasshoppers, squirrels, and eagles. You are asked a series of questions based on this including: How much energy do the grasshoppers get from eating one square meter of grass per year? What percentage of energy consumed by the grasshoppers is passed on to the squirrels?
{"url":"https://15worksheets.com/worksheet/ecological-pyramid-2/","timestamp":"2024-11-05T13:03:57Z","content_type":"text/html","content_length":"109904","record_id":"<urn:uuid:7b5b35d0-c961-4d98-9b68-0ce31cf87b86>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00411.warc.gz"}
The Incredible Physics of Simone Biles’ Yurchenko Double Pike Gymnastics is an extremely difficult sport, but some people make it look easy. Step forward Simone Biles, who hit a Yurchenko double pike—considered the “hardest vault in the world”—during the artistic gymnastics Women’s Team event at the 2024 Summer Olympics. With both the Women’s Vault and Women’s All-Around events still to come, it’s possible we’ll get to see Biles complete the feat again before the Games are out. The physics behind the Yurchenko double pike, and so many other gymnastic vaults, is also pretty challenging. Let’s consider something seemingly simple, like a flip. There will be some version of a flip in all four of the women’s gymnastics events: floor, bars, vault, and beam. It’s one of the two types of rotations a gymnast can make midair. In physics terms, a flip is a head-to-foot rotation about an imaginary axis that runs through the gymnast’s hips. For the second type of rotation, a twist, imagine an axis that runs from their head to their feet. Maybe it’s easier just to see them. These two animations were created in Python. (You can see the code here and here.) Trinket via Rhett Allain Trinket via Rhett Allain A gymnast can actually perform both of these types of rotation at the same time—that’s what makes the sport so interesting to watch. In physics, we would call this type of movement a “rigid body rotation.” But, clearly, humans aren’t rigid, so the mathematics to describe rotations like this can be quite complicated. For the sake of brevity, let’s limit our discussion just to flips. There are three kinds of flips. There is a layout, in which the gymnast keeps their body in a straight position. There is a pike, in which they bend at about a 90-degree angle at the hips. Finally, there is a tuck, with the knees pulled up towards the chest. What’s the difference, in terms of physics? Rotations and the Moment of Inertia If you want to understand the physics of a rotation, you need to consider the moment of inertia. I know that’s a strange-sounding term. Let’s start with an example involving boats. (Yes, boats.) Suppose you’re standing on a dock next to a small boat that’s just floating there, and isn’t tied up. If you put your foot onto the boat and push it, what happens? Yes, the boat moves away—but it does something else. The boat also speeds up as it moves away. This change in speed is an acceleration. Now imagine that you move along the dock and pick a much larger boat, like a yacht. If you put your foot on it and push it, using the same force for the same amount of time as you did for the smaller boat, does it move? Yes, it does. However, it doesn’t increase in speed as much as the smaller boat because it has a larger mass. The key property in this example is the boat’s mass. With more mass, it’s more difficult to change an object’s motion. Sometimes we call this property of objects the inertia (which is not to be confused with the moment of inertia—we will get to that soon). When you push on the boat, we can describe this force-motion interaction with a form of Newton’s Second Law. It looks like this: Illustration: Rhett Allain (Note: I am using the one-dimensional scalar version of this equation just to make it simpler. In fact, forces and momentum are actually vectors.) This says that a net force (F) changes the momentum (p) of an object, and momentum is defined as the product of mass (m) and velocity (v). Since both the large and the small boat have the same force, they have the same change in momentum. But with the larger mass of the yacht, you get a smaller increase in velocity. Now let’s consider rotational motion. We have a very similar expression for rotations, which looks like this. (Again, these are scalar versions of the real vector equations.) Illustration: Rhett Allain This time, there is a bunch of new stuff here—so let’s go over it. First, there is the torque (the Greek letter τ). The torque on an object is the result of a force applied at a particular point on it. You can think of this as a type of rotational force. Just like a force changes the momentum of an object, a torque changes the angular momentum (L). Angular momentum is the product of the angular velocity and the moment of inertia. But what the heck is the moment of inertia? By looking at torque and angular momentum, you can sort of see what the moment of inertia does. Just like the mass of an object is its resistance to changes in linear motion, the moment of inertia is an object’s resistance to changes in angular motion. You could call it the “angular mass” if you wanted to—and I would be cool with that. We can also put it this way: If you apply the same torque to two objects with different moments of inertia, the one with the lower moment of inertia will have a greater increase in angular velocity. But what does the moment of inertia depend on? It’s not just how much mass an object has, but also where that mass is located. The moment of inertia is a measurement of both the amount of mass and how far it is from the axis of rotation. This has a very interesting consequence: You can actually change the moment of inertia without changing the mass. Here’s an at-home experiment you can try. Sit on a spinning stool or office chair and push with your foot to get yourself rotating. (Your foot exerts a torque that changes your angular momentum.) Try spinning with your arms held out from your body, and then another time with your arms tucked in. With your arms spread out, you’ll have a larger moment of inertia and your rotation rate will be lower than with your arms tucked in. Notice that you aren’t changing your mass, just the way it is distributed. Try it again with your arms tucked in, but this time get yourself spinning and pull your feet off the ground. In this case, you are rotating with zero torque (after the initial push). Now try starting with your arms outstretched and then pull them in mid-spin. This is what it will look like: Video: Rhett Allain You can clearly see that, with your arms pulled in, the rotation rate increases. That’s all because of a change in the moment of inertia. Changing Moment of Inertia to Change Angular Velocity Now for some fun. Let’s look at the angular velocity of Simone Biles as she changes her body position. In particular, I’m going to analyze her Yurchenko double pike vault. She also does a double layout (with a twist) and a double tuck (with a triple twist—her famous triple-double move) in her floor routine. However, both of these tumbling passes have a twisting motion that makes them more difficult to analyze. In the Yurchenko double pike, she starts off running toward the vault table. Before the actual vault, she completes two roundoffs—one onto the springboard and then one from the springboard to the vault table. (This is the Yurchenko part.) In this initial motion, she rotates in a mostly straight position. Once she leaves the vault table, she bends at the waist into a pike position. This change in position changes her mass distribution and therefore changes her moment of inertia. For the double pike portion of the vault, the only force acting on her is the downward-pulling gravitational force. Since this force acts at the center of an object’s mass, it doesn’t exert any torque on her. That means that her angular momentum must remain constant. But by changing her moment of inertia, her angular velocity will also change. So, the product of the moment of inertia and angular velocity before hitting the vault should be equal to the product after leaving the vault table. Illustration: Rhett Allain If I can get a measurement of her angular velocity during the Yurchenko and the pike parts of the motion, I can use that to see how her body position changes her angular velocity. That’s where I turn to my favorite video analysis tool—Tracker Video Analysis. By looking at the position of Biles’ head relative to her waist in each frame of a video, I can get her angular position. If I do this for multiple frames, I can also get a measurement of time. Since the video plays at 30 frames per second, each frame is 1/30th of a second. Then I can get the angular velocity from the slope of the angle vs. time plot. Actually, finding the angular position is easier than measuring Biles’ actual position and velocity. In order to do that, I would need to somehow measure her actual position in each frame of the video. Normally, you would do this by using the size of a known object. But in videos like this, the camera both pans and zooms, making the whole thing complicated. Finding the angular position ignores all these problems. OK, now for the graph. Here is the angular position of Biles both during her Yurchenko and double pike. Illustration: Rhett Allain From this, it looks like there are three different phases with three different angular velocities. For the roundoff, the slope of the angle-time plot is 12.0 radians per second. That’s cool. But then, when she makes the transition from the ground to the vault table (while still rotating) she has a lower angular velocity of 6.72 rad/s. I’m not really sure why she slows down here. Perhaps when she hits the springboard, there is a torque which decreases her angular momentum (and thus her angular velocity). Finally, in the air (during the pike) she has an increased angular velocity with a value of 15.49 rad/sec. I’ll assume that Simone’s angular velocity in the pre-pike position is 6.72 rad/s and then in the pike it’s 15.49. From this, we can calculate the ratio of moments of inertia for the straight and pike positions. Illustration: Rhett Allain Of course, this only gives us the ratio of moments of inertia (from straight to pike positions). However, you can see that the pike position has a lower moment of inertia than in the straight position. Why does this matter? It’s a big deal, because when a gymnast is in the air, she wants to have an angular velocity high enough that, by the time she returns to the ground, she has rotated so her feet are pointed toward it. Without that, you can’t land on your feet. If you want to complete a double flip, the tuck position is going to have the lowest moment of inertia and give you the greatest rotational speed. This means that you don’t have to jump as high and be off the ground as long as for a complete rotation. The next hardest flip is the pike, since it has a higher moment of inertia. That leaves the layout with the largest moment of inertia and the lowest angular velocity. Although the layout is the hardest, in the near future, I think someone in the women’s competition is going to land at least a triple pike. It’s already happened in men’s gymnastics with Russian gymnast Nikita Nagornyy, so I really wouldn’t be surprised to see this show up in the women’s competition soon. This story was originally published on July 26, 2021 and has been updated to reflect Simone Biles’ performance at the 2024 Olympic Games.
{"url":"https://eletiofe.com/the-incredible-physics-of-simone-biles-yurchenko-double-pike/","timestamp":"2024-11-12T13:17:14Z","content_type":"text/html","content_length":"358466","record_id":"<urn:uuid:ce37df31-e57c-49c3-af87-aad567add528>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00584.warc.gz"}
Stars at Yerkes Observatory - Analyze the Information With your new perspectives in hand of each of the moons' orbits, let's see what we can find out about their orbits. Get your protractor and ruler handy. For each of the drawings you made do these steps: 1. Find the first and last position of the moon you are working with. Write the time each postion was imaged next to its location. 2. Subtract the time of the first moon position from the time of the last moon position. Record this time. 3. Draw radii from the center of jupiter to the first moon position and the last moon position. 4. You should now have an angle that you can measure. Measure this angle and record it on your paper. 5. Repeat these steps for each of the moons 6. Use a ruler to measure the radius of Io's orbit in centimeters. Also measure the diameter of the planet Jupiter (the circle in the center) in centimeters. 1. What percent of the whole orbit (circle) is the angle you drew? 2. How long will it take for this moon to complete one orbit? (What is the orbital period of your moon?) 3. What other information would you need to figure out how fast your moon is moving? 4. You have measured the orbital radius of Io and the diameter of Jupiter on your plate model, both in centimeters. The actual diameter of Jupiter is 142,984 km. Use a simple proportion with these numbers to calculate the actual radius of Io's orbit in kilometers. 5. You now have determined Io's period and orbital radius. Locate a source for the published values for these quantities to see how close you came.
{"url":"https://www.starsatyerkes.net/teacher-resources/lesson-plans/edo-the-jovian-system/analyze-the-information","timestamp":"2024-11-05T15:43:03Z","content_type":"text/html","content_length":"191800","record_id":"<urn:uuid:581021c1-3660-43cc-b228-cda7d4c09fcb>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00897.warc.gz"}
Just like different frequency bands and time slots can be used to multiplex users, spatial domain can also be exploited to achieve the same result. It is well known that if there are 4 transmit antennas and 4 receive antennas then four simultaneous data streams can be transmitted over the air. This can be scaled up to 8 x 8 or in the extreme case to 128 x 128. When the number of transmit or receive antennas is greater than 100 we typically call it a Massive MIMO scenario and we need specialized signal processing techniques to handle this case. Computationally complex techniques like Maximum Likelihood (ML) become quite difficult to implement in real-time and we have to resort to some simplified linear array processing techniques. This is the topic of this blog post. Description of the Scenario To understand the scenario that we will discuss here, please look at our previous post. Also note that when we talk about nT x nR case we do not necessarily mean nT transmit antennas and nR receive antennas, it could also mean nT users with 1 antenna each and co-located nR receive antennas, such as at a base station. We typically assume that the number of receive antennas is greater than the number of users. When the number of users is greater than the number of receive antennas we call it overloaded case and this is not discussed here. Here the number of users is fixed at 16 (randomly distributed between 0 and 360 degrees within the cell) and the number of receive antennas is varied from 20 to 100. Plane Wave Impinging Upon a ULA from a Particular User/Direction Beamforming Using a Uniform Linear Array Linear Signal Processing Techniques for Massive MIMO The four signal processing techniques that are applied at the receive array are: 1. Matched Filtering (MF) 2. Moore Penrose Pseudo-Inverse without controlling the threshold (PINV) 3. Moore Penrose Pseudo-Inverse with a specified threshold (PINV with tol) 4. Minimum Mean Squared Error (MMSE) Simulation Results There are some other techniques that we experimented with but are omitted here for the sake of brevity. The MATLAB code and simulation results showing bit error rate as a function of receive array size (nR) are given below. It is seen that simple Matched Filter works quite well when the receive array size is small but with increasing nR the performance improvement is not that great. Least Squares (LS) technique using Moore-Penrose Pseudo Inverse shows improved performance with increasing nR and this can be further improved by controlling the threshold (tol). We found that a threshold of 0.1 gave significantly improved results as compared to no threshold case. Lastly we implemented MMSE and found that it gave us the best results. It must be noted that we also implemented ML for a limited size of receive array and found that its BER performance was far superior than any other technique. BER of a MU-MIMO Receive Array at the Base Station (LOS) MATLAB Code for Massive MIMO Scenario % COPYRIGHT RAYMAPS (C) 2018 clear all close all f=1e9; %Carrier frequency c=3e8; %Speed of light l=c/f; %Wavelength d=l/2; %Rx array spacing N=20; %Receive array size M=16; %Transmit array size (users) theta=2*pi*(rand(1,M)); %Angular separation of users EbNo=10; %Energy per bit to noise PSD sigma=1/sqrt(2*EbNo); %Standard deviation of noise n=1:N; %Rx array number n=transpose(n); %Row vector to column vector % RECEIVE SIGNAL MODEL (LINEAR) s=2*(round(rand(M,1))-0.5); %BPSK signal of length M H=exp(-i*(n-1)*2*pi*d*cos(theta)/l); %Channel matrix of size NxM wn=sigma*(randn(N,1)+i*randn(N,1)); %AWGN noise of length N x=H*s+wn; %Receive vector of length N % 1-MATCHED FILTER % 2-PINV without tol % y=pinv(H)*x; % 3-PINV with tol % y=pinv(H,0.1)*x; % 4-Minimum Mean Square Error (MMMSE) % y=(H'*H+(2*sigma^2)*eye([M,M]))^(-1)*(H')*x; s_est=sign(real(y)); %Demodulation ber=sum(s!=s_est)/length(s); %BER calculation %Note: Please select the array processing technique %you want to implement (1-MF, 2-LS1, 3-LS2, 4-MMSE) 1. In the code above, N=nR and M=nT 2. What is labelled here as Matched Filter (MF) is strictly speaking Maximal Ratio Combining (MRC) 3. The case we discuss above is categorized as Multiuser MIMO (MU-MIMO) for the uplink 4. MU-MIMO for the downlink is not that straight forward and will be the subject of some future post 5. We have considered a deterministic channel model as opposed to a probabilistic channel model 6. Probabilistic channel model can be easily implemented by assuming that channel coefficients are independent and identically distributed (IID) complex Gaussian random variables with mean zero and variance of 0.5 per dimension 7. The initial results we have obtained using this probabilistic channel model are much better than the results shown above, but the question remains which is the more accurate representation of a real channel Update: Simulation Using a Probabilistic Channel Since most of the literature in Massive MIMO uses a probabilistic channel instead of a deterministic channel, we decided to investigate this further. To implement such a channel model we simply need to change one line of the MATLAB code shown above. Instead of defining H as: We define H as: The results are shown below. It is seen that the BER performance is orders of magnitude better. We would next investigate the performance degradation if the channel coefficients are not independent and identically distributed (IID) but have some correlation. This is closely tied to inter-element separation of the antenna array. BER of a MU-MIMO Receive Array at the Base Station (NLOS) Concluding Remarks The fundamental question that needs to be asked is why the performance in the NLOS scenario (probabilistic) is better than LOS scenario (deterministic). This has to do with the Signal to Noise Ratio [1]. In the above we have assumed the Signal to Noise Ratio (SNR) for the two scenarios to be the same. But realistically speaking this is never the case. Although the NLOS case assumes a rich scattering environment providing a high multiplexing gain (dependent on the rank of the channel matrix H) its SNR would always be lower due to reflection, diffraction and scattering loss. So a fair comparison between the LOS and NLOS case is only possible if we downward adjust the SNR for the NLOS case. Simulation results have shown that the SNR for the NLOS case needs to be downgraded by about 25 dB to have similar BER performance as the LOS case. Lastly it must be noted that the BER performance of the NLOS case would deteriorate once the channel coefficients are not IID and there is some correlation between them. [1] Zimu Cheng, Binghao Chen, and Zhangdui Zhong “A Tradeoff between Rich Multipath and High Receive Power in MIMO Capacity”, International Journal of Antennas and Propagation, Volume 2013. Fundamentals of Linear Array Processing – Receive Beamforming In the previous two posts we discussed the fundamentals of array processing particularly the concept of beamforming (please check out array processing Part-1 and Part-2). Now we build upon these concepts to introduce some linear estimation techniques that are used in array processing. These are particularly suited to a situation where multiple users are spatially distributed in a cell and they need to be separated based upon their angles of arrival. But first let us introduce the linear model; I am sure you have seen this before. Here, s is the vector of symbols transmitted by M users, H is the N x M channel matrix, w is the noise vector of length N and x is the observation vector of length N. The channel matrix formed by the channel coefficients is deterministic (as opposed to probabilistic) in nature as it is purely dependent upon the phase shifts that the channel introduces due to varying path lengths between the transmit and receive antennas. The impact of a channel coefficient can be thought of as a rotation of the complex signal without altering its amplitude. This means that the channel acts like a single tap filter and the process of convolution is reduced to simple multiplication (a reasonable assumption if the symbol length is much larger than the channel delay spread). The channel model does not accommodate for path loss and fading that are also inherent characteristics of the channel. But the techniques are general enough for these effects to be factored in later. Furthermore, it is assumed that the channel H is known at the receiver. This is a realistic assumption if the channel is slowly varying and can be estimated by sending pilot Beamforming Using a Uniform Linear Array So going back to the linear model we see that we know x and H while s and w are unknown. Here w cannot be estimated since it’s random in nature (remember what the term AWGN stands for?) and we ignore it for the moment. The structure of s is known. For example if we are using BPSK modulation then the m symbols of the signal vector s can either be +1 or -1. So we can start the process of symbol detection by substituting all possible combinations of s[1], s[2]…s[m] and determine the combination that minimizes This is called the Maximum Likelihood (ML) solution as it determines the combination that was most likely to have been transmitted based upon the observation. Although ML is conceptually very appealing and yields good results it becomes prohibitively complex as the constellation size or number of transmit antennas increases. For example for 2-Transmit case and BPSK modulation there are 2^(1 bit x 2 antennas)=2^2=4 combinations, which seems quite simplistic. But if 16-QAM modulation is used and there are 4-Transmit antennas the number of combinations increases to 2^(4 bits x 4 antennas)=2^16=65536. So we conclude that ML is not the solution we are looking for if computational complexity is an issue (which might become less of an issue as the processing power of devices increases). Next we turn our attention to a technique popularly known as Zero Forcing or ZF (the origins of the name I still do not know). According to this technique the channel has a multiplicative effect on the signal. So to remove this effect we simply divide the signal by the channel or in the language of matrices we perform matrix inversion. Mathematically we have: So we see that we get back the signal s but we also get a noise component enhanced by inverse of the channel matrix. This is the well-known problem of ZF called Noise Enhancement. Then there are other problems such as non-existence of the inverse when the channel H is not a square matrix (which only happens when the number of transmit and receive antennas is the same). The inverse of H also cannot be calculated if H is not full rank or determinant of H is zero. So we now introduce another technique called Least Squares (LS). According to this the signal vector can be estimated as This is also sometimes referred to as the Minimum Variance Unbiased Estimator, as described by Steven M. Kay in his classical book on Estimation Theory [Fundamentals of Statistical Signal Processing Vol-1]. This can be easily implemented in MATLAB using Moore Penrose Pseudo Inverse or pinv(H). This is much more stable than going for the direct inversion methods. We next plot the Bit Error Rate (BER) using the code below. The number of receive antennas is varied from two to ten while the number of transmit antennas is fixed at four. The transmit antennas are assumed to be positioned at 30, 40, 50 and 60 degrees from the axis of the receive array. The receive antennas are separated by λ/2 meters. The frequency of operation is 1GHz but it is quite irrelevant to the scenario considered as everything is measured in multiples of wavelengths. The Eb/No ratio (roughly the signal to noise ratio) is varied from 5dB to 20dB in steps of 5dB. Bit Error Rate for Changing Rx Array Length As expected the BER for the two methods, other than ML, is more or less the same and decreases rapidly once the number of receive antennas becomes greater than number of transmit antennas (or number of signals). The case where the number of receive antennas is less than number of signals (equal powered and with a small angular separation) is dealt with by Overloaded Array Processing (OLAP) techniques and have been discussed in detail by James Hicks [Doctoral Dissertation] a student of Dr. Reed at Virginia Tech. Strangely enough it is seen that the overloaded case is not the worst part of the BER curve. The worst BER is observed when the number the number of transmit and receive antennas is the same (four in this case). In other words the BER gradually increases as the rank of the channel matrix increases and then decreases once it reaches its maximum value. This is quite interesting and obviously has to do with Noise Enhancement that we discussed earlier. This will be further investigated in future posts. For further information on the above methods visit this interesting article. So we struggled for a while to find out why the BER is worst at full rank and thought that there is something wrong in our model but ultimately we found that this has to do with how the pseudoinverse works and the way the tolerance limit (tol in MATLAB) for the singular values is set. We have found quite interesting results while experimenting with various inversion methods and the results are pending publication. Will keep you updated about the progress. We experimented with the MATLAB function pinv by changing the tolerance parameter. Previously we had used the default tolerance that is built into the function pinv. The default tolerance (tol) is defined as: tol = max (size (H)) * sigma_max (H) * eps where sigma_max (H) is the maximal singular value of channel matrix H and eps is the machine precision. More precisely, eps is the relative spacing between any two adjacent numbers in the machine’s floating point system. This number is obviously system dependent. On machines that support IEEE floating point arithmetic, eps is approximately 2.2204e-16 for double precision and 1.1921e-07 for single precision. So back to the subject we experimented with two values of tol; 1.0 and 0.1 while changing the signal to noise ratio. The number of transmit antennas (users) is fixed at 4 while number of receive antennas is varied from 2 to 8. For tol value of 1.0 it is seen that changing the value of EbNo does not change the results much up to 6 receive antennas but after that the BER results rapidly diverge. For tol value of 0.1 the results are quite unexpected. The BER drops with increasing number of antennas up to N=5 but then there is an unexpected increase in the BER for N=6. This needs to be further investigated. BER for tolerance of 1.0 BER for tolerance of 0.1 % COPYRIGHT RAYMAPS (C) 2018 clear all close all f=1e9; %Carrier frequency c=3e8; %Speed of light l=c/f; %Wavelength d=l/2; %Rx array spacing N=10; %Receive array length theta=([30 40 50 60])*pi/180; %Angular placement of Tx array (users) EbNo=10; %Energy per bit to noise PSD sigma=1/sqrt(2*EbNo); %Standard deviation of noise n=1:N; %Rx array vector n=transpose(n); %Converting row to column M=length(theta); %Tx array length % RECEIVE SIGNAL MODEL (LINEAR) s=2*(round(rand(M,1))-0.5); %BPSK signal of length M H=exp(-i*(n-1)*2*pi*d*cos(theta)/l); %Channel matrix of size NxM wn=sigma*(randn(N,1)+i*randn(N,1)); %AWGN noise of length N x=H*s+wn; %Receive vector of length N % PINV without tol % y=pinv(H)*x; % PINV with tol y=pinv(H, 0.1)*x; s_est=sign(real(y)); %Demodulation ber=sum(s!=s_est)/length(s); %BER calculation Basics of Beamforming in Wireless Communications In the previous post we had discussed the fundamentals of a Uniform Linear Array (ULA). We had seen that as the number of array elements increases the Gain or Directivity of the array increases. We also discussed the Half Power Beam Width (HPBW) that can be approximated as 0.89×2/N radians. This is quite an accurate estimate provided that the number of array elements ‘N’ is sufficiently large. Plane Wave Impinging Upon a ULA But the max Gain is always in a direction perpendicular to the array. What if we want the array to have a high Gain in another direction such as 45 degrees. How can we achieve this? This has application in Radars where you want to search for a target by scanning over 360 degrees or in mobile communications where you want to send a signal to a particular user without causing interference to other users. One simple way is to physically rotate the antenna but that is not always a feasible solution. Going back to the basics remember that the Electric field pattern depends upon the constructive and destructive interference of incoming waves. If we have a vector (usually called the steering vector) that aligns the rays coming in from a particular direction we would get a high Gain in that direction. Similarly we can steer a null in a particular direction if we want to reject a particular signal. This we will discuss in a future post. % COPYRIGHT RAYMAPS (C) 2018 clear all close all w=exp( i*(n-1)*2*pi*d*cos(phi)/l); title ('Gain of a Uniform Linear Array') The figure below shows the Electric field pattern of a 10 element array steered towards 0, 30, 60 and 90 degrees respectively. We see that selectivity of the array is higher on the Broadside than on the Endfire. In my opinion this has to do with how the cosine function behaves from 0 to 90 degrees. The rate of change of cosine function is much faster around 90 degrees than at 0 degrees or 180 degrees. The slowly changing cosine in the latter case causes a wide response on the Endfire. We did calculate the HPBW for a range of steering angles and found that it varied widely from as small as 10.17 degrees to as large as 48.62 degrees. This shows that simple Beamforming using a steering vector has its limitations. The detailed results along with a graph are shown below. It is seen that as the steering angle increases from about 20 degrees there is a sudden decrease in HPBW. For one degree increase of steering angle (phi) from 24 to 25 degrees there is decrease of approx 9 degrees in HPBW. We will investigate this further in future posts. Case 1: phi = 0, HPBW = 48.62 deg Case 2: phi = 30, HPBW = 21.69 deg Case 3: phi = 60, HPBW = 11.75 deg Case 4: phi = 90, HPBW = 10.17 deg Half Power Beamwidth of a ULA as a function of Steering Angle (Phi) For further visualization of the variation in antenna pattern as a function of the steering angle please have a look at this Interactive Graph. The parameters that can be varied include the angle of the beam, number of antenna elements and separation of the antenna elements. This is taken from an excellent online resource by the name of Geogebra. For further information on how you can use this tool for your own mathematical problems please do visit their website. Open Signal Coverage Maps for Pakistan Open Signal is a mobile application that collects the data about your wireless network (2G/3G/4G) and generates coverage maps and host of other reports. The data is collected in the background while the user is busy in his daily routines. But data can also be collected on the request of the user. This is much better than drive testing since the data is collected in real life scenarios and on thousands of different devices that are in use. The app works while the user is indoor or outdoor, at rest or in motion, on land or on water, at sea level or on a mountain, in dry weather or in rain. Basically anywhere and anytime there are wireless signals available. There are currently 20 million users of the app (both Android and iOS combined) and this number is increasing. In Pakistan all major networks are supported including Jazz, Telenor, Zong and Ufone (both 2G/3G and 4G networks are supported). Telenor Islamabad Coverage Map BER for BPSK-OFDM in Frequency Selective Channel OFDM Tx-Rx Block Diagram As the data rates supported by wireless networks continue to rise the bandwidth requirements also continue to increase (although spectral efficiency has also improved). Remember GSM technology which supported 125 channels of 200KHz each, which was further divided among eight users using TDMA. Move on to LTE where the channel bandwidth could be as high as 20MHz (1.4MHz, 3MHz, 5MHz, 10MHz, 15MHz and 20MHz are standardized). This advancement poses a unique challenge referred to as frequency selective fading. This means that different parts of the signal spectrum would see a different channel (different amplitude and different phase offset). Look at this in the time domain where the larger bandwidth means shorter symbol period causing intersymbol interference (as time delayed copies of the signal overlap on arrival at the receiver). The solution to this problem is OFDM that divides the wideband signal into smaller components each having a bandwidth of a few KHz. Each of these components experiences a flat channel. To make the task of equalization simple a cyclic prefix (CP) is added in the time domain to make the effect of fading channel appear as circular convolution. Thus simplifying the frequency domain equalization to a simple division operation. Shown below is the Python code that calculates the bit error rate (BER) of BPSK-OFDM which is the same as simple BPSK in a Rayleigh flat fading channel. However there is a caveat. We have inserted a CP which means we are transmitting more energy than simple BPSK. To be exact we are transmitting 1.25 (160/128) times more energy. This means that if this excess energy is accounted for the performance of BPSK-OFDM would be 1dB (10*log10(1.25)) worse than simple BPSK in Rayleigh flat fading channel. 1. Although we have shown the channel as a multiplicative effect in the figure above, this is only true for a single tap channel. For a multi-tap channel (such as the one used in the code above) the effect of the channel is that of a filter which performs convolution operation on the transmitted signal. 2. We have used a baseband model in our simulation and the accompanying figure. In reality the transmitted signal is upconverted before transmission by the antennas. 3. The above model can be easily modified for any modulation scheme such as QPSK or 16-QAM. The main difference would be that the signal would have a both a real part and an imaginary part, much of the simulation would remain the same. This would be the subject of a future post. For a MATLAB implementation of 64-QAM OFDM see the following post (64-QAM OFDM). 4. Serial to parallel and parallel to serial conversion shown in the above figure was not required as the simulation was done symbol by symbol (one OFDM symbol in the time domain represented 128 BPSK symbols in the frequency domain). 5. The channel model in the above simulation is quasi-static i.e. it remains constant for one OFDM symbol but then rapidly changes for the next, without any memory. Rayleigh Fading Envelope Generation – Python When wireless signals travel from a transmitter to a receiver they do so after reflection, refraction, diffraction and scattering from the environment. Very rarely is there a direct line of sight (LOS) between the transmitter and receiver. Thus multiple time delayed copies of the signal reach the receiver that combine constructively and destructively. In a sense the channel acts as an FIR (finite impulse response) filter. Furthermore since the transmitter or receiver may be in motion the amplitude and phase of these replicas varies with time. There are several methods to model the amplitude and phase of each of these components. We look at one method called the “Smiths Fading Simulator” which is based on Clark and Gans model. The simulator can be constructed using the following steps. 1. Define N the number of Gaussian RVs to be generated, fm the Doppler frequency in Hz, fs the sampling frequency in Hz, df the frequency spacing which is calculated as df=(2*fm)/(N-1) and M total number of samples in frequency domain which is calculated as M=(fs/df). 2. Generate two sequences of N/2 complex Gaussian random variables. These correspond to the frequency bins up to fm. Take the complex conjugate of these sequences to generate the N/2 complex Gaussian random variables for the negative frequency bins up to -fm. 3. Multiply the above complex Gaussian sequences g1 and g2 with square root of the Doppler Spectrum S generated from -fm to fm. Calculate the spectrum at -fm and +fm by using linear extrapolation. 4. Extend the above generated spectra from -fs/2 to +fs/2 by stuffing zeros from -fs/2 to -fm and fm to fs/2. Take the IFFT of the resulting spectra X and Y resulting in time domain signals x and y. 5. Add the absolute values of the resulting signals x and y in quadrature. Take the absolute value of this complex signal. This is the desired Rayleigh distributed envelope with the required temporal The Matlab code for generating Rayleigh random sequence with a Doppler frequency of fm Hz is given below.
{"url":"https://www.raymaps.com/index.php/category/stand/lte/","timestamp":"2024-11-11T18:11:40Z","content_type":"text/html","content_length":"106438","record_id":"<urn:uuid:57e19945-5b10-46ae-8950-49e869c9dd74>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00651.warc.gz"}
Exploring parallel and perpendicular lines Explore changing the values of m and c in the equations y=mx + c for each of the lines. What stays the same and what changes? Can you make your lines parallel to each other? What do you notice about the gradients? Can you make your lines perpendicular to each other? What do you notice about the gradients? Set the red line to be y=x, make the blue line perpendicular Set the red line to be y=2x, make the blue line perpendicular If the red line was y=3x what would the gradient be of a perpendicular line? (not you cannot do this here as the sliders only increase in 0.1s)
{"url":"https://beta.geogebra.org/m/XEEQXXKw","timestamp":"2024-11-14T04:01:34Z","content_type":"text/html","content_length":"90953","record_id":"<urn:uuid:4fd2b226-35ba-47d7-a156-b9615b329819>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00664.warc.gz"}
numpy.random.exponential(scale=1.0, size=None)¶ Draw samples from an exponential distribution. Its probability density function is for x > 0 and 0 elsewhere. [3]. The exponential distribution is a continuous analogue of the geometric distribution. It describes many common situations, such as the size of raindrops measured over many rainstorms [1], or the time between page requests to Wikipedia [2]. scale : float or array_like of floats The scale parameter, Parameters: size : int or tuple of ints, optional Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. If size is None (default), a single value is returned if scale is a scalar. Otherwise, np.array(scale).size samples are drawn. out : ndarray or scalar Drawn samples from the parameterized exponential distribution. [1] (1, 2) Peyton Z. Peebles Jr., “Probability, Random Variables and Random Signal Principles”, 4th ed, 2001, p. 57. [2] (1, 2) Wikipedia, “Poisson process”, http://en.wikipedia.org/wiki/Poisson_process [3] (1, 2) Wikipedia, “Exponential distribution”, http://en.wikipedia.org/wiki/Exponential_distribution
{"url":"https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.exponential.html","timestamp":"2024-11-05T04:16:44Z","content_type":"text/html","content_length":"10357","record_id":"<urn:uuid:ad2c4252-9380-40bb-a42c-e207b5d4d4cc>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00068.warc.gz"}
Course Turkish Isı ve Kütle Aktarımı Name English Heat and Mass Transfer Course KMM Credit Lecture Recitation Laboratory Code 305E (hour/week) (hour/week) (hour/week) Semester 5 4 4 - - Course Language English Course Şerife Birgül Ersolmaz Course 1. To introduce the terminology and the basic principles of heat and mass transfer. Objectives 2. To train students to identify and describe appropriate transport phenomena (fluid mechanics, heat and mass transfer) for any process or system involving heat and mass transfer (in engineering and everyday life). 3. To show students how to use required inputs for computing heat and mass transfer rates and/or material (solid, liquid, gas) temperatures and/or mixture concentrations. 4. To show students how to develop representative models of real processes and systems and draw conclusions concerning process/systems design or performance from the related 5. To provide experience to work in teams. Course Basic concepts of heat transfer. Heat conduction equation. Steady heat conduction: Resistance approach, fins. Transient heat conduction. Forced convection. Natural convection. Heat Description exchangers. Basic concepts of mass transfer. Analogy between heat and mass transfer. Diffusion in gases, liquids, and solids. Steady mass transfer. Unsteady-state mass transfer. Mass convection and mass transfer coefficients. Mass transfer with reaction. Analogies for friction, heat transfer and mass transfer coefficients. Course Outcomes Upon completion of this course, students should be able to: 1. Define, describe, and apply the basic concepts (terminology, modes and equations) of (a) conduction, convection and radiation heat transfer and (b) diffusion and convection mass 2. (a) Apply laws of conservation of energy to thermal systems and heat transfer problems involving conduction, radiation, and/or convection heat transfer. (b) Similarly, apply laws of mass conservation to mass transfer problems involving diffusion and/or convection mass transfer. 3. Formulate and solve steady-state one-dimensional (a) conduction heat transfer (including fins) and (b) mass transfer (reaction with diffusion) problems to calculate the temperature and concentration distributions and heat and mass transfer rates, and evaluate the significance of results. 4. Formulate and solve transient one-dimensional heat conduction and mass transfer problems in different geometries using lumped system approach or one-term approximation of separation of variables solution; and in large mediums using similarity variable. 5. Demonstrate an understanding of the fundamentals of the relationship between fluid flow and convection heat and mass transfer and apply empirical correlations for (a) forced and free (natural) convection heat and (b) mass transfer to determine values for the convection heat and mass transfer coefficients and calculate heat and mass transfer rates. 6. Determine engineering design quantities (power requirements, insulation thickness, cost, etc.) required for design of thermal engineering devices and systems and apply engineering judgment. 7. Analyze and design heat exchangers to calculate heat transfer area and the outlet temperatures of the hot and cold streams. 8. Apply analogies between momentum, heat, and mass transfer to calculate relevant transfer coefficients. 9. Work as a member of a team to solve heat and transfer problems in chemical engineering. Pre-requisite(s) KMM211, KMM220 veya KMM224 (min. DD) Textbook Fundamental of Heat and Mass Transfer (8th edition), by F. P. Incropera and D. P. Dewitt, T. L. Bergman, A. S. Lavine, John Wiley & Sons, NY, 2017. Other References 1. Yunus A. Çengel and Afshin J. Ghajar, Heat and Mass Transfer: Fundamentals and Applications, Fifth Edition, McGraw Hill, NY, 2014. 2. Transport Phenomena by B. R. Bird, W. E. Stewart and E. N. Lightfoot, 2nd Ed., John Wiley, NY, 2002.
{"url":"https://ninova.itu.edu.tr/en/courses/faculty-of-chemical-and-metallurgical-engineering/12430/kmm-305e/form","timestamp":"2024-11-04T07:16:59Z","content_type":"application/xhtml+xml","content_length":"13490","record_id":"<urn:uuid:ea87281e-9abf-4ef9-a6dc-2d6b447661b6>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00051.warc.gz"}
Test Statistic A Test Statistic is a statistic function, [math]\displaystyle{ t= g(s) }[/math], that ... can be used to determine the veracity of the statistical hypotheses. where [math]\displaystyle{ \hat{\theta}(X) }[/math] is a (point estimate derived from sample data of the random variable [math]\displaystyle{ X }[/math], [math]\displaystyle{ \theta_0 }[/ math] is a population parameter value stated under the null hypothesis (i.e. [math]\displaystyle{ H_0:\; \theta=\theta_0 }[/math]) and [math]\displaystyle{ \sigma(X,\theta) }[/math] standard deviation which depends on both sampling distribution and population distribution. ☆ It is generally defined as sum of observed differences or ranks, [math]\displaystyle{ t= \sum f(R_i) }[/math] • Example(s) □ Parametric Statistical Tests: ☆ [math]\displaystyle{ t=\frac{\overline{x}-\mu_0}{s/\sqrt{n}} }[/math], One-sample t-Statistic obtained from the sample mean value ([math]\displaystyle{ \overline{x} }[/math]), the population mean value stated by the null hypothesis ([math]\displaystyle{ \mu_0 }[/math]), sample standard deviation ([math]\displaystyle{ s }[/math]) and sample size ([math]\displaystyle { n }[/math]). ☆ [math]\displaystyle{ t = \frac{\bar{d} - D}{s_d/\sqrt{n}} }[/math], Matched-Pair t-Statistic obtained from the mean difference between matched pairs in the sample ([math]\displaystyle{ \ bar{d} }[/math]), hypothesized difference between population means (D) and the standard deviation of the differences for each matched pair ([math]\displaystyle{ s_d }[/math]). ☆ [math]\displaystyle{ t = \frac{(\overline{x}_1 - \overline{x}_2) - d_0}{s_p \sqrt{1/n_1+1/n_2}} }[/math], Independent Two-Sample t-Statistic obtained from the sample means of sample drawn from population 1 and 2 ([math]\displaystyle{ \overline{x_1}, \; \overline{x_2} }[/math]) with sample size [math]\displaystyle{ n_1 }[/math] and [math]\displaystyle{ n_2 }[/math], as well as the hypothesized difference between population means ([math]\displaystyle{ d_0 }[/math]), ans pooled standard deviation ([math]\displaystyle{ s_p }[/math]). □ Non-Parametric Statistical Tests: ☆ [math]\displaystyle{ W =\sum^n_{i=1} R^{(+)}_i }[/math], Wilcoxon Signed-Rank Test Statistic obtained as the sum of the positive ranks ([math]\displaystyle{ R^{(+)}_i }[/math]). ☆ [math]\displaystyle{ \chi^2=\sum^n_{i=1}\frac{(O_i−E_i)^2}{E_i} }[/math], Chi-Square Statistic obtained from the observed frequency count for the ith level of the categorical variable ( [math]\displaystyle{ O_i }[/math]), and the expected frequency count for the ith level of the categorical variable ([math]\displaystyle{ E_i }[/math]). ☆ [math]\displaystyle{ n_1 n_2 + \frac{n_2(n_2+1)}{2} - \sum^{n_2}_{i=n_1+1}R_i }[/math], Mann-Whitney U Statistic obtained from sample size one ([math]\displaystyle{ n_1 }[/math]), sample size two ([math]\displaystyle{ n_2 }[/math]) and sum of the ranks ([math]\displaystyle{ R_i }[/math]). □ … • Counter-Example(s) • (Wikipedia, 2016) ⇒ http://en.wikipedia.org/wiki/test_statistic Retrieved 2016-09-11 □ QUOTE: A test statistic is a statistic (a quantity derived from the sample) used in statistical hypothesis testing. A hypothesis test is typically specified in terms of a test statistic, considered as a numerical summary of a data-set that reduces the data to one value that can be used to perform the hypothesis test. In general, a test statistic is selected or defined in such a way as to quantify, within observed data, behaviours that would distinguish the null from the alternative hypothesis, where such an alternative is prescribed, or that would characterize the null hypothesis if there is no explicitly stated alternative hypothesis (...) For example, suppose the task is to test whether a coin is fair (i.e. has equal probabilities of producing a head or a tail). If the coin is flipped 100 times and the results are recorded, the raw data can be represented as a sequence of 100 heads and tails. If there is interest in the marginal probability of obtaining a head, only the number T out of the 100 flips that produced a head needs to be recorded. But T can also be used as a test statistic in one of two ways: ○ the exact sampling distribution of T under the null hypothesis is the binomial distribution with parameters 0.5 and 100. ○ the value of T can be compared with its expected value under the null hypothesis of 50, and since the sample size is large a normal distribution can be used as an approximation to the sampling distribution either for T or for the revised test statistic T−50. Using one of these sampling distributions, it is possible to compute either a one-tailed or two-tailed p-value for the null hypothesis that the coin is fair. Note that the test statistic in this case reduces a set of 100 numbers to a single numerical summary that can be used for testing. Suppose the test statistic in a hypothesis test is equal to S. If the probability of observing a test statistic as extreme as S is less than the significance level, we reject the null • (Rosenthal, 1978) ⇒ Robert Rosenthal. (1978). “Combining results of independent studies." Psychological bulletin 85, no. 1 (1978): 185. □ QUOTE: ... Not simply in connection with combining ps but at any time that test statistics such as t, F, or Z are reported, estimated effect sizes should routinely be reported. The particular effect size d seems to be the most useful one to em- ploy when two groups are being compared. ...
{"url":"https://www.gabormelli.com/RKB/test_statistic","timestamp":"2024-11-11T04:33:46Z","content_type":"text/html","content_length":"57750","record_id":"<urn:uuid:744f22b9-3727-4ac0-b2b3-b5931911ebaf>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00832.warc.gz"}
Minimum Entangling Power is Close to Its Maximum Title Minimum Entangling Power is Close to Its Maximum Publication Type Journal Article Year of Publication 2012 Authors Chen, J, Ji, Z, Kribs, DW, Zeng, B Date Published 2012/10/04 Given a quantum gate $U$ acting on a bipartite quantum system, its maximum (average, minimum) entangling power is the maximum (average, minimum) entanglement generation with respect to certain entanglement measure when the inputs are restricted to be product states. In this paper, we mainly focus on the 'weakest' one, i.e., the minimum entangling power, among all these entangling powers. We show that, by choosing von Neumann entropy of reduced density operator or Schmidt rank as entanglement measure, even the 'weakest' Abstract entangling power is generically very close to its maximal possible entanglement generation. In other words, maximum, average and minimum entangling powers are generically close. We then study minimum entangling power with respect to other Lipschitiz-continuous entanglement measures and generalize our results to multipartite quantum systems. As a straightforward application, a random quantum gate will almost surely be an intrinsically fault-tolerant entangling device that will always transform every low-entangled state to near-maximally entangled state. URL http://arxiv.org/abs/1210.1296v1
{"url":"https://www.quics.umd.edu/publications/minimum-entangling-power-close-its-maximum","timestamp":"2024-11-04T20:03:54Z","content_type":"text/html","content_length":"21334","record_id":"<urn:uuid:8f65ea64-3f1d-438a-8a29-12256111a920>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00202.warc.gz"}
Number of Folds in Paper: Thickness of Earth to Sun | FreebookSummary Number of Folds in Paper: Thickness of Earth to Sun See Entire Document Download Document Text Preview Calculating the number of folds and the hypothetical size of a piece of paper so that its thickness equates to the distance from the Earth to Saturn. As a physics student doing the option on astrophysics, I have become very interested in the immensity of the universe. I decided it would be intriguing to combine it with my personal hobby of doing origami which helps me to relieve stress. As a result, I investigated the amount of times a piece of paper would need to be folded in half so that its thickness achieves a length which equates to the distance from the Earth to Saturn. I have chosen the planet Saturn because it is one of the biggest planets that can be observed by the naked eye and I have been able to see it last summer from the top of a high hill in La Pedriza in the Guadarrama mountain range near my hometown (Collado Villalba, Madrid, Spain). Popular claims suggested that it is impossible to fold a piece of paper in half more than seven times no matter its size or thickness. Previous mathematicians have worked out the number of folds required to reach the moon from the Earth which was worked out to be 42 folds[1] using a m thick paper. The size of the paper however, was not found. In theory, the average folds for a normal size A4 (m) paper is seven. This is an interesting topic because it is incredible how, by exponential growth, a miniature of m thick like a piece of paper can be folded in half to reach the distance of the planet Saturn. Britney Gallivan[2] was able to fold a piece of toilet paper of 1200 meters a number of 12 folds. She derived two mathematical expressions based on geometrical sequences, taking into consideration the amount of paper lost in every fold. These formulas make it possible to calculate the hypothetical length L and width W of a piece of paper that would be folded n times to equate the distance from the Earth to Saturn. Gallivan established some rules that would need to be followed when folding a sheet of paper in half: 1. A single rectangular sheet of paper of any size and uniform thickness can be used. 2. The fold line has to be in the same direction each time. 3. The folding process must not tear the paper. 4. When folded in half, the portions of the inner layers which face one another must almost touch one another. 5. The average thickness or structure of material of paper must remain unaffected by the folding process. 6. A fold is considered complete if portions of all layers lie in one straight line. Hence, the length L of the paper influences the number of times it can be folded in half. Hypothesis – The distance from the Earth to Saturn will be obtained by folding a piece of paper ≥50 times with a hypothetically large enough paper. This exploration used exponential growth and logarithms in order to find out the number of folds required to reach Saturn. Before any calculation was done, it was indispensable to collect all See Entire Document Join FreeBookSummary to continue reading See Entire Document Join FreeBookSummary to continue reading the data required for the investigation. All the values were used were in the international system of units (metres) and standard form in order to keep the exploration standardized. Taking into consideration the elliptical orbits of the planets, sometimes they are closer to Earth and sometimes they are further away. Therefore, the value used during this mathematical exploration was the mean value of when they are the furthest apart and the closest together. Astronomical units (AU) are the standard unit measure used when dealing with distances within the Solar System. 1 AU is equal to the distance from the Earth to the Sun which is equivalent tom. Distance from the Earth to Saturn when closer together – 8.00 AU[3] Distance from the Earth to Saturn when further apart – 11.0 AU3 Mean distance from the Earth to Saturn – AU The thickness of a normal A4 paper (0.210 297m) was calculated by taking a measurement of an office pack of 500 pages (80gsm) which was found to be about 0.05m. The thickness of each individual page is calculated by dividing the total thickness (0.05m) by the amount of sheets (500); giving a result of m. Whenever a paper is folded in half, the number of layers is doubled so the thickness increases by two. When there is only one layer of paper (not folded), its thickness is m. Once it is folded in half for the first time, its thickness will be multiplied by 2 hence, Folding it one more time means multiplying it by two again, Thus, an expression can be established, showing the exponential growth; The expression can be represented in a graph to illustrate graphically the exponential growth because of folding. Graph 1: From Graph 1, it is possible to visualise how, something that seems unrealistic like folding a sheet of paper to reach Saturn becomes possible. The graph also illustrates how rapidly exponential growth occurs. Since the expression needs to be equal to the distance from the Earth to Saturn to work out (the number of folds), an equation to find can be solved: To find, the rules of logarithms were put in place due to the exponential nature of the equation. The answer has been rounded up to 54 because it is not possible to have a half fold. Gallivan derived the following formula for the minimum length of a piece of paper of thickness t to be folded n times in a single direction To prove this formula, it is neccesary to understand that after each fold, some part of the paper is lost and becomes a rounded edge. I folded an A4 sheet of paper seven times in order to illustrate As you can see from the picture, there is a rounded edge on the side which is paper being lost and is not contributing to the real thickness but just joining the layers. The curved portion becomes bigger in correlation with the number of folds and begins to take a greater area of the volume of the paper. At the first fold, a semicircle of radius t (thickness) is formed, which has a perimeter . Thus, units of the See Entire Document Join FreeBookSummary to continue reading paper are being used in the fold. A paper smaller than this cannot be folded since there is not enough paper to form the fold. After the fold, there is a two-layer sheet of paper with a thickness of 2t. Another fold results in folding the second layer over the first layer. The second layer has a radius of , so it uses units of paper. The total amount of paper used by the second fold, for both layers, is resulting in a four-layer piece of paper. The i^th fold begings with layers, and folding the j^th layer uses units of paper. Hence, the total length of paper used for the i^th fold is given by Therefore, to obtain the total length of paper required for n number of folds, sum this over i from 1 to n, which gives Gallivan’s formula: The thickness can be substituted into t and the number of folds can be substituted into n which gives L to be equal to 1.70-10^28m. The other expression proposed by Gallivan can be used to calculate the width of the paper. If the length lost in the rad of earlier folds is not considered, the length lost must be considered in the last fold. At the final fold n, the side of the square must be at least equal to the length lost in the final fold which is (amount of length lost in each fold. Taking into consideration that the total area of the sheet (area = nb of sheets in penultimate step area of square in penultimate step) is preserved, Gallivan’s equation can be derived: Again, the thickness and number of folds can be substituted and an answer for W can be found giving W to be 2.69-10^20m. In conclusion, the initial hypothesis was right since the number of folds was 54 which is, indeed, greater than 50. The hypothetical paper that could, in theory, be folded 54 times so that its thickness equates the distance from the Earth to Saturn of m would be long and wide (taking into consideration that its thickness would bem). The dimensions of this paper would be bigger than the actual distance from the Earth to Saturn so, unfortunately, we do not have a paper that big that would allow us to reach to Saturn just by folding it in half. This mathematical exploration used logarithms to find out the number of folds needed to reach Saturn with a m thick paper. However, the dimensions of this sheet of paper would be too big and hence, impossible to find in the Earth’s surface. Nevertheless, the exploration could have looked at using a thinner piece of paper to see if its dimensions would have been smaller and perhaps, we would have been able to find it in the surface of the Earth and we would have been able to reach Saturn. References; ; Astronomy, S. (2012). How Far away is Saturn? [Online] Space.com. Available at: http:/www.space.com/18477-how-far-away-is-saturn.html [Accessed 16 Jan. 2017]. IFLScience. (2016). Fold A Piece of Paper in Half 103 Times and It Will Be As Thick As the UNIVERSE. See Entire Document Join FreeBookSummary to continue reading [Online] Available at: http:/www.iflscience.com/space/fold-piece-paper-half-103-times-and-it-will-be-thick-universe/ [Accessed 16 Jan. 2017]. Pomonahistorical.org. (2002). Folding Paper in Half Twelve Times. [Online] Available at: http:/pomonahistorical.org/12times.htm [Accessed 16 Jan. 2017]. [1] IFLScience. (2016). Fold A Piece of Paper in Half 103 Times and It Will Be As Thick As the UNIVERSE. [online] Available at: http:/www.iflscience.com/space/ fold-piece-paper-half-103-times-and-it-will-be-thick-universe/ [Accessed 16 Jan. 2017]. [2]Pomonahistorical.org. (2002). Folding Paper in Half Twelve Times. [online] Available at: http:/pomonahistorical.org/12times.htm [Accessed 16 Jan. 2017]. [3] Astronomy, S. (2012). How Far away is Saturn? [online] Space.com. Available at: http:/www.space.com/18477-how-far-away-is-saturn.html [Accessed 16 Jan. 2017]. See Entire Document Join FreeBookSummary to continue reading
{"url":"https://freebooksummary.com/number-of-folds-in-paper-thickness-of-earth-to-sun","timestamp":"2024-11-10T12:09:58Z","content_type":"text/html","content_length":"63756","record_id":"<urn:uuid:8a66ddce-c6b2-488e-9991-cd36f727a693>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00589.warc.gz"}
Savings, Savings, and More Savings This post is about savings. It is also a plea with Nick Rowe to slow the )##@ down in posting things that get stuck in my head. I’m still working on this savings thing, and now he’s got a post up about capital that I’m chewing over too. I have a life, Nick, for God’s sake - pump the brakes! That said, this is just a wrap-up of a post about “What happens to GDP if I save 100 dollars”? And the point there was that this is an ill-formed question. Or at least it is ill-formed because we economists have screwed up by using the word “savings” for lots of different things. From the perspective of an individual, they earn income in dollars. That income in dollars is divided across several different uses, several of which could be called “savings”. So let’s be a little more exact. Write \(Inc_t = C_{t,now} + C_{t,later} + I_t + Bank_t\). This says that income can be spent in four ways: 1. $C_{t,now}$ is spending on consumption goods (or services) that are consumed in period $t$. For example, I bought a Diet Coke this afternoon, and just drank it. 2. $C_{t,later}$ is spending on consumption goods that are not consumed in period $t$. For example, I bought a case of Diet Coke today, and it is sitting in my office. I will not consume all of those Diet Cokes before the close of 2016Q1. The purchase, though, gets added to 2016Q1 GDP, even though I don’t consume all of them that quarter. 3. $I_t$ is spending on investment goods. Which, to be clear, is simply an arbitrary definition based on an assumption about what the goods will be used for. An example here would be if I bought a 4. $Bank_t$ is the net change in my bank account in period $t$. This could be positive (I saved!) or negative (I borrowed!). In plain English, numbers (2)-(4) all could conceivably be called “savings”. For the purposes of thinking about macroeconomics, though, the distinction between (4) and (1)-(3) is crucial. (1)-(3), whatever you call them, all involve actually spending money. Whether you spend that money on goods that you then put in the pantry, or store underground, or use to produce more goods next period is irrelevant. You spent the money. “What happens to GDP if I save 100 dollars?” is, I’m sure, a question about what happens if I increase (4) by 100 dollars and decrease (1)-(3) in combination by 100 What I said in the last post was that this could have an effect on GDP. It certainly lowers the velocity of money, so nominal GDP would fall. If prices are sticky, then real GDP would fall as well. If you have in mind a different concept of “save 100 dollars” then the effect on GDP could be different. If you mean that you raise (2) or (3) by 100 dollars while lowering (4) by 100 dollars, then this could raise GDP. You are now spending 100 dollars that you otherwise would not have. You have raised velocity, so nominal GDP goes up, and if prices are sticky, real GDP goes up too. Even though you are “saving” 100 dollars of goods by buying a bulldozer or putting your Diet Coke cans in the garage fridge for later, you still spent 100 dollars on goods - and that is all GDP cares about. On the other hand, perhaps by “save 100 dollars” you mean that you lowered (1) or (2) by 100 and raised (3) by 100. That is, you “saved” by switching from buying consumption goods to buying investment goods. What effect does that have on GDP in period $t$? None (okay, maybe a 2nd order effect if investment good producers have a different propensity to spend their income than you do). You still spent the 100 dollars. In the future, this may effect GDP because the bulldozer may let you produce more output. In growth models, like the Solow model, “savings” mean spending money on (3). And the more you save, the higher will be GDP in the future. So the original question is best phrased as “What happens to GDP in this period if I put 100 dollars of my money in the bank this period rather than spending it on any kind of good or service?” And that could lower GDP today. If you mean a different kind of “savings”, then you have to be specific. Or, as economists we have to teach you to be more specific. Or, as economists we have to push back and force you to be more specific. Because the answers are different depending on what you mean. Beware what follows This last bit is all wild-ass speculation. If each person has an budget constraint of the form I gave above, we could add up across all individuals like this \(\sum_i Inc_t = \sum_i C_{t,now} + \ sum_i C_{t,later} + \sum_i I_t + \sum_i Bank_t.\) Now, let’s say that total spending on goods and services (the two C terms plus the I term) are equal to some multiple, $c$, of income. $c$ could be greater than one (so people are net borrowing) or less than (people are net saving). Then I have \((1-c) \sum_i Inc_t = \sum_i Bank_t.\) The sum of income flowing to individuals has to add up to nominal GDP by the national income product accounts, so call $\sum_i Inc_t = PY$. Then plugging this in and rearranging I have \(PY = \frac{\ sum_i Bank_t}{1-c}\). If $(1-c)>0$, then this means people do not spend all their income, so it makes sense that $\sum_i Bank_t >0$. In other words, savings are accumulating in aggregate. If $(1-c)<0$, then it makes sense that $\sum_i Bank_t<0$, or savings are de-cumulating in aggregate. Either way, the ratio on the right is positive, and equal to nominal GDP. Is this some kind of bastard quantity theory? I ask because I honestly don’t know. I don’t even know what to make of this, or if I made some kind of dumb mistake in here somewhere. Should I take this seriously? Could I think of the terms in the ratio becoming unlinked? That is, could we think that $c$ stays constant, but $\sum_i Bank_t$ goes up in absolute value, so nominal GDP goes up? What would it mean for the total flow of money into banks to go up, but for people’s propensity to consume from their income to stay the same?
{"url":"https://growthecon.com/feed/2016/03/29/Savings-Redux.html","timestamp":"2024-11-10T01:53:05Z","content_type":"text/html","content_length":"15258","record_id":"<urn:uuid:44e7e782-47b5-4b01-a8a2-8d6dc67a6ad1>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00712.warc.gz"}
Why can’t my students do word problems? Helping Kids Find Their Way in Word Problems A MiddleWeb Blog At my school, we’re focused on improving ACT math scores. Anyone with a passing knowledge of ACT knows there is a lot of reading on the test, even in the math section. I have to do more than teach my students the math concepts; I have to teach them how to decode what the word problem is actually asking. So the question I’ve been asking myself lately is how do I help my students puzzle through the math questions on the ACT? In other words, how can I help them be effective solvers of word problems? It’s not that I haven’t been trying. I have tried bell ringers, graphic organizers, teaching key words… and the list goes on. When all these strategies met with limited success, I realized I needed help so I started researching. I found an article in NCTM’s monthly journal Mathematics Teacher: Learning and Teaching (PK-12) titled “Making Word Problems Meaningful” (August 2021). The article convinced me that a lot of things I’ve tried in the past are not good strategies. What doesn’t work For instance, in the past I have given students a set of steps or procedures to follow to solve word problems. The procedural steps usually followed a mnemonic device (e.g., SOLVE) so that it would be easy for the students to remember. Well, the article plainly states, “forcing students to follow one set of steps to solve word problems has no influence on their ability to solve those problems.” I have also given students graphic organizers to organize their thoughts about the problem. Many of the graphic organizers had four quadrants where they could draw a picture or put the question in their own words. This article also discourages doing this, referring to it as “over-scaffolding” and cautioning that it makes “solving word problems tedious and boring.” I can verify that making students do this is tedious for them and for the teacher, but I tried it because I didn’t know what else to do. Research-backed strategies So what strategies should we be using in our classroom? This article suggests using some of the same strategies used to improve reading comprehension. The four specific strategies are (1) visualizing (2) retelling (3) making connections and (4) asking questions. As mentioned previously, the article recommended against making students write down their questions, connections or retellings in a graphic organizer. Instead these four strategies should be used to help students think about the problem with an emphasis on encouraging students to talk with each other. Starting small I am so excited to get started, really dig into this article, and integrate the strategies into my classroom. I also want a chance to do a little more research as well as take a trip to the English department to talk with some of our English teachers. Besides, I need to get a baseline for where my students are in relation to word problems. Many of the word problems we have done this year have been embedded in the content we are covering at the time. After we learn a math concept, we will then “apply” it by solving a word problem. Not surprisingly, this has not helped students improve in the area of word problems. So today I gave my students a word problem I found online which had been created for a program called “Closing the Math Achievement Gap.” The problem had no bearing on the concept we are currently learning. I was up front about this with my students. I let them work in groups of 3 or 4, and I let them choose who they wanted to work with. I gave them the word problem and put 15 minutes on the timer. My only instruction was to read the problem and try to solve it. It was hard to stay silent Here is the problem if you are interested: Your friend leaves for the party 2 hours earlier than you do. (a) If your friend is averaging 50 mph and you are trying to catch him by averaging 60 mph, how many hours will it take for you to catch him? (b) How far will you each have traveled when you catch up to your friend? I walked around and observed and made notes of what the students said and did. This is what I heard over and over in the first five minutes in all four of my Algebra II classes: • I hate these problems • I don’t know where to start • I was never good at these • I’m frustrated I also heard students trying to use the numbers 2, 50, and 60 (the ones mentioned in the problem) without having read the problem. I guess they thought they might just get lucky and come up with the right answer. I was getting nervous. It was so hard to stay silent. This negative self talk went on for a full five minutes (not among all students – some starting writing and thinking about the problem pretty Then I starting hearing words like distance and time – students began to pick up their pencils – and then students started writing things down. Then I heard some “ohh’s.” I could tell some students were making headway. Even in my most challenging class, by the end of the 15 minutes every group had made some headway. This class had five groups. One group arrived at the correct answer with the correct thinking; two other groups didn’t have the right answer but they had the right thinking. The other two groups had made progress but hadn’t quite worked out the correct thinking. The rest of my classes more or less followed this same pattern. At the end of the 15 minutes, I took up their work and did not tell them whether their answer was correct or incorrect. The next day I gave their papers back to them. This time I put 5 minutes on the timer and asked them to get back in their groups and revise their work. I thought they would be bored or uninterested because they had worked on the problem fifteen minutes the day before. Of course, some groups were uninterested, but several had really thought about the problem overnight and were still trying to work it out. The best thing that happened all day was when a student asked, “But was my thinking right?” As a class, we briefly talked about what the answer was and the different strategies they had used. Take-aways from this first experiment Just by observing them working – really observing and listening to them and NOT helping – I learned a lot: • Most students have not developed the stamina needed to be successful at word problems. • They need practice working problems that are not connected to a certain topic. • I have to resist swooping in and telling them what to do. • They need to be encouraged to talk to each other about what they are thinking. • They want to do everything “in their head.” • They need time and practice to begin to see word problems as an important part of mathematical thinking. I will continue to purposefully work on helping students become more proficient at word problems. I plan to continue this topic in my next post here. The article from NCTM’s Mathematics Teacher: Learning and Teaching titled, “Making Word Problems Meaningful” (summary) will be my jumping off point and blueprint. In the meantime, if anyone has suggestions or strategies that have helped students become better at working word problems, please share in the comments. Image by Andreia Joldes from Pixabay 10 Responses 1. I enjoyed reading your article and agree that memorized routines are not as useful as a little productive struggle. when moving from basic word problems to math problem solving. Since you asked…I have an open-ended protocol for PS that I’ve used at all grade levels. https://www.karin-hess.com/_files/ugd/5e86bd_93eb20501ec940f9bc23b209f51af098.pdf In this handout, I differentiate word problems v. problem solving. I like to have small groups discuss the problem together even if they solve individually – what do we really have to find out? how can we use a representation to show the problem? what strategies might work etc. The reasoning behind the solution uses WORDS to describe WHY the strategy worked, etc. Finally, make a mathematical connection beyond the problem (transfer knowledge). Maybe your readers can use this as well. □ It is so interesting that you posted this comment today, this morning I was thinking about what constitutes a word problem. Your handout lays out what I was trying to work out in my mind. Thank you so much for this resource, I will definitely be using this in my room. ☆ I’m glad you found it useful. My thinking about word problems is that there is an intended strategy whereas problem solving is more open ended ○ That makes perfect sense! I’m finding that a lot of things that I think are intuitive are not intuitive for the students, so it’s a learning process for me and them. 2. See this article here at MiddleWeb by Karin Hess: “Six Ways Teachers Can Support Rigor by Deepening Thinking.” 3. Agreed! Teachers at all grade levels need to do more “thinking out loud” so kids learn that solutions to more complex problems follow somewhat crooked paths. 4. I wish there were more Math teachers like you, willing to take the TIME to have students thinks, willing to put them in groups, willing to revisit a problem instead of assigning twenty more problems. Thanks □ You are right it takes so much time. At first I felt guilty taking the time but I had tried everything else. Thank you so much!!!! 5. I used to love word problem and building and solving algebraic equations from them. 6. Word problems are detective cases. You start with the question of finding out what is the detective story you want to solve not with reading the things that lead up to the questions. How can a detective solve a problem if he does not know what the problem is? He starts with the questions What is involved? Who is involved, when did it happen? How did it happen ? Why? In those questions there are some clues. He asks and follow the clues. In that word problem, there are some clue words that tell you what action needs to be taken. For example the question might be How many were involved in all! The clue word is in all and it tells you to take the action to add. How many were left? The. Lue word is left and it tells you to use the process of subtraction. In summart, use the clues. At the higher levels of problem solving there may be more than one or two clues, there may be information there that is not needed and the level of problem solving can become more difficult just as it in real life detective or legal cases This site uses Akismet to reduce spam. Learn how your comment data is processed.
{"url":"https://www.middleweb.com/48410/helping-kids-find-their-way-in-word-problems/","timestamp":"2024-11-11T11:33:51Z","content_type":"text/html","content_length":"266016","record_id":"<urn:uuid:66440f25-44bf-47f8-9f7f-b9ea2b907970>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00835.warc.gz"}
A population with a mean of µ = 6 has Σ X = 42. How many scores are in the population?Group of answer choicesN=42N = 42/6 = 7N = 6*42 = 252N = 6/42 = 1/7 A population with a mean of µ = 6 has Σ X = 42. How many scores are in the population?Group of answer choicesN=42N = 42/6 = 7N = 6*42 = 252N = 6/42 = 1/7 Solution 1 Para resolver esta pregunta, necesitamos usar la fórmula de la media (µ) de una población, que es: $\mu = \frac{\Sigma X}{N}$ • $\mu$ es la media de la población. • $\Sigma X$ es la suma de todos los valores en la población. • $N$ es el número de puntuaciones en la población Knowee AI is a powerful AI-powered study tool designed to help you to solve study problem. Knowee AI is a powerful AI-powered study tool designed to help you to solve study problem. Knowee AI is a powerful AI-powered study tool designed to help you to solve study problem. Knowee AI is a powerful AI-powered study tool designed to help you to solve study problem. Knowee AI is a powerful AI-powered study tool designed to help you to solve study problem. Knowee AI Upgrade your grade with Knowee Get personalized homework help. Review tough concepts in more detail, or go deeper into your topic by exploring other relevant questions.
{"url":"https://knowee.ai/questions/78442864-a-population-with-a-mean-of-has-x-how-many-scores-are-in-the","timestamp":"2024-11-04T08:37:47Z","content_type":"text/html","content_length":"367834","record_id":"<urn:uuid:a2eb8f9d-ed47-4161-b28a-f9a5accd12f2>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00788.warc.gz"}
Homeschooling Kindergarten Math - Stress Free Math for Kids Many parents are looking for options for homeschooling kindergarten math without having their child be on a screen all day. Although there are many curriculums available you really don’t need an expensive curriculum for kindergarten math. You also don’t need workbooks. (If you do want to purchase a curriculum I have a post on homeschool math curriculums here. And if you are going to choose Right Start math, I might go ahead and suggest Level A for kindergarten.) Here are the ways I teach kindergarten math in a hands on way. My goal is to develop number sense, mathematical thinking, a foundation for math skills, and, most importantly, a child’s love for math and confidence in their own abilities. Hands On Math for Kindergarten • Do lots of counting different objects, making sure you touch each object as you count. Teach the child that the last number counted is the amount of the set. Count both forwards and backwards (to 20 to meet school expectations, but most kids will be able to go higher). • Use your counted objects to figure out 1 more and 1 less. • Build number sense, starting to 5 and then going to 10 and 20. Use 10 frames such as these to build numbers. There are many versions- magnetic ten frames, large floor mat ten frames, etc. I have and use the rekenrek ten frame to build the connection with the rekenrek mentioned in the next bullet point. • Build numbers in many ways using lots of different objects, including unifix cubes, counters, manipulatives, ten frames, rekenreks. Then draw the numbers in different ways. You want your child to be able to identify, write, and represent numbers through at least 20. • Make up simple addition and subtraction story problems to go with the numbers. “I had 4 cookies, my sister gave me 2. How many cookies do I have?” Use counters or the rekenrek to show the answer. Have your child make up problems for you to solve. • Make a number path or life size number line and act out addition and subtraction. Teach subtraction as difference between two numbers, not just take away. • After children have practice grouping numbers in tens (with ten frames, rekenreks, etc) teach them to count by tens. • Do lots of subsitizing (recognizing groups of numbers in different ways without counting.) Tiny Polka Dot cards are perfect for this. I also use these cards for comparing numbers- we play war with them and whoever’s number is greater takes both cards. • Practice sorting- there are sets you can buy for this but it’s so easy to do it with practical objects. Sort in different ways, by color, shape, etc. Make picture graphs of the sorted objects. Shapes and Spatial Reasoning • Read books about shapes and make shapes in different ways- playdough and cutters, wiki sticks, and popsicle sticks are all great for this. shapes with craft sticks shapes with wiki sticks Developing Mathematical Thinking • Play lots and lots of games- dice games, card games, board games. All the old classics like chutes and ladders, candyland, war, bingo, checkers are fantastic. There are Junior versions available of Monopoly, Clue, Life, and Set that are perfect for this age. I play Don’t Break the Ice with kids who call it “the take away game” If you want to buy math games the 120 floor mat game will be useful all throughout elementary school, and my kindergarten students also love Sleeping Queens. Storytime Chess is a great way to introduce chess to young children. • Provide your child with toys and puzzles that develop math skills and let them play! My favorite is Magformers or Magnatiles. Gears and wooden shape blocks are also great. I have posts on using Legos and Hot Wheels cars for teaching math. • Any of the math manipulatives on my list can be played with by kindergarteners and it will amaze you what skills they develop on their own. • Emphasize the simple ways you do math every day in practical ways. Websites and Apps While I believe most kindergarten math should be hands on and play based, there are websites and apps that can provide a supplement. Here are some recommended ones. State Standards for Kindergarten If you are in a common core state here are your math standards. Illustrative Mathematics has a fantastic page with downloadable task ideas for most of the common core standards, listed by standard. For other states you can just google your state name and “kindergarten math standards.” If you are in Texas like me, here are the kindergarten math TEKS for reference. This is what would be expected of your child in public school (although as a homeschooler in Texas you have complete freedom to set your own goals and standards for your child.) Although the list may look daunting, you will see that all of these except financial literacy are covered in the play based ideas above! TEKS for Texas 1. (A) count forward and backward to at least 20 with and without objects; 2. (B) read, write, and represent whole numbers from 0 to at least 20 with and without objects or pictures; 3. (C) count a set of objects up to at least 20 and demonstrate that the last number said tells the number of objects in the set regardless of their arrangement or order; 4. (D) recognize instantly the quantity of a small group of objects in organized and random arrangements; 5. (E) generate a set using concrete and pictorial models that represents a number that is more than, less than, and equal to a given number up to 20; 6. (F) generate a number that is one more than or one less than another number up to at least 20; 7. (G) compare sets of objects up to at least 20 in each set using comparative language; 8. (H) use comparative language to describe two numbers up to 20 presented as written numerals; and 9. (I) compose and decompose numbers up to 10 with objects and pictures. 10. (3) Number and operations. The student applies mathematical process standards to develop an understanding of addition and subtraction situations in order to solve problems. The student is expected to: □ (A) model the action of joining to represent addition and the action of separating to represent subtraction; □ (B) solve word problems using objects and drawings to find sums up to 10 and differences within 10; and □ (C) explain the strategies used to solve problems involving adding and subtracting within 10 using spoken words, concrete and pictorial models, and number sentences. 11. (4) Number and operations. The student applies mathematical process standards to identify coins in order to recognize the need for monetary transactions. The student is expected to identify U.S. coins by name, including pennies, nickels, dimes, and quarters. 12. (5) Algebraic reasoning. The student applies mathematical process standards to identify the pattern in the number word list. The student is expected to recite numbers up to at least 100 by ones and tens beginning with any given number. 13. (6) Geometry and measurement. The student applies mathematical process standards to analyze attributes of two-dimensional shapes and three-dimensional solids to develop generalizations about their properties. The student is expected to: □ (A) identify two-dimensional shapes, including circles, triangles, rectangles, and squares as special rectangles; □ (B) identify three-dimensional solids, including cylinders, cones, spheres, and cubes, in the real world; □ (C) identify two-dimensional components of three-dimensional objects; □ (D) identify attributes of two-dimensional shapes using informal and formal geometric language interchangeably; □ (E) classify and sort a variety of regular and irregular two- and three-dimensional figures regardless of orientation or size; and □ (F) create two-dimensional shapes using a variety of materials and drawings. 14. (7) Geometry and measurement. The student applies mathematical process standards to directly compare measurable attributes. The student is expected to: □ (A) give an example of a measurable attribute of a given object, including length, capacity, and weight; and □ (B) compare two objects with a common measurable attribute to see which object has more of/less of the attribute and describe the difference. 15. (8) Data analysis. The student applies mathematical process standards to collect and organize data to make it useful for interpreting information. The student is expected to: □ (A) collect, sort, and organize data into two or three categories; □ (B) use data to create real-object and picture graphs; and □ (C) draw conclusions from real-object and picture graphs. 16. (9) Personal financial literacy. The student applies mathematical process standards to manage one’s financial resources effectively for lifetime financial security. The student is expected to: □ (A) identify ways to earn income; □ (B) differentiate between money received as income and money received as gifts; □ (C) list simple skills required for jobs; and □ (D) distinguish between wants and needs and identify income as a source to meet one’s wants and needs. {"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
{"url":"https://stressfreemathforkids.com/homeschooling/homeschooling-kindergarten-math/","timestamp":"2024-11-07T22:24:52Z","content_type":"text/html","content_length":"738173","record_id":"<urn:uuid:0e3fcd43-2cef-4675-a9db-69069cad4d22>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00072.warc.gz"}
If sinθ+cosθ=2, then tanθ+cotθ= (a) 1 (b) 2... | Filo Question asked by Filo student Not the question you're searching for? + Ask your question Video solutions (1) Learn from their 1-to-1 discussion with Filo tutors. 5 mins Uploaded on: 2/12/2023 Was this solution helpful? Found 7 tutors discussing this question Discuss this question LIVE for FREE 8 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Practice more questions on All topics View more Students who ask this question also asked View more Stuck on the question or explanation? Connect with our Mathematics tutors online and get step by step solution of this question. 231 students are taking LIVE classes Question Text If , then (a) 1 (b) 2 Updated On Feb 12, 2023 Topic All topics Subject Mathematics Class Grade 12 Answer Type Video solution: 1 Upvotes 116 Avg. Video Duration 5 min
{"url":"https://askfilo.com/user-question-answers-mathematics/if-then-a-1-b-2-34323330323737","timestamp":"2024-11-05T00:30:26Z","content_type":"text/html","content_length":"240465","record_id":"<urn:uuid:f651e1c6-72e0-4640-a224-a6700533d35b>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00485.warc.gz"}
(A) UIVSH:IS (C) ZBETSVIU (IS) %11 उस विकल्प का वयन करन रिवत ख्... | Filo Question asked by Filo student (A) UIVSH:IS (C) ZBETSVIU (IS) उस विकल्प का वयन करन रिवत ख्यान की मर और शुल्खका (A) 19 (B) 25 (C) 18 (D) 21 उस विकल्प का चयन करें उो उसी तरह संबंधित है, जिस Not the question you're searching for? + Ask your question Video solutions (1) Learn from their 1-to-1 discussion with Filo tutors. 7 mins Uploaded on: 12/20/2022 Was this solution helpful? Found 6 tutors discussing this question Discuss this question LIVE 6 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Students who ask this question also asked View more Stuck on the question or explanation? Connect with our Mathematics tutors online and get step by step solution of this question. 231 students are taking LIVE classes Question Text (A) UIVSH:IS (C) ZBETSVIU (IS) उस विकल्प का वयन करन रिवत ख्यान की मर और शुल्खका (A) 19 (B) 25 (C) 18 (D) 21 उस विकल्प का चयन करें उो उसी तरह संबंधित है, जिस Updated On Dec 20, 2022 Topic Algebra Subject Mathematics Class Class 12 Answer Type Video solution: 1 Upvotes 59 Avg. Video Duration 7 min
{"url":"https://askfilo.com/user-question-answers-mathematics/a-uivsh-is-c-zbetsviu-is-us-viklp-kaa-vyn-krn-rivt-khyaan-33343538393633","timestamp":"2024-11-05T04:05:15Z","content_type":"text/html","content_length":"311320","record_id":"<urn:uuid:8b2a8afa-50fb-4845-9472-3580ed1365b1>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00074.warc.gz"}
Possibility spaces and the notion of novelty: from music to biology What is a biological novelty? Is it possible to coin a sound concept of new possibility? What articulation between the concepts of novelty and function? We provide a new perspective on the relation between the space of description of an object and the appearance of novelties. One of the aims of this perspective is to facilitate the interaction between mathematics and historical sciences. The definition of novelties is paradoxical: if one can define in advance the possibles, then they are not genuinely new. By analyzing the situation in set theory, we show that defining generic (i.e., shared) and specific (i.e., individual) properties of elements of a set are radically different notions. As a result, generic and specific definitions of possibilities cannot be conflated. We argue that genuinely stating possibilities requires that their meaning has to be made explicit. For example, in physics, properties playing theoretical roles are generic; then, generic reasoning is sufficient to define possibilities. By contrast, in music, we argue that specific properties matter, and generic definitions become insufficient. Then, the notion of new possibilities becomes relevant and irreducible. In biology, among other examples, the generic definition of the space of DNA sequences is insufficient to state phenotypic possibilities even if we assume complete genetic determinism. The generic properties of this space are relevant for sequencing or DNA duplication, but they are inadequate to understand phenotypes. We develop a strong concept of biological novelties which justifies the notion of new possibilities and is more robust than the notion of changing description spaces. These biological novelties are not generic outcomes from an initial situation. They are specific and this specificity is associated with biological functions, that is to say, with a specific causal structure. Thus, we think that in contrast with physics, the concept of new possibilities is necessary for biology. Keywords: Novelty, Possibility space, Biological function, Organization, Emergence Possibility spaces and the notion of novelty: from music to biology ^aLaboratoire ”Matière et Systèmes Complexes” (MSC), UMR 7057 CNRS, Université Paris 7 Diderot, Paris, France ^bInstitut d’Histoire et de Philosophie des Sciences et des Techniques (IHPST) - UMR 8590, 13, rue du Four, 75006 Paris, France We provide a new perspective on the relation between the space of description of an object and the appearance of novelties. One of the aims of this perspective is to facilitate the interaction between mathematics and historical sciences. The definition of novelties is paradoxical: if one can define in advance the possibles, then they are not genuinely new. By analyzing the situation in set theory, we show that defining generic (i.e., shared) and specific (i.e., individual) properties of elements of a set are radically different notions. As a result, generic and specific definitions of possibilities cannot be conflated. We argue that genuinely stating possibilities requires that their meaning has to be made explicit. For example, in physics, properties playing theoretical roles are generic; then, generic reasoning is sufficient to define possibilities. By contrast, in music, we argue that specific properties matter, and generic definitions become insufficient. Then, the notion of new possibilities becomes relevant and irreducible. In biology, among other examples, the generic definition of the space of DNA sequences is insufficient to state phenotypic possibilities even if we assume complete genetic determinism. The generic properties of this space are relevant for sequencing or DNA duplication, but they are inadequate to understand phenotypes. We develop a strong concept of biological novelties which justifies the notion of new possibilities and is more robust than the notion of changing description spaces. These biological novelties are not generic outcomes from an initial situation. They are specific and this specificity is associated with biological functions, that is to say, with a specific causal structure. Thus, we think that in contrast with physics, the concept of new possibilities is necessary for biology. Keywords: Novelty, Possibility space, Biological functions, Organization, Emergence 1. Introduction The theory of evolution assumes that current life forms are the result of variations of preceding life forms. Since past life forms did not have all the features of current ones, it is necessary to think that novelties appear (and disappear) in the process of evolution. As a result, developing this theory has immediately led to ponder on biological novelties, and both Lamarck and Darwin discuss them (Muller & Wagner, 1991). The current phylogenetic classification of living beings uses the concept of novelty as a way to estimate the genealogical relationship between taxa. For example, phylogenetic trees minimize the number of novelty appearances to maximize the coherency of the classification. In general, the concept of open-ended evolution is central to biology, and some authors even use this notion to define living systems ( Ruiz-Mirazo et al., 2004). However, the mathematical modeling of novelties has been more neglected. For example, population genetics usually describe abstract traits and their consequences on fitness. In this field, the space of possibilities is therefore limited to allele frequencies and phenotypic novelties, if any, are postulated, not explained. By contrast, the artificial life community is struggling to provide computational frames displaying open-ended evolution, where ”open-ended” is an ambiguous concept which embeds some idea of generating novelties. “The particular properties that characterize open-ended evolution are tricky to pin down and often lack consensus […]. Yet despite the difficulty of precisely pinpointing this phenomenon, a major goal of artificial life (alife) research remains to observe open-ended evolution in an alife simulation (Bedau et al., 2000). In fact, there is little doubt that no algorithm yet devised has fully reproduced it.” (Soros & Stanley 2014) There is an intuitive reason why this goal of alife is challenging and more generally why there is a tension between mathematics and the notion of novelty. In mathematics, the structure of logical proofs is hypothetic-deductive, meaning that there should be nothing genuinely new in the proof after the hypotheses have been formulated. The same applies, mutadis mutandis, to computational frameworks. Let us consider a few definitions of novelty in evolutionary biology. Most of these definitions discriminate the relevant novelties from the irrelevant ones on the basis of a given theoretical perspective, but they do not expand on the newness of novelties per se. For example, Mayr proposes a definition focused on adaptation where biological novelties are “any newly acquired structure or property that permits the performance of a new function, which, in turn, will open a new adaptive zone” (Mayr 1963). Other definitions emphasize development: “[an evolutionary novelty is] a novel trait [based on] a qualitatively distinct developmental variant” ( West-Eberhard 2003). These definitions aim to discuss what are the biologically relevant novelties according to a given theoretical perspective. We agree that this is an aspect of the problem. However, these definitions are mostly tautological concerning what it means for something to be new. A self-contained notion of novelty should intrinsically define being new. Muller & Wagner (1991) provide a more precise definition by stating that “a morphological novelty is a structure that is neither homologous to any structure in the ancestral species nor homonymous to any other structure of the same organism.” Here, the concept of novelty is defined by the heterogeneity with respect to a history and the rest of the organisms considered. This notion has lead to a specific research program which defines novelty by development (Wagner & Lynch, 2010). However, novelties associated with the functioning of organisms and a fortiori functions are then excluded. Moreover, this notion cannot be used straightforwardly in the mathematical thinking on novelties which, we argue, is a more general problem. Emergence is a philosophical concept that is relevant for novelty. Typically, emergence corresponds to two notions that are analytically distinct. The first is synchronic emergence which is concerned with the irreducibility of a system to the analysis of its components when they are in isolation (or in simpler systems). The second, diachronic emergence, is of direct interest here because it is defined by the notion of novelty ( Stephan, 1999; Bich & Bocchi, 2012). Diachronic emergence typically comes in different variants depending on the predictability and reducibility of novelties from the initial state of affairs. Let us consider a few physical situations which can be interpreted as modeling the appearance of novelties and are regarded as models of emergence (Anderson, 1972; Anderson & Stein, 1985). Novelty in physics cannot just be the appearance of a specific configuration that never appeared before. For example, it is clear that the microscopic state (position and momentum of all particles) of a gas at equilibrium in a room is new in the sense that the odds of it occurring twice in the universe’s lifetime are vanishingly small. However, as far as equilibrium thermodynamics is concerned, this precise state does not represent something new: the macroscopic description of the gas will match those at other time points and is stationary for all intents and purposes. By contrast, the formation of a crystal from a liquid or a gas involves the appearance of patterns corresponding to the directions of the periodic disposition of atoms or molecules. These “new” patterns play a theoretical and causal role since they explain why crystals do not have the same mechanical and electrical properties in all directions, unlike gases and liquids. Think for example of graphite which tends to break along specific directions or crystals which tend to have facets. In these situations, macroscopic structures that were not present in the initial conditions appear and are theoretically meaningful. Diachronic emergence is also relevant for chaotic dynamical systems where the unpredictability of the outcome is the matter of philosophical interest (Stephan, 1999). These models of physical phenomena are defined by stable equations and space of possibilities; however, several authors, including myself, argue that these assumptions are inadequate for biology and propose alternative viewpoints. Recent theoretical works study the consequences of novelty and argue that biology requires a framework for changes of possibility space (Kauffman, 2002; Longo & Montévil, 2011, 2013a, 2014; Longo et al., 2012; Montévil et al., 2016; Loreto et al., 2016; de Vladar et al., 2017) and that the same applies to economy ( Koppl et al., 2015; Kauffman, 2016). From a philosophical perspective, the issue pertains both to emergence (Bich & Bocchi, 2012) and process philosophy (Koutroufinis, 2014, 2017). In these approaches, the object is not well described by an invariant mathematical space. Instead, the objects require that mathematical spaces change over time.^[1] More generally, invariant mathematical structures do not define these theoretical frameworks; instead, they aim to accommodate changing mathematical structures. One aim of these frameworks is to accommodate biological novelties. However, an explicit analysis of the concept of novelty in this context is still required and this paper precisely aims to perform such an analysis. In this paper, we discuss the notion of new possibilities and some of its conceptual challenges. In particular, we aim to provide a framework to respond to typical objections by mathematicians and physicists. These objections correspond to the following line of reasoning. We can always define spaces that are large enough to seemingly accommodate every possibility so that there is no need for the concept of new possibility and the concept of possibility is sufficient. For example, spaces of all possible forms should be able to accommodate all biological shapes, or spaces of all possible mathematical functions should be sufficient to model any biological interactions. This reasoning enables physicists and mathematicians to think about the situation in the hypothetic-deductive framework that we mentioned in the beginning of this section. In practice, the spaces of possibilities used are far smaller but remain static, for example, in physical approaches to evo-devo (Zhu et al., 2010). In another context, this line of reasoning leads to the historical thought experiment considering the set of all books of a given length. This idea has been discussed by Leibniz (1991, p.61) and popularized by Borges ( 1998) under the name of the Library of Babel. Such a construct seems to exclude the notion that human arts and sciences generate new books and that paradigm shifts entail new possibilities for books. To gain a better understanding of the concept of novelty and of new possibilities, we start from a paradox that stems from Bergson’s work. Bergson discusses the case of symphonies and states that a symphony is not possible before it becomes real because conceiving the precise possibility of a symphony is equivalent to composing it. However, we point out that one can define the set of all possible music scores as the set of combinations of musical symbols. We show that the confrontation of these two lines of reasoning leads to a paradox. We then argue that the concepts of possibility and of novelty require a more precise discussion than a set theoretical definition. Defining generically the elements of a set is not the same thing as defining the individual properties of each of its elements. In a second part, we apply this discussion to biology. We show that the notion of new possibilities is relevant even from perspectives that seem incompatible with it, such as genetic determinism. We characterize the notion of novelty in physical models of self-organization and conclude that they do not require new possibilities. We then elaborate on biological novelties and argue that new possibilities are relevant. We will also show that novelties associated with biological functions have a special theoretical role. 2. New possibilities: an enlightening paradox Several authors have recently emphasized the need to take into account changes of the space of description of biological objects (Kauffman, 2002; Longo & Montévil, 2011, 2013a, 2014; Longo et al., 2012; Montévil et al., 2016; Loreto et al., 2016). In Montévil et al. (2016), we argue that this assumption is part of a fundamental theoretical principle: the mathematical space required to describe and understand the organization of an organism may change with the flow of time, both in life cycles and over evolutionary time scales. In these frames, changes of possibility space are a counterpart to the qualitative changes of biological objects which, in evolutionary theory, lead to the remarkable diversity of current life forms. This perspective is foreign to physics where the possibility space is always postulated as an a priori of the theoretical description. In this section, we will discuss in greater details the concept of new possibilities. This concept is a core component of Bergson’s philosophy of time. It underlies the philosophical understanding of the creativity of living beings in evolution. We will use the following text to show a paradox that helps to understand new possibilities. When a musician writes a symphony, was his work possible before it became real? Yes, in the sense that there was no insurmountable obstacle to realize it. However, it is easy to shift from this entirely negative meaning of the word to a positive one without noticing it: one pictures that everything that happens may be perceived beforehand by a sufficiently informed mind, and thus preexist in an ideal form to its realization; — this idea is absurd in the case of a work of art since as soon as the musician has a precise and complete idea of the symphony he is going to produce, the symphony is done. Neither in the mind of the artist nor, a fortiori, in any other mind comparable to ours, even impersonal or merely virtual, would the symphony lay as a possibility before it became real. ( Bergson 2014, we translate.) We think that this statement of Bergson leads to a paradox and that this paradox is key to a better understanding of the concept of novelty. The paradox is that it is possible to define a set which includes all written symphonies, thus arguably all possible symphonies, and, at the same time, that there is no obvious flaw in Bergson’s reasoning. Can we define the possibility of a symphony without composing it? There is a standardized way to write classical music and the writing of a symphony leads to a music score: a finite sequence of symbols from a finite set of symbols (the notes and their kinds). Let us call ${M}_{s}$ the set of music scores for a given set of instruments. ${M}_{s}$ is a countable set, and the set of all possible symphonies seems to be mathematically well-defined. ${M}_{s}$ is based on the same principle as Leibniz (1991) and Borges (1998) idea of a library containing all books of a finite, given length in our alphabet. We are aware that some composers use extended writing systems, that new musical instruments get continually invented, and that the symphony is not just its music score. However, we focus on the difficulty raised by the concept of novelty when a space of description is well-defined, which is the situation that seems the most opposed to a strong notion of novelty. Let us now phrase our paradox. By defining ${M}_{s}$, it seems that we define all possible symphonies. At the same time, Bergson’s reasoning has no obvious flaw: having a precise and complete idea of the possibility of a symphony implies that this symphony has already been composed. To solve the discrepancy between these two line of reasoning, we will argue that defining ${M}_{s}$ is very different from defining musically relevant symphonies. The core of our argument is the distinction between the set ${M}_{s}$ of possible music scores defined by a writing system and the putative set of possible symphonies endowed with an assessment of their musical quality. 2.1. Defining a set differs from defining each of its elements individually In the following discussion, we carefully analyze the meaning of defining a possibility. We want first to introduce a conceptual distinction between mathematical objects defined collectively, in a generic manner, and the actual definition of an individual or specific element. An example will show why this distinction is mandatory in mathematics. In a given logical axiomatic, the possible definitions that one can produce form a countable set: the possible definitions are as numerous as natural numbers: they are finite combinations of symbols from a finite set of symbols. However, the set of real numbers is not countable; it has a larger cardinality than natural numbers. In this sense, there are far more real numbers than usable definitions of specific real numbers. The real numbers that can be defined individually, such as $\pi$ or $\frac{1}{2}$, are very few in comparison with the ones that cannot be defined individually. Actually, the probability of being able to define specifically a real number chosen randomly^[2] is zero. Another way to emphasize this point is to say that there are more real numbers that possible names to name them individually, which makes most individual real numbers ineffable.^[3] Real numbers can still be defined, for example using the Dedekind cut, but this definition is a generic one. Thus, defining a set of possibilities generically and individually defining each one of its elements are very different notions. The lack of specific definitions for each real numbers does not prevent mathematical reasoning on them. Instead of reasoning on specific numbers, in most cases, reasoning involves generic properties where numbers appear as generic variables. For example, the statement ${x}^{2}\ge 0$ is valid for any real number $x$ and this statement is about a generic $x$. Any given set of axioms enables mathematicians to discuss only certain properties which are the properties of a few individual cases and generic properties. Since only finite proofs are possible, it is only possible to handle a finite number of specific cases. As a result, reasoning on infinite sets requires generic statements. For example, induction on natural numbers consists in the proof of a generic formula $\mathsc{P}\left(n\right)$ for $n=0$ (or any finite number of individual cases) and then on a proof that for a generic $n$, $\mathsc{P}\left (n\right)$ implies $\mathsc{P}\left(n+1\right)$. Then, the axiom of induction states intuitively that the validity of $\mathsc{P}$ is “propagated” from $n=0$ to all $n$. Note that in the case of natural numbers, unlike real numbers, every number can be defined individually, by counting for example. Nevertheless, it is never possible to actually define all natural numbers individually because there is an infinity of them: counting has no end. From the viewpoint of mathematical logic, we thus have three different situations for the definition of individual properties: • The set is defined, but not all of its elements can be defined individually within any axiomatic, like in the case of the real numbers. • The set is defined, and all individual elements can be defined in principle. However, any actual discussion can only involve a subset of individual elements because the set is infinite. A paradigmatic example is the set of natural numbers. • The set is finite, and there is no principled limitation. Let us now go back to symphonies. The definition of the set of music scores ${M}_{s}$ is a generic definition. The paradox that we exposed stems from the idea that defining ${M}_{s}$ would be enough to define all possible symphonies so that ${M}_{s}$ would include the writing of any symphony before it is composed. However, we have shown that conflating the generic definition of a set and the individual definitions of its elements is not logically correct. In Bergson’s words, defining a set does not always provide a “precise and complete idea” of all its individual elements. Now, does this line of reasoning applies to symphonies? ${M}_{s}$ is countable since music scores are finite combinations among a finite number of symbols. Therefore, ${M}_{s}$ is comparable to natural numbers and all individual music score can be defined by a finite axiomatic, even though only a few of them can be defined in any discussion. However, is this sufficient to define all the possible 2.2. The relevant properties are what matters To state the possibility of a symphony, we think that it is necessary to check at least that the putative symphony is an admissible symphony and not just any sequence of symbols. Here, ”symphony” means loosely a musical piece that a music lover enjoys.^[4] The set of music scores which are symphonies would be a subset of all possible music scores ${M}_{s}$. The issue lies in the definition of this set. Even in mathematical logic, problems in the definition of a set are not circumscribed to the definition of individual elements. The issue is sometimes to decide whether elements are part of the set or not. For example, the definition of subsets of natural numbers may require far more complex logics than the definition of natural numbers themselves. Defining a subset is a problem that also appears when the definition depends on the world. For example, the set of the couples (year, French president elected) are a subset of numbers $×$ string of characters. However, the number of elements of this set that we can enumerate depends on when we are performing this enumeration. In sciences, properties which have an explanatory role are central, and it should be the same when defining possibilities and novelties. In physics, it is usual for relevant properties to be generic. For example, the force exerted on an object $A$ in free fall is $m\stackrel{⃗}{g}$ where $\stackrel{⃗}{g}$ is the gravity field, and $m$ is the mass of $A$. The analysis of free fall does not depend on the individual values of $m$ — being real numbers most individual masses are ineffable — or on the nature of the object. Instead all possible $m$ lead to the same analysis of trajectories. The genericity of the analysis is possible because physics is not about quantities. Instead, physics is based on (generic) relations between quantities. In general, dynamical systems are analyzed for generic values of their parameters, with possible punctual bifurcation points corresponding to qualitative changes in the dynamics. Similarly, physicists analyze generic initial conditions. Sets of initial conditions lead to the same qualitative dynamics, and these sets are called the basin of attraction of this qualitative dynamics. These qualitative changes are another example of our discussion in the previous section: mathematics can treat a finite number of individual cases and an infinite number of generic cases. In some situations, there is a finite number of bifurcations, and a discussion of every individual case is possible. In other situations, the number of bifurcations is infinite, but their process is generic which makes an exhaustive analysis possible. We will expand on this point as it illustrates the plasticity of reasoning in terms of genericity and ultimately the importance of this concept. We will consider the paradigmatic example of the period doubling scenario in the case of the logistic map. These dynamics depend on a parameter $r$ with values between 1 and 4. For $r$ between 1 and 3, the trajectory tends towards a single point ( $\left(r-1\right)∕r$). For $r$ between 3 and $1+\sqrt{6}$, the dynamics tends to oscillate between two different values. For now, we have two sets of generic situations, which can be analyzed individually and which correspond to qualitatively different behaviors. Now the situation becomes more complicated as $r$ tends towards ${r}_{c}\approx 3.56\ dots \phantom{\rule{0.3em}{0ex}}$ since the system undergoes a cascade of bifurcations where each bifurcation corresponds to a doubling of the period of the trajectories. There is an infinite number of bifurcation when we increase $r$ till ${r}_{c}$. We cannot analyze an infinite number of situations individually, but physicists and mathematicians point out that all these bifurcations are actually more of the same: they correspond to a doubling of the period. To discuss the situation, they analyze the generic process of period doubling when $r$ becomes close to ${r}_{c}$, and this leads to relevant predictions (Feigenbaum, 1980). The situation is analogous to the analysis of fractals: fractals look heterogeneous with qualitative patterns at all scales, but all scales are symmetric and a generic analysis is then possible. This discussion applies mutadis mutandis to probabilistic models. In these models, sets of possibilities are endowed with probabilistic weights which are used to analyze the intended phenomena. Sets of possibilities with probability $0$ are considered irrelevant, they are not forbidden but never happen in practice and thus do not play a theoretical role. As a result, discussing their specific properties is not required. Mathematicians call ”almost sure” the properties which are met in all cases except for a subset of probability $0$. Being almost sure is a form of genericity which aims to disregard irrelevant qualitative cases. For example, in statistical mechanics, the probability of a configuration with an entropy below the maximum is $0$. Thus, only some macroscopic possibilities are relevant.^[5] Let us emphasize this point. In statistical mechanics, all microscopic states are possible. These possibility spaces include all kinds of patterns that are remarkable for a human observer. For example, some molecules may happen to be aligned in a gas at a given time. These patterns may even include letters and words. However, these patterns are purely accidental, they are not sustained, and they do not play a particular causal role. Instead of discussing these patterns, the theory focuses on generic properties. These generic properties are robust and enable physicists to ultimately restrict the discussion to relatively simple equations such as the ones of thermodynamics. In general, physical models and theories restrict the discussion to generic properties and do not have to examine the specific properties that some individual microscopic states display. By contrast, we think that the relevant properties of symphonies are not generic or at least they are not generic properties of the set ${M}_{s}$ of music scores. Indeed, all music scores do not make sense as symphonies. There are attempts to consider generic properties of a given musical style (usually the style of a specific author or interpret) and then to generate new music scores or soundtracks verifying these generic properties (Pachet & Roy, 2014; Papadopoulos et al., 2016). These attempts use machine learning in combination with a few generic criteria that musical patterns are assumed to follow. The aim is to obtain a generic generator of acceptable soundtracks and thus to define and explore sets that have generic regularities that are assumed to be musically relevant. However, these generic regularities are not written in the algorithm, and they are not pre-stated (except for the generic criteria mentioned above). Instead, they are extracted by machine learning. Thus, the individual works of the musician rigorously preexist the definition of the generic properties extracted from machine learning and not the other way around. It follows that these generic sets are specific to the past of an individual composer or style and are subordinated to it. Moreover, assuming that there can be genuine qualitative novelties that result from probing these sets, there is no guaranty that they would be musically interesting. Let us now conclude on this part. In physics (and epistemologically similar modeling approaches), the relevant properties are usually generic properties. Physicists understand systems whose states are in enormous sets thanks to these generic properties and not by the specific properties of the individual possibilities (the elements of these sets). In the case of music, the set of possible music scores differs from the set of possible symphonies. For “the musician [to have] a precise and complete idea of the symphony”, she needs at least to consider a possible music score as a possible symphony. In other words, our definitions should enable us to discriminate acceptable symphonies from music scores without a musical relevance. A fair generic description of acceptable symphonies would require a generic understanding of how symphonies work in the sense of having a musical meaning where the various possibilities would be understood collectively. There are two issues in reaching such a generic understanding of symphonies. First, musical meaning is not an intrinsic property of a sequence of musical signs. Instead, musical meaning takes place in a historical, cultural context. For example, Erik Satie’s or Moondog’s work would probably not have made much sense for Bach. As a result, musical meaning is not just a function of the music score but also depends on the cultural context. Even though computers can transform music scores into sound automatically, a human interpret needs to be able to make sense of the music score. The situation is extremely similar to the reading out loud of a text which is very different when the text makes sense to the reader and when it does not. Second, musical meaning depends on the specific arrangement of a musical piece and its many interwoven patterns (Mazzola, 2012). There are different qualitative patterns in music scores that may or may not make musical sense for readers. These patterns and their possible recurrences are specific properties of an individual symphony. The precise idea of a symphony includes these patterns and the meaning that they may evoke. 2.3. Novelty in chaotic dynamical systems In this section, we will consider a dynamical system which should help to understand why the concepts of generic versus specific properties are necessary to avoid misconceptions about the concept of possibility. Let us consider an initial condition $0\le x<10$ and its decimal expansion $x={x}_{0}.{x}_{1}{x}_{2}{x}_{3}\dots \phantom{\rule{0.3em}{0ex}}$. Then, we can define a dynamical system ${u} _{0}=x={x}_{0}.{x}_{1}{x}_{2}{x}_{3}\dots \phantom{\rule{0.3em}{0ex}}$, ${u}_{1}={x}_{1}.{x}_{2}{x}_{3}\dots \phantom{\rule{0.3em}{0ex}}$, ${u}_{2}={x}_{2}.{x}_{3}\dots \phantom{\rule{0.3em}{0ex}}$, This dynamic is chaotic in the sense that more and more precise aspects of the initial conditions dominate the trajectory. For example, the integer part of ${u}_{10}$ is the 10^th decimal of the initial condition. Now, instead of using the decimal expansion, it is possible to use base 2 to obtain a binary representation of $x$ or instead to use base 27 and letters (and space) as symbols for digits. Let us do the latter and use the initial condition $x=w.hen\phantom{\rule{1em}{0ex}}a\phantom{\rule{1em}{0ex}}musician\phantom{\rule{1em}{0ex}}writes\phantom{\rule{1em}{0ex}}a\phantom{\rule {1em}{0ex}}symphony\dots \phantom{\rule{0.3em}{0ex}}$ (with the rest of the initial condition following our translated quotation of Bergson). Then the integer part of ${u}_{n}$ spans the text of Does the dynamics ${u}_{n}$ tell us something about Bergson’s text? Yes, in the sense that the text is a sequence of letters. However, this is valid for any text. Actually, another initial condition would have generated one of Shakespeare’s play. Sadly, most of the initial conditions, that is to say, most real numbers, lead to texts that are meaningless for humans. Chaotic dynamical systems may have rich dynamics, but, in a precise sense, they are not creating something new. Mathematically, the richness of their patterns stems from the fact that they are digging deeper and deeper into their initial conditions. Their analysis focuses on the way in which the dynamics are transforming the initial conditions, not the specific pattern stemming from a given initial condition. It is not a generic property of the dynamics ${u}_{n}$ to generate a meaningful text. The mathematical analysis of the generic properties of ${u}_{n}$ does not involve the meaning of Bergson’s text. In other terms, the odds to find initial conditions of ${u}_{n}$ that generate Bergson’s text are almost null. It is necessary to have written this text beforehand to choose initial conditions leading ${u}_{n}$ to generate it. A similar issue appears when Dawkins designs a toy computational model of evolution to show that variation and selection can lead to a specific result: the string $A$. In this model, a population of strings evolves by random variations and selection and converges towards $A$. Fitness is defined by the proximity to $A$. Then, $A$ is specified before the dynamics and cannot genuinely be said to emerge from it. Dawkins fully acknowledges this limitation: “phrases were judged according to the criterion of resemblance to a distant ideal target [...]. Life isn’t like that” (Dawkins, 1986, p. 60). This kind of problems does not disappear easily. For example, (Adams et al., 2017) perform very interesting simulations of dynamical systems to study the appearance of innovations. However, when they allow dynamical rules to change, they change among predefined possibilities. 2.4. Different notions of possibilities We propose to depart sharply from a naive set-theoretic view of possibilities, where the definition of a set would define each of its elements as possibilities and each of its subsets would be valid sets of possibilities. Instead, we advocate a theoretical notion of possibilities, which we also call explicit possibilities, where possibilities are defined if and only if we define explicitly also how they “work”, that is to say how they take place in an appropriate theoretical framework where they have meaning. Our notion of possibility is based on what the theoretician can effectively express with her definitions. Now, we can explain the conceptual articulation between the set of possible music scores ${M}_{s}$ and the possible symphonies. The set of possible music scores has generic properties that are relevant for writing, writing software or printing. These operations are its natural theoretical context. Music scores are used to communicate symphonies as a writing system, which means that they are typically sufficient for constraining the receiver for her to interpret some music scores as symphonies. However, the theoretical construct used to define ${M}_{s}$ is not sufficient for a sound theoretical understanding of symphonies themselves. This limitation is not just due to ${M}_{s}$ lacking generic constructs. Instead, the meaning of musical possibilities lies at the individual level which means that musical sense is not a generic property. Thus, ${M}_{s}$ may only be seen as a set of pre-possibilities for symphonies and not as a set of explicit possibilities. We call ”pre-possibilities” sets whose meaning is not entirely explicit for the intended phenomena. Pre-possibilities are usually possibilities defined in an initial theoretical context^[6] that are used in another context where they do not meet the criteria of explicit possibilities. Elements of a set may have the status of pre-possibility for epistemic or objective reasons. Epistemic reasons correspond to a lack of knowledge, typically when the generic definition of pre-possibilities can be completed with other generic constructs that endow them with a satisfying theoretical structure for the intended phenomena. For example, a set of pre-possibilities can be endowed with probability distributions. However, the status of pre-possibilities can also have a more objective nature. When specific properties of individual elements are what matters, then we cannot transform a generic set of pre-possibilities into explicit possibilities. Then, only some pre-possibilities can be completed to be explicit possibilities. In this case, the notion of new possibilities is irreducible at the theoretical level. In this second case, a set of pre-possibilities is fragile theoretically since this set is not defined by an adequate theoretical structure: it is irreducibly between two frameworks. For example, music scores are fundamentally between music and the concrete activity of writing. Music scores are a limit case since they seem sufficient to represent symphonies and to communicate them. Let us illustrate the theoretical fragility of this set. The relation between music scores and symphonies is not as simple as an automatic mapping. Between the music score and the symphonies played by an orchestra, there are interpretations and musical phrasing which lead to many versions corresponding to the same music score. Moreover, if one defines generic music scores strictly, then musical works will overflow this definition. For example, frequent changes of time signatures were foreign to classical music. Musical notations themselves are adapted to different styles and may be seen as open-ended. A musical notation is not a fundamental invariant of music. Thus, the notations do not define musical possibilities and composing symphonies is not an exploration of the space of music scores. Instead, musical notations enabled the practice of classical music and reciprocally are determined by the historical changes in the practice of music. 2.5. Conclusion on the paradox The core of Bergson’s argument is the identity of having a clear idea of a possible symphony and actually composing it. If Alice thinks about the possibility of a symphony and has an exhaustive account of this possible symphony, this symphony exists and Alice is its author. The issue that we have raised is that the set of possible music scores ${M}_{s}$ is mathematically well-defined and thus is a candidate for stating that all symphonies are defined as possibilities before they are conceived. However, the generic definition of ${M}_{s}$ is not equivalent to defining possible symphonies ahead of conceiving them because criteria to make musical sense explicit are not embedded in this description. They are not embedded because musicality is not attached to generic, collective properties of the elements of ${M}_{s}$. Instead, musical meaning corresponds to specific, individual properties of some elements of ${M}_{s}$. Thus, we ultimately agree with Bergson: the possibility of a symphony does not preexist this symphony. 3. Novelty and possibility spaces in biology We will now apply our concepts to biology. In a first section, we discuss a few examples of sets that are sometimes considered as possibility spaces when they should be considered as pre-possibilities. Then, we discuss another approach to novelty that stems from biophysical models. This notion is weaker than the notion of new possibilities but leads us to useful considerations proper to natural sciences. We finally elaborate on specificities of biological novelties and discuss possible objections. 3.1. Application to biology Some biologists and physicists consider that, in biology, mathematical spaces play a similar role than the mathematical spaces of physics. We will study several cases and argue that their role is closer to the one that music scores play for symphonies. 3.1.1. The space of possible dna sequences As a first example, let us consider complete genetic determinism, that is to say, the assumption that dna sequences entirely determine phenotypes. This viewpoint is no longer dominant since the roles of the environment (Gilbert & Epel, 2009) and of random factors (Paldi, 2003; Heams, 2014; Montévil et al., 2016) are increasingly becoming acknowledged. Nevertheless, it is interesting to confront our notion of possibility with this frame since its determinism seems incompatible with a strong notion of novelty. The space of possible dna sequences, ${D}_{s}$, is the set of finite sequences of the four symbols A, T, G, C. Under the assumption of genetic determinism, ${D}_{s}$ is sufficient to determine phenotypes. In the line of our former discussion, there are several possibilities for the relation between dna sequences and phenotypes. 1. This relation is generic and is conceptually similar to the situation in statistical mechanics where generic properties of microstates are causally relevant. In this context, it is fair to say that all possibilities are predefined. 2. The relation between dna sequences and phenotypes is similar to the relation between music scores and symphonies. It depends on individual sequences. Let us recall that the space of music scores is appropriate for music writing software or printing. Similarly, the generic properties of the space of dna sequences would be appropriate to understand methods like sequencing or phenomena like dna replication. However, it would not be sufficient to state explicitly possible phenotypes. As such, ${D}_{s}$ would be a set of pre-possibilities. The relation between dna and phenotypes has never been described explicitly by a generic causal structure. The genetic code is a partial bridge between the two. However, this generic relation between mrna sequences and amino acids sequences is not sufficient to determine the proteome or even proteins. The relation between individual sequences and protein shapes has a complex structure (Stadler et al., 2001). Moreover, determinants of this relation include alternative splicing, epigenetic effects, non-coding rna and the proteome dynamics itself which all push the relation between dna sequences and the proteome away from a straightforward application of the genetic code (David et al., 2013; Huang, 2009). These phenomena tend to make gene expression contextual and lead to consider that inheritance is the locus of a coupling between physiology and evolution (Danchin & Pocheville, 2014). At the evolutionary level, there is a fundamental reason for the lack of a generic relation between dna and phenotypes: this relation is not a theoretical invariant and, a priori, nothing prevents it from changing in evolution — except when changes lead to non-viable variants. Current life forms include diverse accumulations of such changes. In conclusion, the space of dna sequences ${D}_{s}$ is a theoretical construct that is not sufficient to discuss the phenotypes, starting with their viability. Then, the status of ${D}_{s}$ w.r. to phenotypes is similar to the status of the space of possible music scores ${M}_{s}$ w.r. to symphonies. The latter is not sufficient to assess whether music scores are symphonies or not. The functioning of organisms is not a generic property that can be discussed on the basis of the possible dna sequences alone; it includes specificities proper to different phyla and even proper to some individuals. ${D}_{s}$ defines only pre-possibilities for phenotypes. As a result, even in the framework of complete genetic determinism, it seems necessary to consider that new possibilities appear in evolution.^[7] 3.1.2. Networks and shapes The same reasoning applies to other mathematical spaces used to describe living phenomena, such as networks of chemical interactions or spaces of possible biological forms. An important extension of molecular biology discusses networks of interacting molecules, where interactions are of a chemical nature. This extension defines the field of molecular systems biology. However, biological organizations do not correspond to generic properties of these spaces (possible networks endowed with one structure or another). Network structures are not exhaustive since relevant properties are excluded such as anatomical structures or physical forces. Remarkable evolutionary novelties at the molecular level such as molecular motors (Chowdhury, 2013), microtubules ( Karsenti, 2008), chromatin (Cortini et al., 2016) or fibers ( Barnes et al., 2014) are excluded from the discussion because their causal role does not correspond to generic chemical reactions. All these molecules are examples of molecules with specific properties that appeared in evolution. Chemical networks with particular properties such as autocatalytic sets aim to capture a fundamental property of cells ( Hordijk & Steel, 2017) but the study of their generic properties does not capture the specific properties emerging in evolution. Similarly, physicists interested in biological morphogenesis might want to consider the mathematical space of possible forms in the usual three-dimensional space or the possible positions of a finite number of cells in space. These shape spaces might seem all-encompassing in that they are typically used to describe biological shapes like music scores are used to describe symphonies. However, they are insufficient to describe many models of morphogenesis which typically involve chemicals (morphogens), fibers, mechanical forces, etc. These elements are required to make any biologically meaningful analysis in these spaces and are the historical outcomes of evolution. As a result, relevant properties are not generic features of these spaces which we relate to the critic of D’Arcy Thompson by Gould (2002, chap. 11). These spaces are interesting as pre-possibilities, like music scores for symphonies, but they are not appropriate to prestate explicitly all possible organisms. 3.1.3. Biological possibilities and classical mechanics We now discuss a more philosophical way to criticize the notion of new possibilities and diachronic emergence. The idea is to propose a putative definitive space of possibilities by relying on physics, usually classical mechanics, on the basis of a physicalist and reductionist view of biology. In classical mechanics, a system such as an organism or the biosphere should follow a specific trajectory that follows from its state at a given time point, where the state is defined as the positions and momenta of the particles involved. We will call $S$ the space generated by these quantities. Note that this reasoning is not entirely sound from a physical point of view but it is common and interesting.^[8] Let us now analyze this situation precisely. The fundamental principle of classical mechanics states that, for each particle, mass times acceleration equals the external forces exerted on this particle. Therefore, we have a huge dynamical system $\varphi$ which is written on the basis of generic forces. Determinism follows from the application of the Cauchy-Lipschitz theorem which ensures that this kind of dynamical system has a unique solution for a given state at a time ${t}_{0}$. In short, determinism is a generic property of such dynamical systems. There are a few other generic properties of these systems such as the conservation of energy, of momentum, Etc. However, it does not follow that this generic construct would explicitly define the possibilities of biological evolution or biological organisms. Actually, the sets involved have the same cardinality as real numbers, and this cardinality implies that we cannot define individually all their elements, as discussed in section 2.1. Then, it is not sound to claim that biological possibilities can be derived from physical ones without a very precise discussion. Nevertheless, we cannot conclude immediately that this conclusion is wrong. We have to discuss whether individual properties of the elements of the state space $S$ are theoretically relevant for biology. Since the system is deterministic, these individual properties ultimately correspond to the properties of the initial conditions w.r. to $\varphi$. The question is then to assess whether generic properties of $\varphi$ are sufficient to understand biological possibilities, or on the opposite, if biology is mostly about specific properties of the initial conditions of $\varphi$. In this deterministic frame, all contingent events come down to specific properties of initial conditions. Therefore, all arguments which state that such events are decisive for biological phenomena (for example, Beatty, 1995; Gould, 2002; Montévil et al., 2016) can be translated into arguments to state that biology depends on specific properties of these initial conditions. As a result, we do not think that this system defines explicitly biological possibilities. We provide further arguments in this sense in section 3.3. We have examined a few examples of putative all-encompassing sets and shown that they are compatible with our notion of new possibility. Like the situation in music and unlike the one in statistical mechanics, these spaces do not provide an explicit account of biological possibilities. 3.2. Novelty and physical approach to self-organization To understand development, several biologists and physicists use the concepts of phase transitions, physical morphogenesis or the associated concept of physical self-organization (Moore, 2012; Zhu et al., 2010; Saetzler et al., 2011; Forgacs & Newman, 2005). Turing’s model of morphogenesis (Turing, 1952) typically falls in this category. Other examples are phase transitions such as the formation of graphite mentioned in the introduction, Bénard cells or flames. The corresponding models focus on the formation of a qualitative “new” structure. These concepts have different theoretical backgrounds (Anderson & Stein, 1985) but their mathematical approaches to novelty are sufficiently close for us to discuss them together. The novelty described by these models constitutes a paradigmatic case of diachronic emergence and typically corresponds to a symmetry breaking (Anderson, 1972; Longo & Montévil, 2014). In a nutshell, symmetries are transformations which do not change an intended aspect of an object. This concept applies both to the usual three-dimensional space and more abstract spaces. In a symmetry breaking, the whole description of the object, the state and the equations, initially follows a symmetry. For example, the system may be symmetric for all rotations about a point, which means that all directions are equivalent. After the symmetry breaking, one or several directions are no longer equivalent to the others. Since both the initial state and equations are symmetric, the “choice” of specific directions does not derive from the initial description and is random.^[9] Moreover, these new directions are associated with specific properties and thus correspond to a new qualitative behavior. As a result, in these frameworks, novelty stems from the randomness associated with the symmetry breaking. Nevertheless, several related reasons restrict the strength of this concept of First, these phenomena are spontaneous and may be repeated ad libidum. In models, physical self-organization is generic. It is usually sufficient to change the value of a parameter for the new structure or dynamics to appear. For example, it is sufficient to lower the temperature of liquid water to trigger the formation of ice. There is randomness in these phenomena but this randomness only corresponds to the way in which the symmetry is broken, or in other words the ”choice” of one direction or another. The qualitative aspect of the pattern are always the same (and actually all possible outcomes are equivalent). As a result, contingency and a fortiori unpredictability do not impact the qualitative outcome. This property weakens the notion of novelty associated with these processes (Stephan, 1999). Second, physicists mathematize these phenomena on the basis of invariance. Equations are crucial to understand these systems and these equations do not change during the formation of new structures, at least at the microscopic level. For example, dynamical systems follow the same rules during the dynamics. Similarly, the same equation describes the partition function before and after a phase transition. At the same time, the macroscopic level includes a variable called the order parameter which goes from being uniformly zero to having a non-trivial value. Thus, we can say that the macroscopic equation changes. Statistical mechanics is intrinsically ambivalent since the microscopic equation does not change while the macroscopic one does: there is a duality between the two levels. Nevertheless, this means that, at the microscopic level, the causal structure remains the same before and after the change. As a result, the new patterns stem from a preexisting mathematical structure. The spontaneous nature of these novelties follows from the permanence of these underlying equations, and this permanence justifies that the changes are generic: once a parameter reaches a value given by the equations, the novelty has to appear. Third, the permanence of the equations corresponds to the permanence of a causal structure. For example, a phase transition is the transition of local fluctuations to system-wide effects. The novelty follows from a causal structure that is already relevant and already actual. We propose then to distinguish virtual possibilities from actual possibilities. Virtual possibilities do not follow from the causal relations required to understand the initial situation. For example, Adams et al. (2017) uses virtual possibilities by writing a dynamical system which switches between unrelated rules. By contrast, actual possibilities are possibilities which may be qualitatively different but are nevertheless entailed by the relations between the parts of a system before the possibility becomes actualized. In physical models, the novelty is not just virtual in the initial situation; its formal ingredients are already there. Therefore, the new patterns were actual possibilities before their Fourth, in these frameworks, the formation of a new structure is generally punctual: below a given value of the control parameter, the new structure does not appear, and it does appear above this value. The only middle ground corresponds to a point. Examples include phase transitions or bifurcation points for dynamical systems. It is simple to understand this when the formation of the structure corresponds to a symmetry breaking. Let us recall that a symmetry breaking is a transition from a symmetric situation to a situation with fewer symmetries such as the transition from being symmetric by rotation to having special directions. Since having a given set of symmetries is a property that is either met or not, the transition from the first to the second is an all or nothing phenomenon. Because these transitions are all or nothing, they cannot be decomposed and thus are elementary. Being elementary, they are easier to trigger than more complex novelties. In short, the punctual nature of the appearance of these new structures corresponds to their elementary nature and contributes to explaining why such changes occur spontaneously. Fifth, in these models, the set of qualitatively different macroscopic patterns is usually very small. The examples in the beginning of this discussion lead to a finite and actually a small number of such possibilities which means that they can all be predicted, provided that the equations describing them remain valid. By contrast, following our discussion in section 2, the notion of new possibilities becomes irreducible when all relevant qualitative cases cannot be analyzed. Last, these systems are mostly ahistorical, and this is crucial for the concept of novelty. It does not matter whether a volume of liquid water used to be in a solid state in the past or if it is the first time that this specific volume of water transforms into ice. The transformation from liquid to ice is the same independently of whether the past of the system includes this state or not. In this sense, the exploration of macroscopic possibilities has no permanent consequences for the system beyond the permanence of the realization of these possibilities. By contrast, novelties in biology can have consequences that are not limited to their preservation. The appearance of feathers in some dinosaurs has led to changes in the organizations of these dinosaurs over evolutionary time scales which means that the impact of this appearance is not limited to the permanence of the novelty. Similarly, the specific way in which a two-legged goat learned to walk has led to anatomical accommodations which facilitate this behavior (West-Eberhard, 2003). Biological novelties typically lead to changes beyond just their appearance. Let us conclude on the concept of novelty in these frameworks. We do not aim to criticize the idea that these models correspond to a genuine and objective notion of novelty. However, this notion is a weak one for all the reasons above. In particular, it is insufficient to justify and understand the concept of new possibilities or the historical nature of biological phenomena. These physical novelties correspond to elementary, punctual and generic processes. 3.3. What matters for organisms 3.3.1. The importance of functions In the previous discussion, we have left implicit what matters in the causal structure of organisms. We think that the proper understanding of organisms or species has to include the many functions that contribute to their organization, survival, and reproduction. As a result, biology has a special interest in parts that are functional and in what they do. This reasoning is at least partially in line with Mayr’s statement quoted in introduction ( Mayr, 1963). It follows that generic spaces which do not articulate these aspects cannot provide an explicit account of biological possibilities. They can play an explanatory role, but only as pre-possibilities. Our perspective, here, differs from phylogeny where structures and more precisely synapomorphies (shared novelties) are used to classify organisms. This methodology has been chosen because phylogeny aims to assess genealogical relationships. In our terminology, the best properties for phylogeny are the ones that are sufficiently specific to be unlikely to appear several times in evolution and at the same time generic and stable enough to be shared by genealogically close individuals, up to possible variations. Concerns in discussing functions stem from the idea that similar functions can mold structures towards the same optimal shape so that genericity could be obtained without common descent. However, we can point out that the same applies to elementary morphogenetic processes. For example, Thom’s catastrophe theory provides a systematic, ahistorical classification of at least some of these processes. In the case of morphogenesis, Wagner & Lynch (2010) uses gene networks called character identity networks precisely to ensure the specificity of the novelties. Ultimately what matters is the specificity of the novelty in combination with its theoretical relevance. We will now provide a further justification of the importance of biological functions. As a thought experiment, let us consider entirely silent point mutations that are subject to drift, that is to say, fixation for purely statistical reasons. Assuming there is no lateral transfer and that the sequences are very long, the proximity of two sequences have vanishingly small chances to be obtained without genealogical proximity. This is due to the huge number of possible sequences which prevents ergodicity in practice, that is, the exploration of the full possibility space, see Longo et al. ( 2012). The uniqueness of the outcome is useful to reconstruct genealogies, but it does not mean that the theoretically relevant causal structure is specific. On the opposite, this situation is perfectly well described by the generic process of drift and equiprobable mutations. The causal analysis of the situation is provided by the generic analysis of such a process. By contrast, if there is a feedback between the specificity of a situation and the causal analysis, then there is a strong historicity that prevents the specific situation to be subsumed by a generic analysis. In biology, we then posit that historicity stems from the coupling between specificity and functionality. In the following discussion, we will focus on novelties which are associated with biological functions. Biological functions may be interpreted in different ways depending on the level of description and the theoretical perspective of interest. We discuss two main philosophical accounts of biological functions, and we consider that they are not mutually exclusive. The first framework is called the selective effect account of functions. In this account, heritable traits are functional when, in a nutshell, their consequences have led them to be positively selected ( Godfrey-Smith, 1994; Millikan, 1989; Neander, 1991; Garson, 2016). The second framework is called organizational and states that functionality stems from being included in the circular causal structure that characterizes organisms (Mossio et al., 2009; Montévil & Mossio, 2015). This perspective is in line with former works of Varela et al. (1974), Rosen (1991) and Kauffman (2002). In the framework of closure of constraints, functional parts are called functional “constraints”. A constraint is defined by its causal role with respect to a process and its stability at the time scale of this process: it is not consumed nor destroyed by the process. Let us call $\mathsc{C}$ the set of constraints that are part of an organism. For a constraint $c$ to be in $\mathsc{C}$, $c$ needs to act on at least a process generating another element of $\mathsc{C}$ and to depend on at least another element of $\mathsc{C}$. In a nutshell, constraints of an organism are collectively mutually dependent ( Montévil & Mossio, 2015; Mossio et al., 2016). Since the existence of these constraints depends on their consequences via the other constraints of $\mathsc{C}$, it is relevant to interpret them as being functional. Before elaborating on the consequences of these frameworks on novelties, let us consider the reciprocal question and remark that novelties make it possible to discuss differences between the selective effect and the organizational accounts of functions. Novelties that are restricted to an individual such as the two-legged goat mentioned in section 3.2 cannot be etiological functions since they are not heritable. However, they certainly can be functional in the organizational sense. This example implies that the organizational notion cannot be reduced to the etiological one. 3.3.2. Novelties and functions In both accounts of functions, relevant biological novelties are not just the appearance of patterns. In the selective effect account, defining a function requires discussing its differential effect on the life cycle in a population. In the organizational account, the concept of function directly involves the relationship between the part of interest, associated parts, and ultimately the rest of the organism. In both accounts, the relationships between the part studied and a larger whole are fundamental, and this applies to biological novelties inasmuch as we assume that they are functional. As mentioned in the previous section, novelties in the sense of a single symmetry breaking are limited by the fact that they have no lasting impact beyond the maintaining of the novelty itself (that is the maintaining of the symmetry breaking). It is not the case for functional novelties. In the organizational account, biologically relevant novelties are constraints that become a part of the organization. By definition of an organization, a relevant novelty i) contributes to the maintaining of at least another constraint of the organization and ii) is maintained by processes which are canalized by at least another constraint of the organization. Then, by contrast with the physical novelties in the previous section, the appearance of a biological novelty is generally not punctual. The relations leading to i) and ii) do not necessarily appear simultaneously. The appearance of a biological novelty is then a composite event that corresponds to the integration of the novelty to an organization. In this framework, the focus on functions does not mean that just the role of a part is considered. Instead, it means that both criteria i) and ii) are met. The constraint plays a role and this role is performed in a specific manner by generating other constraints. However, this is not sufficient, and it is also necessary to make explicit its dependence on other constraints. The taking into account of these relations means that the organizational framework reduces the gap between purely functional and structural perspectives on novelties. Moreover, the integration of a constraint to an organization is not restricted to the minimum requirements i) and ii). Instead, this integration may become more intricate over time by the articulation with various other, possibly new constraints. The point is that the theoretical description of functions is no longer elementary in relational accounts such as closure of constraints. For a functional constraint to be an explicit possibility, it is required to define it and to show that i) and ii) are met and lead the constraint to be a part of closure. We illustrate this by discussing a few cases. • Let us consider a part that can be described with the (bio)physical concepts and models discussed in the previous section. Then, changing the value of a parameter leads to a new constraint. At the level of the part, the constraint appears as the generic result of a causal structure that is already actual, it corresponds to an actual possibility. However, these models are not self-sufficient since the inscription of the new constraint in organizations is not made explicit. More precisely, condition ii) can be met ”for free” in some cases since the constraints described by the models are already actual and are presumably maintained by other constraints. Now, if the relation between the new constraint and the organism can be deduced from the current state of affairs, then this new constraint was an actual biological possibility before it If not, the new constraint is an actual biological pre-possibility. In this case, the novelty does not stem just from the new constraint per se but instead from its inscription in the organism. Its possible role, i), is not predefined. • Let us assume that a functional role is associated with a part. Then, this role may enable us to define a specific pre-possibility as biologically relevant. Several biophysical models uses this argument (for example, Lesne & Victor, 2006; West et al., 1999). For example, the specific situation may be optimal w.r. to this role or have functionally remarkable qualitative features. Then the new constraint appears as a specific pre-possibility in the description of the part, but it can be seen as a generic outcome when taking into account its functionality. In this case, it is condition i) which tends to be met directly. But it is not always the case since the specific situation may require a reorganization of the way the role of the functional part is performed. Moreover, condition ii) is not met a priori since this situation is not generic in the description of the part and thus requires an explanation. For example, being at a bifurcation point requires the addition of an entirely different regulatory dynamics (Camalet et al., 2000). • Now let us consider one or several new constraints that were not actual possibilities before they appear. In this situation, the novelty is a virtual biological possibility if its articulation is also defined and a virtual biological pre-possibility if it is not. For example, Adams et al. (2017) define a computational dynamics with changes of dynamical rules along the dynamics. The mathematical structure of these rules cannot be deduced from each other. As a result, when one rule is actual the alternative is virtual: its possibility is not deducible from the current state of affairs. If the new constraint is elementary, it corresponds to a next adjacent possible in Kauffman’s vocabulary (Kauffman, 1996; Longo et al., 2012). In the two first cases, the new constraint itself can be defined as a generic outcome, either as a result of a morphogenetic process or for functional reasons. However, this is not usually sufficient to describe the new constraint as a possible part of the organization since its articulation with the organization is not made explicit. This articulation may even be impossible or require many other changes. The generic definitions of the constraints do not show explicitly that these generic constraints can be articulated to the organization and how. As a result, they are only pre-possibilities. Then, the appearance of the new functional constraint corresponds to a new possibility. In the last case, the new constraint is neither an actual pre-possibility at the level of the part nor generic from a purely functional point of view. This constraint is not deduced from the causal structure of the initial organization. As a result, its description must have another origin. In the example considered (Adams et al., 2017), the new rules were postulated as a way to perform simulations. In a more biological perspective, it is possible to define virtual possibilities by analogy with other phyla. In any cases, we consider that virtual possibilities are not genuine possibilities because they are not actual possibilities: they do not stem from the relations needed to understand the current state of affairs. Should they get actualized, then they are new At this point, we have not argued whether the status of pre-possibility is objective or epistemic as defined in section 2.4. In general, we define possibilities as actual, generic possibilities in the initial situation at the level of organizations. The latter implies that they meet conditions i) and ii). This objective, positive notion of possibility allows distinguishing artificial constructs such as virtual possibilities from genuine possibilities. We will now argue that there are objective new biological possibilities. We mentioned in the preceding section, point three, that physical models of morphogenesis are based on a causal structure that is already actual and not merely virtual. In biology, this perspective is not valid in general. For example, let us consider two novelties, where the second novelty requires the first novelty not only to appear but also to acquire a biological meaning. In other words, the emergence of the first novelty is a necessary ingredient for the second novelty to be able to play a functional role. Then, the causal structure of the second novelty is clearly not involved in the causal structure of the initial situation. For example, articulated jaws enabled teeth such as molars which can crush food. However, crushing food with the mouth was and still is not an actual possibility at all for Chordates without articulated jaws. This difference between physical morphogenesis and biology should not come as a surprise, even from a reductionist point of view. Physical morphogenesis is a framework for systems which are made from predefined components and boundary conditions and aim to derive the appearance of a new structure from these already existing interactions. Biological changes are not bound by these limitations. A part may be involved in the appearance of other new possibilities, and this form of causation has been called enablement (Longo et al., 2012; Longo & Montévil, 2013b). One of the core processes of evolution is the iterative appearance of novelties on the basis of already existing organizations. This iterative process is central to the open-endedness of biological evolution. Then, a novelty may become deeply integrated into certain biological organizations, making its complete disappearance unlikely to be viable. For example, thyroid hormones appeared and are shared among vertebrates. Some of their effects are largely conserved (Tohme et al., 2012) but others are highly specific such as their role in many specific metamorphosis processes (Holzer & Laudet, 2015). To sum up, biological novelties are not elementary events. Instead, they involve the integration to an organization, a life cycle and an environment and this integration typically involves a sequence of changes. It justifies that biological novelties are specific even when some aspects of them are generic. As a result, biological changes involve non-generic changes, and we think that the concept of new possibilities is fundamental for biology. 3.4. Responses to possible mathematical objections A possible mathematical argument against the notion of new possibilities in biology is based on dynamical systems. Some dynamics are indeed very rich in the sense that they can generate many patterns. For example, the dynamical system described in section 2.3 can generate all possible strings of characters. However, as discussed in the same section, it is not sufficient for a dynamics to be able to generate a pattern of interest for this dynamics to actually explain this pattern, that is to say for this pattern to be an explicit possibility of this system. If this pattern stems from a specific initial condition, then this pattern is actually more a property of the specific initial condition than of the rule of the dynamics per se. Therefore, it is necessary (from a statistical or metric point of view) to artificially choose the initial condition that leads to this pattern for the pattern to actually appear. Another mathematical counterargument states that we can propose abstract spaces so large that they can accommodate everything that biologists can encounter (for example a space of infinite dimension). This argument differs from the discussion in section 3.1 since this mathematical space is explicitly built without an interpretation (unlike the spatial position of cells, for example). The response remains similar; these spaces are not endowed with mathematical structures such as equations that would make their biological meaning explicit. As such, these spaces do not enable scientists to state biological possibilities, and do not oppose the notion of new possibilities. Last, a possible objection is based on the following operation: after the observation of a new possibility, this new possibility can be added to the initial set of possibilities. There are two possible relations between a set of pre-possibilities $S$ and a situation $a$ which does not correspond to generic properties of $S$. First, $a$ may be an element of $S$. Second, $a$ may be outside $S$. Going from one of these two situations to the other is to a certain extent arbitrary because $S$ can be extended a posteriori by adding new possibilities. However, in both cases, the properties of $a$ remain non-generic in the initial description which means that there is objectivity in describing $a$ as a new possibility even if we accept this retrospective theoretical move. Now, a further counter-argument would be to change the definition of $S$ a posteriori so that the new possibility $a$ becomes generic. This objection requires a precise discussion. Let us call $S\ left(t\right)$ the possibility space at time $t$. If the observer witnesses a new possibility at time ${t}^{\prime }>t$, then the possibility space $S\left({t}^{\prime }\right)$ is larger than $S\ left(t\right)$. The operation that we have described in the previous paragraph is retrospectively to consider ${S}_{{t}^{\prime }}\left(t\right)$, the space of possibility at time $t$ on the basis of a novelty that appeared between $t$ and ${t}^{\prime }$. Bergson calls conflating ${S}_{{t}^{\prime }}\left(t\right)$ and $S\left(t\right)$ the retrospective illusion. This illusion may be compared with the situation in usual probabilities. Using the result of a random drawing to describe the initial condition makes it always possible to describe the process as deterministic which is clearly wrong at this level of description. The novelty used to define ${S}_{{t}^{\prime }}\left(t\right)$ by comparison with $S\left(t\right)$ does not come from the actual behavior at $t$ or before, ex hypothesi. The cost of conflating ${S}_{{t}^{\prime }}\left(t\right)$ and $S\left(t\right)$ is that the definition of ${S}_{t}$ depends on ulterior phenomena and becomes a finalist description: this methodology has a bias towards a specific outcome by excluding many alternative changes which are not taken into account. 4. Conclusion In this paper, we discuss the concept of novelty in music and biology, and we justify the notion of new possibilities. Our argument starts with a paradox stemming from the analysis of Bergson’s work. For Bergson, the possibility of a symphony does not preexist to its conception because knowing the possibility of the symphony implies that the symphony exists. However, we point out that the set of possible music scores is mathematically well-defined and this set seems to define all possible symphonies. The confrontation of this two lines of reasoning constitutes a paradox. To solve this paradox, we have shown that defining a set is not the same as defining each of its elements individually. More generally, generic, collective definitions and reasoning cannot be conflated with reasoning on individual elements. In physical models and theories, generic properties of sets of possibilities are the theoretically relevant properties. As a result, physicists can discuss huge possibility spaces where the physical, causal properties of these possibilities are made explicit. By contrast, in music, an examination of individual music scores is ultimately necessary to discuss their musical meaning. We then define explicit possibilities, which are endowed with an explicit discussion of the relevant properties. When sets are infinite, explicit possibilities require the genericity of the relevant properties, except for a finite number of specific cases. By contrast, pre-possibilities are relevant sets which do not meet the criterion of possibilities. When the relevant properties are specific, the status of pre-possibility is not due to a lack of knowledge, and the notion of new possibility is objective. In biology, some mathematical structures are often assumed to be sufficient to represent or even determine organisms. For example, complete genetic determinism assumes that dna sequences are sufficient to determine phenotypes. We show that even this extreme assumption is compatible with the idea of new possibilities because such constructs define pre-possibilities and not possibilities. For example, there is no generic relation between genotypes and phenotypes. Instead, this relation changes in evolution. Organisms have specific features that are not covered by the generic properties of mathematical structures such as sequences of nucleotides. Then, the theoretical roles played by such spaces cannot be compared with the ones in physics, where the causal structure of the possibilities is made explicit. We also discuss the idea that biological situations could be seen from the perspective of classical mechanics, with a fixed possibility space and dynamical rule. We show that there is no reason to think that biological properties are generic properties of such a system which means that biological explicit possibilities are not necessarily derived from the physical ones. This reasoning is based on the weight of historical contingency in the determination of biological processes and provides a strong argument for diachronic emergence in biology. We analyze novelty in some physical models. We use these examples to distinguish virtual possibilities from actual possibilities: the latter are the result of a pre-existing causal structure that is already taking place. In these models, the concept of novelty is objective but weak since these novelties are generic, actual possibilities before they appear. In biology, we argue that a strong notion of novelty is given by situations which are specific before being actualized and are associated with functions. Processes leading to specific outcomes are the ones which are likely to have a unique origin. However, they are not sufficient to argue that new possibilities are relevant. Drift in huge spaces provides a weak form of historicity that can be analyzed by generic equations. By contrast, as discussed in section 3.3.1, if there is a feedback between the specificity of a situation and the causal analysis, then there is a strong historicity that prevents the specific situation to be subsumed by a generic framework. From this perspective, we think that functional novelties have a special role. Then, we discuss the properties of biological novelties and show that they are composite. As a result, even in cases where partial generic predictions can be performed, functional novelties are typically specific. As a consequence, we think that the concept of new possibilities is a fundamental biological concept. I am grateful to Ana Soto, Giuseppe Longo, Carlos Sonnenschein, Marc Godinot, Paul-Antoine Miquel, Arnaud Pocheville and the anonymous reviewers for their critical insights on previous versions of this article. I also would like to thank Guillaume Lecointre for helpful discussions and Jean Lassègue for pointing out the work of Leibniz. 1. Adams, A., Zenil, H., Davies, P., & Walker, S. (2017). Formal definitions of unbounded evolution and innovation reveal universal mechanisms for open-ended evolution in dynamical systems. Scientific Reports, 7. doi: 10.1038/s41598-017-00810-8 . 2. Anderson, P. W. (1972). More is different. Science, 177, 393–396. doi: 10.1126/science.177.4047.393 . 3. Anderson, P. W., & Stein, D. L. (1985). Broken symmetry, emergent properties, dissipative structures, life: Are they related? In E. F. Yates (Ed.), Self-organizing systems: The emergence of order (pp. 445–458). Plenum Press. 4. Barnes, C., Speroni, L., Quinn, K., Montévil, M., Saetzler, K., Bode-Animashaun, G., McKerr, G., Georgakoudi, I., Downes, S., Sonnenschein, C., Howard, V., & Soto, A. (2014). From single cells to tissues: Interactions between the matrix and human breast cells in real time. PLoS ONE, 9, e93325. doi: 10.1371/journal. pone.0093325 . 5. Beatty, J. (1995). The evolutionary contingency thesis. Concepts, theories, and rationality in the biological sciences, (pp. 45–81). 6. Bedau, M. A., McCaskill, J. S., Packard, N. H., Rasmussen, S., Adami, C., Green, D. G., Ikegami, T., Kaneko, K., & Ray, T. S. (2000). Open problems in artificial life. Artificial Life, 6, 363–376. doi: 10.1162/106454600300103683 . 7. Bergson, H. (2014). La pensée et le mouvant. Editions Flammarion. 8. Bich, L., & Bocchi, G. (2012). Emergent processes as generation of discontinuities. In Methods, models, simulations and approaches towards a general theory of change. (pp. 135–146). Singapore: World Scientific. 9. Borges, J. L. (1998). The library of babel. Collected fictions,. 10. Camalet, S., Duke, T., Julicher, F., & Prost, J. (2000). Auditory sensitivity provided by self-tuned critical oscillations of hair cells. Proceedings of the National Academy of Sciences, (pp. 3183–3188). doi: 10.1073/pnas.97.7.3183 . 11. Chowdhury, D. (2013). Stochastic mechano-chemical kinetics of molecular motors: a multidisciplinary enterprise from a physicists perspective. Physics Reports, 529, 1–197. doi: 10.1016/ j.physrep.2013.03.005 . 12. Cortini, R., Barbi, M., Caré, B. R., Lavelle, C., Lesne, A., Mozziconacci, J., & Victor, J.-M. (2016). The physics of epigenetics. Reviews of Modern Physics, 88, 025002. doi: 10.1103/ RevModPhys.88.025002 . 13. Danchin, E., & Pocheville, A. (2014). Inheritance is where physiology meets evolution. The Journal of Physiology, 592, 2307–2317. doi: 10.1113/jphysiol.2014.272096 . 14. David, L., Ben-Harosh, Y., Stolovicki, E., Moore, L. S., Nguyen, M., Tamse, R., Dean, J., Mancera, E., Steinmetz, L. M., & Braun, E. (2013). Multiple genomic changes associated with reorganization of gene regulation and adaptation in yeast. Molecular Biology and Evolution, 30, 1514–1526. doi: 10.1093/ molbev/mst071 . 15. Dawkins, R. (1986). The blind watchmaker: Why the evidence of evolution reveals a universe without design. WW Norton & Company. 16. Feigenbaum, M. J. (1980). The metric universal properties of period doubling bifurcations and the spectrum for a route to turbulence. Annals of the New York Academy of Sciences, 357, 330–336. 17. Forgacs, G., & Newman, S. A. (2005). Biological physics of the developing embryo. Cambridge University Press. 18. Garson, J. (2016). A critical overview of biological functions. Springer. 19. Gilbert, S. F., & Epel, D. (2009). Ecological developmental biology: integrating epigenetics, medicine, and evolution. Sinauer Associates Sunderland. 20. Godfrey-Smith, P. (1994). A modern history theory of functions. Noûs, 28, 344–362. 21. Gould, S. J. (2002). The structure of evolutionary theory. Harvard University Press. 22. Heams, T. (2014). Randomness in biology. Math. Structures in Comp. Sci., special issue, 24. doi: 10.1017/S096012951200076X . 23. Holzer, G., & Laudet, V. (2015). Thyroid hormones: a triple-edged sword for life history transitions. Current Biology, 25, R344–R347. doi: 10.1016/j.cub.2015.02.026 . 24. Hordijk, W., & Steel, M. (2017). Chasing the tail: The emergence of autocatalytic networks. Biosystems, 152, 1 – 10. doi: 10.1016/ j.biosystems.2016.12.002 . 25. Huang, S. (2009). Non-genetic heterogeneity of cells in development: more than just noise. Development, 136, 3853–3862. doi: 10.1242/dev.035139 . 26. Karsenti, E. (2008). Self-organization in cell biology: a brief history. Nature Reviews Molecular Cell Biology, 9, 255–262. doi: 10.1038/nrm2357 . 27. Kauffman, S. (1996). At home in the universe: The search for the laws of self-organization and complexity. Oxford university press. 28. Kauffman, S. (2002). Investigations. Oxford University Press, USA. 29. Kauffman, S. A. (2016). Humanity in a Creative Universe. Oxford University Press. 30. Koppl, R., Kauffman, S., Felin, T., & Longo, G. (2015). Economics for a creative world. Journal of Institutional Economics, 11, 1–31. doi: 10.1017/S1744137414000150 . 31. Koutroufinis, S. (2014). Beyond systems theoretical explanations of an organisms becoming: A process philosophical approach. In S. Koutroufinis (Ed.), In Life and Process (pp. 99–132). De Gruyter. doi: 10.1515/9783110352597 . 32. Koutroufinis, S. (2017). Organism, machine, process. towards a process ontology for organismic dynamics. Organisms. Journal of Biological Sciences, 1, 23–44. URL: http://ojs.uniroma1.it/index.php 33. Leibniz, G. W. (1991). De l’horizon de la doctrine humaine. Vrin. 1666. 34. Lesne, A., & Victor, J.-M. (2006). Chromatin fiber functional organization: Some plausible models. Eur Phys J E Soft Matter, 19, 279–290. doi: 10.1140/epje/i2005-10050-6 . 35. Longo, G., & Montévil, M. (2011). From physics to biology by extending criticality and symmetry breakings. Progress in Biophysics and Molecular Biology, 106, 340 – 347. doi: 10.1016/ j.pbiomolbio.2011.03.005 . 36. Longo, G., & Montévil, M. (2013a). Extended criticality, phase spaces and enablement in biology. Chaos, Solitons & Fractals, 55, 64 – 79. doi: 10.1016/j.chaos.2013.03.008 . 37. Longo, G., & Montévil, M. (2013b). Extended criticality, phase spaces and enablement in biology. Chaos, Solitons & Fractals, (pp. –). doi: 10.1016/j.chaos.2013.03.008 . 38. Longo, G., & Montévil, M. (2014). Perspectives on Organisms: Biological time, symmetries and singularities. Lecture Notes in Morphogenesis. Dordrecht: Springer. doi: 10.1007/ 978-3-642-35938-5 . 39. Longo, G., & Montévil, M. (2017). Comparing symmetries in models and simulations. In M. Dorato, L. Magnani, & T. Bertolotti (Eds.), Springer Handbook of Model-Based Science. Springer. doi: 10.1007/978-3-319-30526-4 . 40. Longo, G., Montévil, M., & Kauffman, S. (2012). No entailing laws, but enablement in the evolution of the biosphere. In Genetic and Evolutionary Computation Conference. GECCO12 New York, NY, USA: ACM. doi: 10.1145/2330784.2330946 . 41. Loreto, V., Servedio, V. D. P., Strogatz, S. H., & Tria, F. (2016). Dynamics on expanding spaces: Modeling the emergence of novelties. In M. Degli Esposti, E. G. Altmann, & F. Pachet (Eds.), Creativity and Universality in Language (pp. 59–83). Cham: Springer. doi: 10.1007/978-3-319-24403-7_5 . 42. Mayr, E. (1963). Animal species and evolution volume 797. Belknap Press of Harvard University Press Cambridge, Massachusetts. 43. Mazzola, G. (2012). The topos of music: geometric logic of concepts, theory, and performance. Birkh äuser. 44. Millikan, R. G. (1989). In defense of proper functions. Philosophy of science, 56, 288–302. 45. Montévil, M., & Mossio, M. (2015). Biological organisation as closure of constraints. Journal of Theoretical Biology, 372, 179 – 191. doi: 10.1016/j.jtbi.2015.02.029 . 46. Montévil, M., Mossio, M., Pocheville, A., & Longo, G. (2016). Theoretical principles for biology: Variation. Progress in Biophysics and Molecular Biology, 122, 36 – 50. doi: 10.1016/j. pbiomolbio.2016.08.005 . 47. Moore, A. (2012). Life defined. BioEssays, 34, 253–254. doi: 10.1002/bies.201290011 . 48. Mossio, M., Montévil, M., & Longo, G. (2016). Theoretical principles for biology: Organization. Progress in Biophysics and Molecular Biology, 122, 24 – 35. doi: 10.1016/j.pbiomolbio. 2016.07.005 49. Mossio, M., Saborido, C., & Moreno, A. (2009). An organizational account of biological functions. The British Journal for the Philosophy of Science, 60, 813–841. doi: 10.1093/ bjps/axp036 . 50. Muller, G. B., & Wagner, G. P. (1991). Novelty in evolution: restructuring the concept. Annual Review of Ecology and Systematics, (pp. 229–256). doi: 10.1146/annurev.es.22.110191. 001305 . 51. Neander, K. (1991). Functions as selected effects: The conceptual analyst’s defense. Philosophy of Science, 58, 168–184. doi: 10. 1086/289610 . 52. Pachet, F., & Roy, P. (2014). Imitative leadsheet generation with user constraints. In T. S. et al (Ed.), ECAI (pp. 1077–1078). doi: 10.3233/978-1-61499-419-0-1077 . 53. Paldi, A. (2003). Stochastic gene expression during cell differentiation: order from disorder? Cell Mol. Life Sci., 60, 1775–1779. 54. Papadopoulos, A., Roy, P., & Pachet, F. (2016). Assisted lead sheet composition using flowcomposer. In M. Rueher (Ed.), Principles and Practice of Constraint Programming: 22nd International Conference (pp. 769–785). Cham: Springer. doi: 10. 1007/978-3-319-44953-1_48 . 55. Rosen, R. (1991). Life itself: a comprehensive inquiry into the nature, origin, and fabrication of life. Columbia U. P. 56. Ruiz-Mirazo, K., Peretó, J., & Moreno, A. (2004). A universal definition of life: Autonomy and open-ended evolution. Origins of life and evolution of the biosphere, 34, 323–346. doi: 10.1023/B: ORIG.0000016440.53346.dc . 57. Saetzler, K., Sonnenschein, C., & Soto, A. (2011). Systems biology beyond networks: Generating order from disorder through self-organization. Seminars in Cancer Biology, 21, 165 – 174. doi: 10.1016/j.semcancer.2011.04.004 . 58. Soros, L., & Stanley, K. O. (2014). Identifying necessary conditions for open-ended evolution through the artificial life world of chromaria. In ALIFE 14: The Fourteenth Conference on the Synthesis and Simulation of Living Systems (pp. 793–800). volume 14. doi: 10.7551/978-0-262-32621-6-ch128 . 59. Stadler, B., Stadler, P., Wagner, G., & Fontana, W. (2001). The topology of the possible: Formal spaces underlying patterns of evolutionary change. Journal of Theoretical Biology, 213, 241 – 274. doi: 10.1006/jtbi.2001.2423 . 60. Stephan, A. (1999). Varieties of emergence. Evolution and cognition, 5, 50–59. 61. Tohme, M., Fini, J.-B., Laudet, V., & Demeneix, B. (2012). Chapter 8 small model organisms as tools in food safety research. In Hormone-Disruptive Chemical Contaminants in Food (pp. 136–153). The Royal Society of Chemistry. doi: 10.1039/ 9781849732970-00136 . 62. Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460. 63. Turing, A. M. (1952). The chemical basis of morphogenesis. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 237, 37–72. doi: 10.1098/rstb. 1952.0012 . 64. Varela, F., Maturana, H., & Uribe, R. (1974). Autopoiesis: The organization of living systems, its characterization and a model. Biosystems, 5, 187 – 196. doi: 10.1016/0303-2647(74)90031-8 . 65. de Vladar, H. P., Santos, M., & Szathmáry, E. (2017). Grand views of evolution. Trends in Ecology & Evolution, 32, 324 – 334. doi: 10.1016/j.tree.2017.01.008 . 66. Wagner, G. P., & Lynch, V. J. (2010). Evolutionary novelties. Current Biology, 20, R48 – R52. doi: 10.1016/j.cub.2009.11. 010 . 67. West, G., Brown, J., & Enquist, B. (1999). The fourth dimension of life: Fractal geometry and allometric scaling of organisms. Science, 284, 1677–1679. doi: 10.1126/science.284.5420.1677 . 68. West-Eberhard, M. J. (2003). Developmental plasticity and evolution. Oxford University Press. 69. Zhu, J., Zhang, Y.-T., Alber, M. S., & Newman, S. A. (2010). Bare bones pattern formation: A core regulatory network in varying geometries reproduces major features of vertebrate limb development and evolution. PLOS ONE, 5, 1–11. doi: 10.1371/ journal.pone.0010892
{"url":"https://montevil.org/publications/articles/2019-montevil-possibility-spaces-novelty/","timestamp":"2024-11-06T14:24:03Z","content_type":"text/html","content_length":"419512","record_id":"<urn:uuid:1d5f5541-5b5d-469a-8cfe-9ea4c3946eaa>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00103.warc.gz"}
Whole Number -- from Wolfram MathWorld One of the numbers 1, 2, 3, ... (OEIS A000027), also called the counting numbers or natural numbers. 0 is sometimes included in the list of "whole" numbers (Bourbaki 1968, Halmos 1974), but there seems to be no general agreement. Some authors also interpret "whole number" to mean "a number having fractional part of zero," making the whole numbers equivalent to the integers. Due to lack of standard terminology, the following terms are recommended in preference to "counting number," "natural number," and "whole number." set name symbol ..., integers Z 1, 2, 3, 4, ... positive integers Z-+ 0, 1, 2, 3, 4, ... nonnegative integers Z-* 0, nonpositive integers negative integers Z--
{"url":"https://mathworld.wolfram.com/WholeNumber.html","timestamp":"2024-11-04T05:29:26Z","content_type":"text/html","content_length":"55645","record_id":"<urn:uuid:7d468fd7-05dd-40e5-a17a-60966825cd3d>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00689.warc.gz"}
5 Best Ways to Check Congruency of Two Triangles in Python π ‘ Problem Formulation: In geometry, two triangles are considered congruent if their corresponding sides and angles are equal. Our goal is to write a program to verify if a pair of triangles is congruent based upon given side lengths (and possibly angles). For instance, if Triangle A has sides of lengths 3, 4, 5 and Triangle B also has sides of similar lengths, the desired output should affirm congruency between the triangles. Method 1: SSS (Side Side Side) Congruency Check This method involves checking if all three sides of one triangle are equal to the corresponding sides of the other triangle. It assumes that the triangle’s sides are provided in sorted order (ascending or descending). If all three sides match, the triangles are congruent according to SSS postulate. Here’s an example: def is_congruent_sss(tri_a, tri_b): return tri_a == tri_b # Example usage triangle_1 = [3, 4, 5] triangle_2 = [5, 3, 4] # Sort the sides print(is_congruent_sss(triangle_1, triangle_2)) Output: True This code snippet defines a function is_congruent_sss() which takes the sorted sides of two triangles as input and returns True if they are congruent according to the SSS postulate. It demonstrates the use of list sorting and simple equality comparison to verify congruency. Method 2: SAS (Side Angle Side) Congruency Check In this method, triangles are declared congruent if any two sides and the included angle of one triangle are equal to the corresponding two sides and included angle of the other triangle. It is useful when angle measures are also known. Here’s an example: from math import radians, cos def is_congruent_sas(side1_a, angle_a, side2_a, side1_b, angle_b, side2_b): return side1_a == side1_b and side2_a == side2_b and cos(radians(angle_a)) == cos(radians(angle_b)) # Example usage print(is_congruent_sas(3, 90, 4, 3, 90, 4)) Output: True The function is_congruent_sas() compares two sides and the cosine of the included angle for both triangles, returning True if they are equivalent. It shows how trigonometric functions can be applied to check congruency following the SAS postulate. Method 3: ASA (Angle Side Angle) Congruency Check ASA congruence criterion checks if two angles and the intervening side are the same in both triangles. This method is applicable when the length of one side and measures of the adjacent angles are known for each triangle. Here’s an example: def is_congruent_asa(angle1_a, side_a, angle2_a, angle1_b, side_b, angle2_b): return side_a == side_b and angle1_a == angle1_b and angle2_a == angle2_b # Example usage print(is_congruent_asa(45, 5.6569, 90, 45, 5.6569, 90)) Output: True The is_congruent_asa() function verifies congruency based on the ASA postulate. If the provided side and adjacent angles match for both triangles, the function returns True. It highlights the importance of angle measurements for triangle congruency checks. Method 4: AAS (Angle Angle Side) Congruency Check AAS criteria states that two triangles are congruent if two angles and a non-included side of one triangle are equal to the corresponding parts of another triangle. This method is particularly useful when two angles and any side are known. Here’s an example: def is_congruent_aas(angle1_a, angle2_a, side_a, angle1_b, angle2_b, side_b): return side_a == side_b and angle1_a == angle1_b and angle2_a == angle2_b # Example usage print(is_congruent_aas(60, 60, 3, 60, 60, 3)) Output: True The function is_congruent_aas() implements the AAS congruency check. By comparing two angles and a non-included side of the triangles, it properly determines if the triangles are congruent. Bonus One-Liner Method 5: RHS (Right-angle Hypotenuse Side) Congruency Check This is a specialized case of the SSS postulate for right-angled triangles. The hypotenuse and one other side are compared to establish congruency using a concise one-liner function. Here’s an example: is_congruent_rhs = lambda hyp_a, side_a, hyp_b, side_b: hyp_a == hyp_b and side_a == side_b # Example usage print(is_congruent_rhs(5, 3, 5, 3)) Output: True Using a Python lambda function, is_congruent_rhs() provides a quick and elegant one-liner solution for the RHS congruency check specifically applicable to right-angled triangles. • Method 1: SSS Congruency Check. Straightforward for triangles of unknown orientation. Reliable if all sides are known. Limited when angle information is also needed. • Method 2: SAS Congruency Check. Efficient when two sides and included angle can be measured. Requires a bit more math (trigonometry). Can be precise. • Method 3: ASA Congruency Check. Useful for comparing triangles with known angles. Simple calculations but less common real-world application. • Method 4: AAS Congruency Check. Handy when specific non-included side and angles are known. Like ASA, can be limited in application. • Bonus Method 5: RHS Congruency Check. Ideal for right-angled triangles. Very specific use case. Elegant and simple to implement if conditions are met.
{"url":"https://blog.finxter.com/5-best-ways-to-check-congruency-of-two-triangles-in-python/","timestamp":"2024-11-03T05:54:49Z","content_type":"text/html","content_length":"71421","record_id":"<urn:uuid:561a99bb-13a2-42ce-9846-9b4032d23e7e>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00409.warc.gz"}
Cost efficiency and regulation of Slovenian water distribution utilities : an application of stochastic frontier methods Thèse de doctorat: Università della Svizzera italiana, 2006 (jury note: Summa cum laude) English Over the last two decades it has become increasingly important to promote the efficiency and improve the performance of natural monopolies operating within network industries. In line with this, different regulatory approaches have been designed aiming at preventing the abuse of monopoly power and at the same time enhancing the performance of regulated firms. The most widely adopted incentive-based regulatory schemes involve price cap (RPI-X), revenue cap, and yardstick regulation models. These schemes aim to give firms an incentive for efficient production and cost reduction. However, due to the imperfect information available to the regulator there are some drawbacks related to the use of price-cap regulation since the regulator does not know a firm’s true costs. High costs may be due to a firm’s particular production situation or merely to its inefficiency. Thus, in setting the initial price level and the yearly efficiency factor X in the price-cap formula the regulator can use some form of cost-based benchmarking analysis. In this case, benchmarking is used to establish a larger information basis for more effective regulation that reduces the informational asymmetry between firms and the regulator. Hence, there is a close link between efficiency measurements and incentive-based price regulation. Today’s price regulation of Slovenian water distribution utilities resembles the rate-of-return regulation scheme. Nevertheless, the current Rules on Price Determination of Obligatory Local Public Utilities for Environmental Protection (2004) envisage the use of benchmarking in the price-regulation process and defining of the best-practice performance. However, the Rules have not yet been put into practice since the benchmarking method has still not been determined. In the thesis we consider the use of parametric frontier benchmarking methods and suggest how the results could be used in the price-regulation process. The main method employed is Stochastic Frontier Analysis (SFA), while the Corrected Ordinary Least Squares (COLS) method is used to cross-check the results. A translog frontier cost function is estimated based on an unbalanced panel data set of 52 utilities over the 1997-2003 period. The cost inefficiency estimates of Slovenian water distribution utilities are obtained by several different parametric frontier methods. The employed models differ in their assumptions, method of estimation and in their ability to account for firm-specific effects and distinguish between firm heterogeneity and inefficiency. The pooled model does not take into account the panel structure of the data and is therefore unable to separate unobserved heterogeneity from inefficiency. While conventional fixed and random effects panel data models take firm-specific effects into account in the estimation of inefficiency, they treat any time-invariant unobserved heterogeneity as inefficiency. They, too, are found to fail when it comes to separating heterogeneity from inefficiency. This problem is tackled by ‘true’ fixed and random effects models by adding in an additional term in the model which captures time-invariant and firm-specific effects and therefore separates these effects from inefficiency (Greene, 2002a, b). However, it remains debatable whether the time-invariant firm-specific effect should, in fact, be attributed to unobserved heterogeneity or to inefficiency. Mundlak’s (1978) formulation of the random effects model is also considered since it allows controlling for any correlation between unobserved heterogeneity and regressors. In our study it is found that the estimation results based on the conventional random effects models tend to highly overestimate cost inefficiency, while the true fixed effects (TFE) model seems to slightly underestimate it. Nevertheless, since the inefficiency estimates obtained by the TFE model closely correspond to the pooled model it is believed that these two models provide a better approximation of the actual cost inefficiency of Slovenian water distribution utilities, which is found to be close to or slightly above 20% on average. The TFE model is also found to perform the best with respect to the expected signs and significance of the regression coefficients. The inefficiency results indicate that significant cost inefficiency is present in Slovenian water distribution companies and that the utilities would have to considerably cut their costs in order to become efficient. This may be facilitated by a properly designed price regulation that introduces incentives for efficiency improvements. The inefficiency scores obtained from the different methods are, however, not found to be consistent in their levels and rankings of the utilities. A possible explanation of these inconsistent results can be found in the different ability of stochastic frontier methods to account for unobservable heterogeneity. However, since the regulator needs reliable estimates of the efficiency potential of a regulated firm this finding is particularly unwelcome. It is thus recommended to use the benchmarking results obtained by the SFA methods only as a starting point for providing information about the range in which the inefficiency score can be located. Alternatively, the estimated cost function can also be used to predict utilities’ costs, with this approach being in line with yardstick competition. Besides achieving cost efficiency, i.e. operating at minimum cost at a given size, important cost savings may result from achieving scale efficiency, i.e. operating at the size that minimises average production costs. The results from the different models in the latter case prove to be fairly consistent. Based on the obtained results, the presence of economies of output density and customer density in Slovenian water distribution utilities is confirmed. Therefore, it would be beneficial for the utilities if they managed to distribute larger volumes of output to their existing customers as well as to acquire new customers. With respect to economies of scale, medium-sized utilities are found to closely correspond to the optimal size of water distribution utilities in Slovenia. Economies of scale prevail in smaller utilities, implying they should consider expanding the scale of their operations through mergers. Conversely, large utilities are found to operate at levels where economies of scale are already exhausted. Overall, based on the results obtained it can be concluded that there is large potential for cost savings in Slovenian water distribution utilities. However, no evidence of any notable improvements being made can be found so far. The total factor productivity growth over the examined period is found to be around zero where technical progress is established but, on the other hand, no significant improvements in cost efficiency are found. In order to facilitate these improvements, a new regulatory framework is needed, where the choice should be made among incentive-based price regulation schemes. Rate of return regulation combined with benchmarking as proposed by the Rules on Price Determination of Obligatory Local Public Utilities for Environment Protection (2004) would be one of the appropriate alternatives. License undefined Persistent URL
{"url":"https://susi.usi.ch/usi/documents/318063","timestamp":"2024-11-07T04:35:10Z","content_type":"text/html","content_length":"41727","record_id":"<urn:uuid:85ae2d46-d3f9-4b85-aa3d-df4cd9147631>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00479.warc.gz"}
Applications of Trigonometry Optional Exercise AP State Syllabus SSC 10th Class Maths Solutions 12th Lesson Applications of Trigonometry Optional Exercise AP State Board Syllabus AP SSC 10th Class Maths Textbook Solutions Chapter 12 Applications of Trigonometry Optional Exercise Textbook Questions and Answers. 10th Class Maths 12th Lesson Applications of Trigonometry Optional Exercise Textbook Questions and Answers Question 1. A 1.2 m tall girl spots a balloon moving with the wind in a horizontal line at a height of 88.2 m from the ground. The angle of elevation of the balloon from the eyes of the girl at any instant is 60°. After sometime, the angle of elevation reduces to 30°. Find the distance travelled by the balloon during the interval. Height of the balloon from the ground = 88.2 m Height of the girl = 1.2 m Angles of elevations = 60° and 30° Let the distance travelled = dm From the figure tan 60° = \(\frac{87}{x}\) √3 = \(\frac{87}{x}\) ⇒ 87 = √3x …….(1) ⇒ x = \(\frac{87}{\sqrt{3}}\) m Also tan 30° = \(\frac{87}{x+d}\) ⇒ \(\frac{1}{\sqrt{3}}\) = \(\frac{87}{x+d}\) ⇒ 87 = \(\frac{x+d}{\sqrt{3}}\) ………(2) From equations (1) and (2) √3x = \(\frac{x+d}{\sqrt{3}}\) √3 × √3x = x + d ⇒ 3x = x + d ⇒ 2x = d Question 2. The angle of elevation of the top of a tower from the foot of the building is 30° and the angle of elevation of the top of the building from the foot of the tower is 60°. What is the ratio of heights of tower and building? Let the height of the tower = x m Let the height of the building = y m Distance between the tower and building = d m. Angle of elevation of the top of the tower = 30°. From the figure, ∴ x : y = 1 : 3 ∴ The ratio of heights of tower and building = 1 : 3. Question 3. The angles of elevation of the top of a lighthouse from 3 boats A, B and C in a straight line of same side of the light- house are a, 2a, 3a respectively. If the distance between the boats A and B is x meters. Find the height of lighthouse. From the figure, Let PQ be the height of the lighthouse = h m A = First point of observation B = Second point of observation C = Third point of observation Given, AB = x and BC = y (Not given in the text) Exterior angle = Sum of the opposite interior angles ∠PBQ = ∠BQA + ∠BAQ and ∠PCQ = ∠CBQ + ∠CQB ∴ AB = x = OB By applying the sine rule, From △PBQ Question 4. Inner part of a cupboard is in the cuboidical shape with its length, breadth and height in the ratio 1 : √2 : 1. What is the angle made by the longest stick which can be inserted cupboard with its base inside? The ratio of the length, breadth and height = 1 : √2 : 1 Let its length be = x breadth = √2x height = x The longest stick that can be placed on the base is along its hypotenuse [!! Again, the longest stick that can be inserted in the cup board is along the line join of the bottom corn on with’ its opposite top corner, i.e., along the hypotenuse of the right triangle formed by height of the cup board, hypotenuse of the base and the line join of bottom corner with its opposite top corner. Length of the largest stick = \(\sqrt{(\sqrt{3} x)^{2}+x^{2}}\) = \(\sqrt{3 x^{2}+x^{2}}\) = \(\sqrt{4 x^{2}}\) = 2x] Now the angle made by the largest stick be = θ Then tan θ = \(\frac{\text { opp. side }}{\text { adj. side }}\) = \(\frac{x}{\sqrt{3} x}\) = \(\frac{1}{\sqrt{3}}\) tan θ = tan 30° ∴ θ = 30°. Question 5. An iron spherical ball of volume 232848 cm^3 has been melted and converted into a cone with the vertical angle of 120°. What are its height and base? Volume of the spherical ball = Volume of the cone Given that vertical angle = 60° Let its height be h cm. and radius r cm. From the figure Also tan 30° = \(\frac{h}{r}\) ⇒ \(\frac{1}{\sqrt{3}}\) = \(\frac{h}{r}\) ∴ h = \(\frac{r}{\sqrt{3}}\) Substituting h = \(\frac{r}{\sqrt{3}}\) equation (1) we get ⇒ r = h√3 = (22.4) (1.732) = 38.79 m r = 38.79 cm and h = 22.4 cm.
{"url":"https://apboardsolutions.guru/ap-ssc-10th-class-maths-solutions-chapter-12-optional-exercise/","timestamp":"2024-11-04T17:01:11Z","content_type":"text/html","content_length":"60757","record_id":"<urn:uuid:aba5e7a4-7ee8-4413-ba05-a82a29acfc3c>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00423.warc.gz"}
Poster Session 3 Poster Session 3 Hall J (level 1) Ines Chami · Sami Abu-El-Haija · Bryan Perozzi · Christopher Ré · Kevin Murphy There has been a surge of recent interest in graph representation learning (GRL). GRL methods have generally fallen into three main categories, based on the availability of labeled data. The first, network embedding, focuses on learning unsupervised representations of relational structure. The second, graph regularized neural networks, leverages graphs to augment neural network losses with a regularization objective for semi-supervised learning. The third, graph neural networks, aims to learn differentiable functions over discrete topologies with arbitrary structure. However, despite the popularity of these areas there has been surprisingly little work on unifying the three paradigms. Here, we aim to bridge the gap between network embedding, graph regularization and graph neural networks. We propose a comprehensive taxonomy of GRL methods, aiming to unify several disparate bodies of work. Specifically, we propose the GraphEDM framework, which generalizes popular algorithms for semi-supervised learning (e.g. GraphSage, GCN, GAT), and unsupervised learning (e.g. DeepWalk, node2vec) of graph representations into a single consistent approach. To illustrate the generality of GraphEDM, we fit over thirty existing methods into this framework. We believe that this unifying view both provides a solid foundation for understanding the intuition behind these methods, and enables future research in the area. Harsh Rangwani · shrinivas ramasubramanian · Sho Takemori · Kato Takashi · Yuhei Umeda · Venkatesh Babu R Self-training based semi-supervised learning algorithms have enabled the learning of highly accurate deep neural networks, using only a fraction of labeled data. However, the majority of work on self-training has focused on the objective of improving accuracy whereas practical machine learning systems can have complex goals (e.g. maximizing the minimum of recall across classes, etc.) that are non-decomposable in nature. In this work, we introduce the Cost-Sensitive Self-Training (CSST) framework which generalizes the self-training-based methods for optimizing non-decomposable metrics. We prove that our framework can better optimize the desired non-decomposable metric utilizing unlabeled data, under similar data distribution assumptions made for the analysis of self-training. Using the proposed CSST framework, we obtain practical self-training methods (for both vision and NLP tasks) for optimizing different non-decomposable metrics using deep neural networks. Our results demonstrate that CSST achieves an improvement over the state-of-the-art in majority of the cases across datasets and objectives. Piyush Raikwar · Deepak Mishra Distillation in neural networks using only the samples randomly drawn from a Gaussian distribution is possibly the most straightforward solution one can think of for the complex problem of knowledge transfer from one network (teacher) to the other (student). If successfully done, it can eliminate the requirement of teacher's training data for knowledge distillation and avoid often arising privacy concerns in sensitive applications such as healthcare. There have been some recent attempts at Gaussian noise-based data-free knowledge distillation, however, none of them offer a consistent or reliable solution. We identify the shift in the distribution of hidden layer activation as the key limiting factor, which occurs when Gaussian noise is fed to the teacher network instead of the accustomed training data. We propose a simple solution to mitigate this shift and show that for vision tasks, such as classification, it is possible to achieve a performance close to the teacher by just using the samples randomly drawn from a Gaussian distribution. We validate our approach on CIFAR10, CIFAR100, SVHN, and Food101 datasets. We further show that in situations of sparsely available original data for distillation, the proposed Gaussian noise-based knowledge distillation method can outperform the distillation using the available data with a large margin. Our work lays the foundation for further research in the direction of noise-engineered knowledge distillation using random samples. Mikael Henaff · Roberta Raileanu · Minqi Jiang · Tim Rocktäschel In recent years, a number of reinforcement learning (RL) methods have been pro- posed to explore complex environments which differ across episodes. In this work, we show that the effectiveness of these methods critically relies on a count-based episodic term in their exploration bonus. As a result, despite their success in relatively simple, noise-free settings, these methods fall short in more realistic scenarios where the state space is vast and prone to noise. To address this limitation, we introduce Exploration via Elliptical Episodic Bonuses (E3B), a new method which extends count-based episodic bonuses to continuous state spaces and encourages an agent to explore states that are diverse under a learned embed- ding within each episode. The embedding is learned using an inverse dynamics model in order to capture controllable aspects of the environment. Our method sets a new state-of-the-art across 16 challenging tasks from the MiniHack suite, without requiring task-specific inductive biases. E3B also outperforms existing methods in reward-free exploration on Habitat, demonstrating that it can scale to high-dimensional pixel-based observations and realistic Micah Carroll · Orr Paradise · Jessy Lin · Raluca Georgescu · Mingfei Sun · David Bignell · Stephanie Milani · Katja Hofmann · Matthew Hausknecht · Anca Dragan · Sam Devlin Randomly masking and predicting word tokens has been a successful approach in pre-training language models for a variety of downstream tasks. In this work, we observe that the same idea also applies naturally to sequential decision making, where many well-studied tasks like behavior cloning, offline RL, inverse dynamics, and waypoint conditioning correspond to different sequence maskings over a sequence of states, actions, and returns. We introduce the UniMASK framework, which provides a unified way to specify models which can be trained on many different sequential decision making tasks. We show that a single UniMASK model is often capable of carrying out many tasks with performance similar to or better than single-task models. Additionally, after fine-tuning, our UniMASK models consistently outperform comparable single-task models. Manzil Zaheer · Kenneth Marino · Will Grathwohl · John Schultz · Wendy Shang · Sheila Babayan · Arun Ahuja · Ishita Dasgupta · Christine Kaeser-Chen · Rob Fergus A fundamental ability of an intelligent web-based agent is seeking out and acquiring new information. Internet search engines reliably find the correct vicinity but the top results may be a few links away from the desired target. A complementary approach is navigation via hyperlinks, employing a policy that comprehends local content and selects a link that moves it closer to the target. In this paper, we show that behavioral cloning of randomly sampled trajectories is sufficient to learn an effective link selection policy. We demonstrate the approach on a graph version of Wikipedia with 38M nodes and 387M edges. The model is able to efficiently navigate between nodes 5 and 20 steps apart 96% and 92% of the time, respectively. We then use the resulting embeddings and policy in downstream fact verification and question answering tasks where, in combination with basic TF-IDF search and ranking methods, they are competitive results to the state-of-the-art methods. Danny Driess · Ingmar Schubert · Pete Florence · Yunzhu Li · Marc Toussaint It is a long-standing problem to find effective representations for training reinforcement learning (RL) agents. This paper demonstrates that learning state representations with supervision from Neural Radiance Fields (NeRFs) can improve the performance of RL compared to other learned representations or even low-dimensional, hand-engineered state information. Specifically, we propose to train an encoder that maps multiple image observations to a latent space describing the objects in the scene. The decoder built from a latent-conditioned NeRF serves as the supervision signal to learn the latent space. An RL algorithm then operates on the learned latent space as its state representation. We call this NeRF-RL. Our experiments indicate that NeRF as supervision leads to a latent space better suited for the downstream RL tasks involving robotic object manipulations like hanging mugs on hooks, pushing objects, or opening doors.Video: https://dannydriess.github.io/ Sagnik Majumder · Changan Chen · Ziad Al-Halah · Kristen Grauman Room impulse response (RIR) functions capture how the surrounding physical environment transforms the sounds heard by a listener, with implications for various applications in AR, VR, and robotics. Whereas traditional methods to estimate RIRs assume dense geometry and/or sound measurements throughout the environment, we explore how to infer RIRs based on a sparse set of images and echoes observed in the space. Towards that goal, we introduce a transformer-based method that uses self-attention to build a rich acoustic context, then predicts RIRs of arbitrary query source-receiver locations through cross-attention. Additionally, we design a novel training objective that improves the match in the acoustic signature between the RIR predictions and the targets. In experiments using a state-of-the-art audio-visual simulator for 3D environments, we demonstrate that our method successfully generates arbitrary RIRs, outperforming state-of-the-art methods and---in a major departure from traditional methods---generalizing to novel environments in a few-shot manner. Project: http://vision.cs.utexas.edu/projects/fs_rir Keiran Paster · Sheila McIlraith · Jimmy Ba Recently, methods such as Decision Transformer that reduce reinforcement learning to a prediction task and solve it via supervised learning (RvS) have become popular due to their simplicity, robustness to hyperparameters, and strong overall performance on offline RL tasks. However, simply conditioning a probabilistic model on a desired return and taking the predicted action can fail dramatically in stochastic environments since trajectories that result in a return may have only achieved that return due to luck. In this work, we describe the limitations of RvS approaches in stochastic environments and propose a solution. Rather than simply conditioning on returns, as is standard practice, our proposed method, ESPER, conditions on learned average returns which are independent from environment stochasticity. Doing so allows ESPER to achieve strong alignment between target return and expected performance in real environments. We demonstrate this in several challenging stochastic offline-RL tasks including the challenging puzzle game 2048, and Connect Four playing against a stochastic opponent. In all tested domains, ESPER achieves significantly better alignment between the target return and achieved return than simply conditioning on returns. ESPER also achieves higher maximum performance than even the value-based baselines. Sanket Shah · Kai Wang · Bryan Wilder · Andrew Perrault · Milind Tambe Decision-Focused Learning (DFL) is a paradigm for tailoring a predictive model to a downstream optimization task that uses its predictions in order to perform better \textit{on that specific task}. The main technical challenge associated with DFL is that it requires being able to differentiate through the optimization problem, which is difficult due to discontinuous solutions and other challenges. Past work has largely gotten around this this issue by \textit{handcrafting} task-specific surrogates to the original optimization problem that provide informative gradients when differentiated through. However, the need to handcraft surrogates for each new task limits the usability of DFL. In addition, there are often no guarantees about the convexity of the resulting surrogates and, as a result, training a predictive model using them can lead to inferior local optima. In this paper, we do away with surrogates altogether and instead \textit{learn} loss functions that capture task-specific information. To the best of our knowledge, ours is the first approach that entirely replaces the optimization component of decision-focused learning with a loss that is automatically learned. Our approach (a) only requires access to a black-box oracle that can solve the optimization problem and is thus \textit{generalizable}, and (b) can be \textit{convex by construction} and so can be easily optimized over. We evaluate our approach on three resource allocation problems from the literature and find that our approach outperforms learning without taking into account task-structure in all three domains, and even hand-crafted surrogates from the literature. Yao Qin · Chiyuan Zhang · Ting Chen · Balaji Lakshminarayanan · Alex Beutel · Xuezhi Wang We investigate the robustness of vision transformers (ViTs) through the lens of their special patch-based architectural structure, i.e., they process an image as a sequence of image patches. We find that ViTs are surprisingly insensitive to patch-based transformations, even when the transformation largely destroys the original semantics and makes the image unrecognizable by humans. This indicates that ViTs heavily use features that survived such transformations but are generally not indicative of the semantic class to humans. Further investigations show that these features are useful but non-robust, as ViTs trained on them can achieve high in-distribution accuracy, but break down under distribution shifts. From this understanding, we ask: can training the model to rely less on these features improve ViT robustness and out-of-distribution performance? We use the images transformed with our patch-based operations as negatively augmented views and offer losses to regularize the training away from using non-robust features. This is a complementary view to existing research that mostly focuses on augmenting inputs with semantic-preserving transformations to enforce models' invariance. We show that patch-based negative augmentation consistently improves robustness of ViTs on ImageNet based robustness benchmarks across 20+ different experimental settings. Furthermore, we find our patch-based negative augmentation are complementary to traditional (positive) data augmentation techniques and batch-based negative examples in contrastive learning. Ignacio Peis · Chao Ma · José Miguel Hernández-Lobato Variational Autoencoders (VAEs) have recently been highly successful at imputing and acquiring heterogeneous missing data. However, within this specific application domain, existing VAE methods are restricted by using only one layer of latent variables and strictly Gaussian posterior approximations. To address these limitations, we present HH-VAEM, a Hierarchical VAE model for mixed-type incomplete data that uses Hamiltonian Monte Carlo with automatic hyper-parameter tuning for improved approximate inference. Our experiments show that HH-VAEM outperforms existing baselines in the tasks of missing data imputation and supervised learning with missing features. Finally, we also present a sampling-based approach for efficiently computing the information gain when missing features are to be acquired with HH-VAEM. Our experiments show that this sampling-based approach is superior to alternatives based on Gaussian approximations. Natalie Maus · Haydn Jones · Juston Moore · Matt Kusner · John Bradshaw · Jacob Gardner Bayesian optimization over the latent spaces of deep autoencoder models (DAEs) has recently emerged as a promising new approach for optimizing challenging black-box functions over structured, discrete, hard-to-enumerate search spaces (e.g., molecules). Here the DAE dramatically simplifies the search space by mapping inputs into a continuous latent space where familiar Bayesian optimization tools can be more readily applied. Despite this simplification, the latent space typically remains high-dimensional. Thus, even with a well-suited latent space, these approaches do not necessarily provide a complete solution, but may rather shift the structured optimization problem to a high-dimensional one. In this paper, we propose LOL-BO, which adapts the notion of trust regions explored in recent work on high-dimensional Bayesian optimization to the structured setting. By reformulating the encoder to function as both an encoder for the DAE globally and as a deep kernel for the surrogate model within a trust region, we better align the notion of local optimization in the latent space with local optimization in the input space. LOL-BO achieves as much as 20 times improvement over state-of-the-art latent space Bayesian optimization methods across six real-world benchmarks, demonstrating that improvement in optimization strategies is as important as developing better DAE models. Ganchao Wei · Ian H Stevenson · Xiaojing Wang Modern neural recording techniques allow neuroscientists to observe the spiking activity of many neurons simultaneously. Although previous work has illustrated how activity within and between known populations of neurons can be summarized by low-dimensional latent vectors, in many cases what determines a unique population may be unclear. Neurons differ in their anatomical location, but also, in their cell types and response properties. Moreover, multiple distinct populations may not be well described by a single low-dimensional, linear representation.To tackle these challenges, we develop a clustering method based on a mixture of dynamic Poisson factor analyzers (DPFA) model, with the number of clusters treated as an unknown parameter. To do the analysis of DPFA model, we propose a novel Markov chain Monte Carlo (MCMC) algorithm to efficiently sample its posterior distribution. Validating our proposed MCMC algorithm with simulations, we find that it can accurately recover the true clustering and latent states and is insensitive to the initial cluster assignments. We then apply the proposed mixture of DPFA model to multi-region experimental recordings, where we find that the proposed method can identify novel, reliable clusters of neurons based on their activity, and may, thus, be a useful tool for neural data analysis. Larry Zitnick · Abhishek Das · Adeesh Kolluru · Janice Lan · Muhammed Shuaibi · Anuroop Sriram · Zachary Ulissi · Brandon Wood Modeling the energy and forces of atomic systems is a fundamental problem in computational chemistry with the potential to help address many of the world’s most pressing problems, including those related to energy scarcity and climate change. These calculations are traditionally performed using Density Functional Theory, which is computationally very expensive. Machine learning has the potential to dramatically improve the efficiency of these calculations from days or hours to seconds.We propose the Spherical Channel Network (SCN) to model atomic energies and forces. The SCN is a graph neural network where nodes represent atoms and edges their neighboring atoms. The atom embeddings are a set of spherical functions, called spherical channels, represented using spherical harmonics. We demonstrate, that by rotating the embeddings based on the 3D edge orientation, more information may be utilized while maintaining the rotational equivariance of the messages. While equivariance is a desirable property, we find that by relaxing this constraint in both message passing and aggregation, improved accuracy may be achieved. We demonstrate state-of-the-art results on the large-scale Open Catalyst 2020 dataset in both energy and force prediction for numerous tasks and metrics. Setareh Cohan · Nam Hee Kim · David Rolnick · Michiel van de Panne Policies produced by deep reinforcement learning are typically characterised by their learning curves, but they remain poorly understood in many other respects. ReLU-based policies result in a partitioning of the input space into piecewise linear regions. We seek to understand how observed region counts and their densities evolve during deep reinforcement learning using empirical results that span a range of continuous control tasks and policy network dimensions. Intuitively, we may expect that during training, the region density increases in the areas that are frequently visited by the policy, thereby affording fine-grained control. We use recent theoretical and empirical results for the linear regions induced by neural networks in supervised learning settings for grounding and comparison of our results. Empirically, we find that the region density increases only moderately throughout training, as measured along fixed trajectories coming from the final policy. However, the trajectories themselves also increase in length during training, and thus the region densities decrease as seen from the perspective of the current trajectory. Our findings suggest that the complexity of deep reinforcement learning policies does not principally emerge from a significant growth in the complexity of functions observed on-and-around trajectories of the policy. Fred Lu · Joseph Munoz · Maya Fuchs · Tyler LeBlond · Elliott Zaresky-Williams · Edward Raff · Francis Ferraro · Brian Testa We present a framework to statistically audit the privacy guarantee conferred by a differentially private machine learner in practice. While previous works have taken steps toward evaluating privacy loss through poisoning attacks or membership inference, they have been tailored to specific models or have demonstrated low statistical power. Our work develops a general methodology to empirically evaluate the privacy of differentially private machine learning implementations, combining improved privacy search and verification methods with a toolkit of influence-based poisoning attacks. We demonstrate significantly improved auditing power over previous approaches on a variety of models including logistic regression, Naive Bayes, and random forest. Our method can be used to detect privacy violations due to implementation errors or misuse. When violations are not present, it can aid in understanding the amount of information that can be leaked from a given dataset, algorithm, and privacy specification. Shichong Peng · Seyed Alireza Moazenipourasil · Ke Li A persistent challenge in conditional image synthesis has been to generate diverse output images from the same input image despite only one output image being observed per input image. GAN-based methods are prone to mode collapse, which leads to low diversity. To get around this, we leverage Implicit Maximum Likelihood Estimation (IMLE) which can overcome mode collapse fundamentally. IMLE uses the same generator as GANs but trains it with a different, non-adversarial objective which ensures each observed image has a generated sample nearby. Unfortunately, to generate high-fidelity images, prior IMLE-based methods require a large number of samples, which is expensive. In this paper, we propose a new method to get around this limitation, which we dub Conditional Hierarchical IMLE (CHIMLE), which can generate high-fidelity images without requiring many samples. We show CHIMLE significantly outperforms the prior best IMLE, GAN and diffusion-based methods in terms of image fidelity and mode coverage across four tasks, namely night-to-day, 16x single image super-resolution, image colourization and image decompression. Quantitatively, our method improves Fréchet Inception Distance (FID) by 36.9% on average compared to the prior best IMLE-based method, and by 27.5% on average compared to the best non-IMLE-based general-purpose methods. More results and code are available on the project website at https://niopeng.github.io/CHIMLE/. Lorenzo Bonicelli · Matteo Boschini · Angelo Porrello · Concetto Spampinato · SIMONE CALDERARA Rehearsal approaches enjoy immense popularity with Continual Learning (CL) practitioners. These methods collect samples from previously encountered data distributions in a small memory buffer; subsequently, they repeatedly optimize on the latter to prevent catastrophic forgetting. This work draws attention to a hidden pitfall of this widespread practice: repeated optimization on a small pool of data inevitably leads to tight and unstable decision boundaries, which are a major hindrance to generalization. To address this issue, we propose Lipschitz-DrivEn Rehearsal (LiDER), a surrogate objective that induces smoothness in the backbone network by constraining its layer-wise Lipschitz constants w.r.t. replay examples. By means of extensive experiments, we show that applying LiDER delivers a stable performance gain to several state-of-the-art rehearsal CL methods across multiple datasets, both in the presence and absence of pre-training. Through additional ablative experiments, we highlight peculiar aspects of buffer overfitting in CL and better characterize the effect produced by LiDER. Code is available at https://github.com/aimagelab/LiDER. Ruochen Wang · Yuanhao Xiong · Minhao Cheng · Cho-Jui Hsieh Efficient and automated design of optimizers plays a crucial role in full-stack AutoML systems. However, prior methods in optimizer search are often limited by their scalability, generability, or sample efficiency. With the goal of democratizing research and application of optimizer search, we present the first efficient, scalable and generalizable framework that can directly search on the tasks of interest. We first observe that optimizer updates are fundamentally mathematical expressions applied to the gradient. Inspired by the innate tree structure of the underlying math expressions, we re-arrange the space of optimizers into a super-tree, where each path encodes an optimizer. This way, optimizer search can be naturally formulated as a path-finding problem, allowing a variety of well-established tree traversal methods to be used as the search algorithm. We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection that leverage the characteristics of optimizer update rules to further boost the sample efficiency. We provide a diverse set of tasks to benchmark our algorithm and demonstrate that, with only 128 evaluations, the proposed framework can discover optimizers that surpass both human-designed counterparts and prior optimizer search methods. Our code is publicly available at https://github.com/ruocwang/enos. Michael Poli · Stefano Massaroli · Federico Berto · Jinkyoo Park · Tri Dao · Christopher Ré · Stefano Ermon Spectral analysis provides one of the most effective paradigms for information-preserving dimensionality reduction, as simple descriptions of naturally occurring signals are often obtained via few terms of periodic basis functions. In this work, we study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time: frequency-domain models (FDMs). Existing FDMs are based on complex-valued transforms i.e. Fourier Transforms (FT), and layers that perform computation on the spectrum and input data separately. This design introduces considerable computational overhead: for each layer, a forward and inverse FT. Instead, this work introduces a blueprint for frequency domain learning through a single transform: transform once (T1). To enable efficient, direct learning in the frequency domain we derive a variance preserving weight initialization scheme and investigate methods for frequency selection in reduced-order FDMs. Our results noticeably streamline the design process of FDMs, pruning redundant transforms, and leading to speedups of 3x to 10x that increase with data resolution and model size. We perform extensive experiments on learning the solution operator of spatio-temporal dynamics, including incompressible Navier-Stokes, turbulent flows around airfoils and high-resolution video of smoke. T1 models improve on the test performance of FDMs while requiring significantly less computation (5 hours instead of 32 for our large-scale experiment), with over 20% reduction in predictive error across tasks. There is currently a large gap in performance between the statistically rigorous methods like linear regression or additive splines and the powerful deep methods using neural networks. Previous works attempting to close this gap have failed to fully consider the exponentially growing number of feature combinations which deep networks consider automatically during training. In this work, we develop a tractable selection algorithm to efficiently identify the necessary feature combinations by leveraging techniques in feature interaction detection.Our proposed Sparse Interaction Additive Networks (SIAN) construct a bridge from these simple and interpretable models to a fully connected neural network. SIAN achieves competitive performance against state-of-the-art methods across multiple large-scale tabular datasets and consistently finds an optimal tradeoff between the modeling capacity of neural networks and the generalizability of simpler methods. Rui Wang · Robin Walters · Rose Yu Current deep learning models for dynamics forecasting struggle with generalization. They can only forecast in a specific domain and fail when applied to systems with different parameters, external forces, or boundary conditions. We propose a model-based meta-learning method called DyAd which can generalize across heterogeneous domains by partitioning them into different tasks. DyAd has two parts: an encoder that infers the time-invariant hidden features of the task with weak supervision, and a forecaster which learns the shared dynamics of the entire domain. The encoder adapts and controls the forecaster during inference using adaptive instance normalization and adaptive padding. Theoretically, we prove that the generalization error of such a procedure is related to the task relatedness in the source domain, as well as the domain differences between source and target. Experimentally, we demonstrate that our model outperforms state-of-the-art approaches on forecasting complex physical dynamics including turbulent flow, real-world sea surface temperature, and ocean currents. Zekun Hao · Arun Mallya · Serge Belongie · Ming-Yu Liu Coordinate-based networks, usually in the forms of MLPs, have been successfully applied to the task of predicting high-frequency but low-dimensional signals using coordinate inputs. To scale them to model large-scale signals, previous works resort to hybrid representations, combining a coordinate-based network with a grid-based representation, such as sparse voxels. However, such approaches lack a compact global latent representation in its grid, making it difficult to model a distribution of signals, which is important for generalization tasks. To address the limitation, we propose the Levels-of-Experts (LoE) framework, which is a novel coordinate-based representation consisting of an MLP with periodic, position-dependent weights arranged hierarchically. For each linear layer of the MLP, multiple candidate values of its weight matrix are tiled and replicated across the input space, with different layers replicating at different frequencies. Based on the input, only one of the weight matrices is chosen for each layer. This greatly increases the model capacity without incurring extra computation or compromising generalization capability. We show that the new representation is an efficient and competitive drop-in replacement for a wide range of tasks, including signal fitting, novel view synthesis, and generative modeling. Yibo Yang · Hong Wang · Haobo Yuan · Zhouchen Lin Automated machine learning has been widely explored to reduce human efforts in designing neural architectures and looking for proper hyperparameters. In the domain of neural initialization, however, similar automated techniques have rarely been studied. Most existing initialization methods are handcrafted and highly dependent on specific architectures. In this paper, we propose a differentiable quantity, named GradCoisne, with theoretical insights to evaluate the initial state of a neural network. Specifically, GradCosine is the cosine similarity of sample-wise gradients with respect to the initialized parameters. By analyzing the sample-wise optimization landscape, we show that both the training and test performance of a network can be improved by maximizing GradCosine under gradient norm constraint. Based on this observation, we further propose the neural initialization optimization (NIO) algorithm. Generalized from the sample-wise analysis into the real batch setting, NIO is able to automatically look for a better initialization with negligible cost compared with the training time. With NIO, we improve the classification performance of a variety of neural architectures on CIFAR10, CIFAR-100, and ImageNet. Moreover, we find that our method can even help to train large vision Transformer architecture without warmup. Ceyuan Yang · Yujun Shen · Yinghao Xu · Deli Zhao · Bo Dai · Bolei Zhou Discriminator plays a vital role in training generative adversarial networks (GANs) via distinguishing real and synthesized samples. While the real data distribution remains the same, the synthesis distribution keeps varying because of the evolving generator, and thus effects a corresponding change of the bi-classification task assigned to the discriminator. We argue that a discriminator with an on-the-fly adjustment on its capacity can better accommodate such a time-varying task. A comprehensive empirical study confirms that the proposed training strategy, termed as DynamicD, improves the synthesis performance without incurring any additional computation cost or training objectives. Two capacity adjusting schemes are developed for training GANs under different data regimes: i) given a sufficient amount of training data, the discriminator benefits from a progressively increased learning capacity, and ii) when the training data is limited, gradually decreasing the layer width mitigates the over-fitting issue of the discriminator. Experiments on both 2D and 3D-aware image synthesis tasks conducted on a range of datasets substantiate the generalizability of our DynamicD as well as its substantial improvement over the baselines. Furthermore, DynamicD is synergistic to other discriminator-improving approaches (including data augmentation, regularizers, and pre-training), and brings continuous performance gain when combined with them for learning GANs. Code will be made publicly available. Yutian Chen · Xingyou Song · Chansoo Lee · Zi Wang · Richard Zhang · David Dohan · Kazuya Kawakami · Greg Kochanski · Arnaud Doucet · Marc'Aurelio Ranzato · Sagi Perel · Nando de Freitas Meta-learning hyperparameter optimization (HPO) algorithms from prior experiments is a promising approach to improve optimization efficiency over objective functions from a similar distribution. However, existing methods are restricted to learning from experiments sharing the same set of hyperparameters. In this paper, we introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction when trained on vast tuning data from the wild, such as Google’s Vizier database, one of the world’s largest HPO datasets. Our extensive experiments demonstrate that the OptFormer can simultaneously imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates. Compared to a Gaussian Process, the OptFormer also learns a robust prior distribution for hyperparameter response functions, and can thereby provide more accurate and better calibrated predictions. This work paves the path to future extensions for training a Transformer-based model as a general HPO optimizer. Dongki Kim · Jinheon Baek · Sung Ju Hwang Self-supervised learning of graph neural networks (GNNs) aims to learn an accurate representation of the graphs in an unsupervised manner, to obtain transferable representations of them for diverse downstream tasks. Predictive learning and contrastive learning are the two most prevalent approaches for graph self-supervised learning. However, they have their own drawbacks. While the predictive learning methods can learn the contextual relationships between neighboring nodes and edges, they cannot learn global graph-level similarities. Contrastive learning, while it can learn global graph-level similarities, its objective to maximize the similarity between two differently perturbed graphs may result in representations that cannot discriminate two similar graphs with different properties. To tackle such limitations, we propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined as Discrepancy-based Self-supervised LeArning (D-SLA). Specifically, we create multiple perturbations of the given graph with varying degrees of similarity, and train the model to predict whether each graph is the original graph or the perturbed one. Moreover, we further aim to accurately capture the amount of discrepancy for each perturbed graph using the graph edit distance. We validate our D-SLA on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which ours largely outperforms relevant baselines. Zhuoer Xu · Guanghui Zhu · Changhua Meng · shiwen cui · Zhenzhe Ying · Weiqiang Wang · Ming GU · Yihua Huang Based on the significant improvement of model robustness by AT (Adversarial Training), various variants have been proposed to further boost the performance. Well-recognized methods have focused on different components of AT (e.g., designing loss functions and leveraging additional unlabeled data). It is generally accepted that stronger perturbations yield more robust models.However, how to generate stronger perturbations efficiently is still missed. In this paper, we propose an efficient automated attacker called A2 to boost AT by generating the optimal perturbations on-the-fly during training. A2 is a parameterized automated attacker to search in the attacker space for the best attacker against the defense model and examples. Extensive experiments across different datasets demonstrate that A2 generates stronger perturbations with low extra cost and reliably improves the robustness of various AT methods against different attacks. Yao Zhu · YueFeng Chen · Chuanlong Xie · Xiaodan Li · Rong Zhang · Hui Xue' · Xiang Tian · bolun zheng · Yaowu Chen Out-of-distribution (OOD) detection is a critical task for ensuring the reliability and safety of deep neural networks in real-world scenarios. Different from most previous OOD detection methods that focus on designing OOD scores or introducing diverse outlier examples to retrain the model, we delve into the obstacle factors in OOD detection from the perspective of typicality and regard the feature's high-probability region of the deep model as the feature's typical set. We propose to rectify the feature into its typical set and calculate the OOD score with the typical features to achieve reliable uncertainty estimation. The feature rectification can be conducted as a plug-and-play module with various OOD scores. We evaluate the superiority of our method on both the commonly used benchmark (CIFAR) and the more challenging high-resolution benchmark with large label space (ImageNet). Notably, our approach outperforms state-of-the-art methods by up to 5.11% in the average FPR95 on the ImageNet benchmark. Wanxing Chang · Ye Shi · Hoang Tuan · Jingya Wang Universal Domain Adaptation (UniDA) aims to transfer knowledge from a source domain to a target domain without any constraints on label sets. Since both domains may hold private classes, identifying target common samples for domain alignment is an essential issue in UniDA. Most existing methods require manually specified or hand-tuned threshold values to detect common samples thus they are hard to extend to more realistic UniDA because of the diverse ratios of common classes. Moreover, they cannot recognize different categories among target-private samples as these private samples are treated as a whole. In this paper, we propose to use Optimal Transport (OT) to handle these issues under a unified framework, namely UniOT. First, an OT-based partial alignment with adaptive filling is designed to detect common classes without any predefined threshold values for realistic UniDA. It can automatically discover the intrinsic difference between common and private classes based on the statistical information of the assignment matrix obtained from OT. Second, we propose an OT-based target representation learning that encourages both global discrimination and local consistency of samples to avoid the over-reliance on the source. Notably, UniOT is the first method with the capability to automatically discover and recognize private categories in the target domain for UniDA. Accordingly, we introduce a new metric H^3-score to evaluate the performance in terms of both accuracy of common samples and clustering performance of private ones. Extensive experiments clearly demonstrate the advantages of UniOT over a wide range of state-of-the-art methods in UniDA. Seon-Ho Lee · Nyeong Ho Shin · Chang-Su Kim A novel approach to rank estimation, called geometric order learning (GOL), is proposed in this paper. First, we construct an embedding space, in which the direction and distance between objects represent order and metric relations between their ranks, by enforcing two geometric constraints: the order constraint compels objects to be sorted according to their ranks, while the metric constraint makes the distance between objects reflect their rank difference. Then, we perform the simple $k$ nearest neighbor ($k$-NN) search in the embedding space to estimate the rank of a test object. Moreover, to assess the quality of embedding spaces for rank estimation, we propose a metric called discriminative ratio for ranking (DRR). Extensive experiments on facial age estimation, historical color image (HCI) classification, and aesthetic score regression demonstrate that GOL constructs effective embedding spaces and thus yields excellent rank estimation performances. The source codes are available at https://github.com/seon92/GOL Yan Huang · Yuming Wang · Yunan Zeng · Liang Wang Recently, the accuracy of image-text matching has been greatly improved by multimodal pretrained models, all of which are trained on millions or billions of paired images and texts. Different from them, this paper studies a new scenario as unpaired image-text matching, in which paired images and texts are assumed to be unavailable during model training. To deal with this, we propose a simple yet effective method namely Multimodal Aligned Conceptual Knowledge (MACK), which is inspired by the knowledge use in human brain. It can be directly used as general knowledge to correlate images and texts even without model training, or further fine-tuned based on unpaired images and texts to better generalize to certain datasets. In addition, we extend it as a re-ranking method, which can be easily combined with existing image-text matching models to substantially improve their performance. Hanoona Bangalath · Muhammad Maaz · Muhammad Uzair Khattak · Salman Khan · Fahad Shahbaz Khan Existing open-vocabulary object detectors typically enlarge their vocabulary sizes by leveraging different forms of weak supervision. This helps generalize to novel objects at inference. Two popular forms of weak-supervision used in open-vocabulary detection (OVD) include pretrained CLIP model and image-level supervision. We note that both these modes of supervision are not optimally aligned for the detection task: CLIP is trained with image-text pairs and lacks precise localization of objects while the image-level supervision has been used with heuristics that do not accurately specify local object regions. In this work, we propose to address this problem by performing object-centric alignment of the language embeddings from the CLIP model. Furthermore, we visually ground the objects with only image-level supervision using a pseudo-labeling process that provides high-quality object proposals and helps expand the vocabulary during training. We establish a bridge between the above two object-alignment strategies via a novel weight transfer function that aggregates their complimentary strengths. In essence, the proposed model seeks to minimize the gap between object and image-centric representations in the OVD setting. On the COCO benchmark, our proposed approach achieves 36.6 AP50 on novel classes, an absolute 8.2 gain over the previous best performance. For LVIS, we surpass the state-of-the-art ViLD model by 5.0 mask AP for rare categories and 3.4 overall. Code: https://github.com/hanoonaR/object-centric-ovd. Jihoon Tack · Jongjin Park · Hankook Lee · Jaeho Lee · Jinwoo Shin The idea of using a separately trained target model (or teacher) to improve the performance of the student model has been increasingly popular in various machine learning domains, and meta-learning is no exception; a recent discovery shows that utilizing task-wise target models can significantly boost the generalization performance. However, obtaining a target model for each task can be highly expensive, especially when the number of tasks for meta-learning is large. To tackle this issue, we propose a simple yet effective method, coined Self-improving Momentum Target (SiMT). SiMT generates the target model by adapting from the temporal ensemble of the meta-learner, i.e., the momentum network. This momentum network and its task-specific adaptations enjoy a favorable generalization performance, enabling self-improving of the meta-learner through knowledge distillation. Moreover, we found that perturbing parameters of the meta-learner, e.g., dropout, further stabilize this self-improving process by preventing fast convergence of the distillation loss during meta-training. Our experimental results demonstrate that SiMT brings a significant performance gain when combined with a wide range of meta-learning methods under various applications, including few-shot regression, few-shot classification, and meta-reinforcement learning. Code is available at https://github.com Wenbo Su · Yuanxing Zhang · Yufeng Cai · Kaixu Ren · Pengjie Wang · Huimin Yi · Yue Song · Jing Chen · Hongbo Deng · Jian Xu · Lin Qu · Bo Zheng High-concurrency asynchronous training upon parameter server (PS) architecture and high-performance synchronous training upon all-reduce (AR) architecture are the most commonly deployed distributed training modes for recommendation models. Although synchronous AR training is designed to have higher training efficiency, asynchronous PS training would be a better choice for training speed when there are stragglers (slow workers) in the shared cluster, especially under limited computing resources. An ideal way to take full advantage of these two training modes is to switch between them upon the cluster status. However, switching training modes often requires tuning hyper-parameters, which is extremely time- and resource-consuming. We find two obstacles to a tuning-free approach: the different distribution of the gradient values and the stale gradients from the stragglers. This paper proposes Global Batch gradients Aggregation (GBA) over PS, which aggregates and applies gradients with the same global batch size as the synchronous training. A token-control process is implemented to assemble the gradients and decay the gradients with severe staleness. We provide the convergence analysis to reveal that GBA has comparable convergence properties with the synchronous training, and demonstrate the robustness of GBA the recommendation models against the gradient staleness. Experiments on three industrial-scale recommendation tasks show that GBA is an effective tuning-free approach for switching. Compared to the state-of-the-art derived asynchronous training, GBA achieves up to 0.2% improvement on the AUC metric, which is significant for the recommendation models. Meanwhile, under the strained hardware resource, GBA speeds up at least 2.4x compared to synchronous training. Ran Ran · Wei Wang · Quan Gang · Jieming Yin · Nuo Xu · Wujie Wen Recently cloud-based graph convolutional network (GCN) has demonstrated great success and potential in many privacy-sensitive applications such as personal healthcare and financial systems. Despite its high inference accuracy and performance on the cloud, maintaining data privacy in GCN inference, which is of paramount importance to these practical applications, remains largely unexplored. In this paper, we take an initial attempt towards this and develop CryptoGCN--a homomorphic encryption (HE) based GCN inference framework. A key to the success of our approach is to reduce the tremendous computational overhead for HE operations, which can be orders of magnitude higher than its counterparts in the plaintext space. To this end, we develop a solution that can effectively take advantage of the sparsity of matrix operations in GCN inference to significantly reduce the encrypted computational overhead. Specifically, we propose a novel Adjacency Matrix-Aware (AMA) data formatting method along with the AMA assisted patterned sparse matrix partitioning, to exploit the complex graph structure and perform efficient matrix-matrix multiplication in HE computation. In this way, the number of HE operations can be significantly reduced. We also develop a co-optimization framework that can explore the trade-offs among the accuracy, security level, and computational overhead by judicious pruning and polynomial approximation of activation modules in GCNs. Based on the NTU-XVIEW skeleton joint dataset, i.e., the largest dataset evaluated homomorphically by far as we are aware of, our experimental results demonstrate that CryptoGCN outperforms state-of-the-art solutions in terms of the latency and number of homomorphic operations, i.e., achieving as much as a 3.10$\times$ speedup on latency and reduces the total Homomorphic Operation Count (HOC) by 77.4\% with a small accuracy loss of 1-1.5$\%$. Our code is publicly available at https://github.com/ Zhaowei Cai · Avinash Ravichandran · Paolo Favaro · Manchen Wang · Davide Modolo · Rahul Bhotika · Zhuowen Tu · Stefano Soatto We study semi-supervised learning (SSL) for vision transformers (ViT), an under-explored topic despite the wide adoption of the ViT architectures to different tasks. To tackle this problem, we use a SSL pipeline, consisting of first un/self-supervised pre-training, followed by supervised fine-tuning, and finally semi-supervised fine-tuning. At the semi-supervised fine-tuning stage, we adopt an exponential moving average (EMA)-Teacher framework instead of the popular FixMatch, since the former is more stable and delivers higher accuracy for semi-supervised vision transformers. In addition, we propose a probabilistic pseudo mixup mechanism to interpolate unlabeled samples and their pseudo labels for improved regularization, which is important for training ViTs with weak inductive bias. Our proposed method, dubbed Semi-ViT, achieves comparable or better performance than the CNN counterparts in the semi-supervised classification setting. Semi-ViT also enjoys the scalability benefits of ViTs that can be readily scaled up to large-size models with increasing accuracy. For example, Semi-ViT-Huge achieves an impressive 80\% top-1 accuracy on ImageNet using only 1\% labels, which is comparable with Inception-v4 using 100\% ImageNet labels. The code is available at https://github.com/amazon-science/semi-vit. Shoufa Chen · Chongjian GE · Zhan Tong · Jiangliu Wang · Yibing Song · Jue Wang · Ping Luo Pretraining Vision Transformers (ViTs) has achieved great success in visual recognition. A following scenario is to adapt a ViT to various image and video recognition tasks. The adaptation is challenging because of heavy computation and memory storage. Each model needs an independent and complete finetuning process to adapt to different tasks, which limits its transferability to different visual domains.To address this challenge, we propose an effective adaptation approach for Transformer, namely AdaptFormer, which can adapt the pre-trained ViTs into many different image and video tasks efficiently.It possesses several benefits more appealing than prior arts.Firstly, AdaptFormer introduces lightweight modules that only add less than 2% extra parameters to a ViT, while it is able to increase the ViT's transferability without updating its original pre-trained parameters, significantly outperforming the existing 100\% fully fine-tuned models on action recognition benchmarks.Secondly, it can be plug-and-play in different Transformers and scalable to many visual tasks.Thirdly, extensive experiments on five image and video datasets show that AdaptFormer largely improves ViTs in the target domains. For example, when updating just 1.5% extra parameters, it achieves about 10% and 19% relative improvement compared to the fully fine-tuned models on Something-Something~v2 and HMDB51, respectively. Code is available at https://github.com/ShoufaChen/AdaptFormer. Haojie Zhang · Ge Li · Jia Li · Zhongjin Zhang · YUQI ZHU · Zhi Jin Large-scale pre-trained language models have achieved impressive results on a wide range of downstream tasks recently. However, fine-tuning an extremely large-scale pre-trained language model on limited target datasets is often plagued by overfitting and representation degradation. In this paper, we propose a Dynamic Parameter Selection (DPS) algorithm for the large-scale pre-trained models during fine-tuning, which adaptively selects a more promising subnetwork to perform staging updates based on gradients of back-propagation. Experiments on the GLUE benchmark show that DPS outperforms previous fine-tuning methods in terms of overall performance and stability, and consistently achieves better results with variable pre-trained language models. In addition, DPS brings a large magnitude of improvement in out-of-domain transferring experiments and low-resource scenarios, which shows that it can maintain stable general contextual features and reduce the representation collapse. We release our code at \url{https://github.com/ZhangHaojie077/DPS}. Xin Wen · Bingchen Zhao · Anlin Zheng · Xiangyu Zhang · Xiaojuan Qi In this paper, we tackle the problem of learning visual representations from unlabeled scene-centric data. Existing works have demonstrated the potential of utilizing the underlying complex structure within scene-centric data; still, they commonly rely on hand-crafted objectness priors or specialized pretext tasks to build a learning framework, which may harm generalizability. Instead, we propose contrastive learning from data-driven semantic slots, namely SlotCon, for joint semantic grouping and representation learning. The semantic grouping is performed by assigning pixels to a set of learnable prototypes, which can adapt to each sample by attentive pooling over the feature and form new slots. Based on the learned data-dependent slots, a contrastive objective is employed for representation learning, which enhances the discriminability of features, and conversely facilitates grouping semantically coherent pixels together. Compared with previous efforts, by simultaneously optimizing the two coupled objectives of semantic grouping and contrastive learning, our approach bypasses the disadvantages of hand-crafted priors and is able to learn object/group-level representations from scene-centric images. Experiments show our approach effectively decomposes complex scenes into semantic groups for feature learning and significantly benefits downstream tasks, including object detection, instance segmentation, and semantic segmentation. Code is available at: https://github.com/CVMI-Lab/SlotCon. Zheng Dong · Ke Xu · Ziheng Duan · Hujun Bao · Weiwei Xu · Rynson Lau Although PIFu-based 3D human reconstruction methods are popular, the quality of recovered details is still unsatisfactory. In a sparse (e.g., 3 RGBD sensors) capture setting, the depth noise is typically amplified in the PIFu representation, resulting in flat facial surfaces and geometry-fallible bodies. In this paper, we propose a novel geometry-aware two-scale PIFu for 3D human reconstruction from sparse, noisy inputs. Our key idea is to exploit the complementary properties of depth denoising and 3D reconstruction, for learning a two-scale PIFu representation to reconstruct high-frequency facial details and consistent bodies separately. To this end, we first formulate depth denoising and 3D reconstruction as a multi-task learning problem. The depth denoising process enriches the local geometry information of the reconstruction features, while the reconstruction process enhances depth denoising with global topology information. We then propose to learn the two-scale PIFu representation using two MLPs based on the denoised depth and geometry-aware features. Extensive experiments demonstrate the effectiveness of our approach in reconstructing facial details and bodies of different poses and its superiority over state-of-the-art methods. Bolivar Solarte · Chin-Hsuan Wu · Yueh-Cheng Liu · Yi-Hsuan Tsai · Min Sun We present 360-MLC, a self-training method based on multi-view layout consistency for finetuning monocular room-layout models using unlabeled 360-images only. This can be valuable in practical scenarios where a pre-trained model needs to be adapted to a new data domain without using any ground truth annotations. Our simple yet effective assumption is that multiple layout estimations in the same scene must define a consistent geometry regardless of their camera positions. Based on this idea, we leverage a pre-trained model to project estimated layout boundaries from several camera views into the 3D world coordinate. Then, we re-project them back to the spherical coordinate and build a probability function, from which we sample the pseudo-labels for self-training. To handle unconfident pseudo-labels, we evaluate the variance in the re-projected boundaries as an uncertainty value to weight each pseudo-label in our loss function during training. In addition, since ground truth annotations are not available during training nor in testing, we leverage the entropy information in multiple layout estimations as a quantitative metric to measure the geometry consistency of the scene, allowing us to evaluate any layout estimator for hyper-parameter tuning, including model selection without ground truth annotations. Experimental results show that our solution achieves favorable performance against state-of-the-art methods when self-training from three publicly available source datasets to a unique, newly labeled dataset consisting of multi-view images of the same Aniket Roy · Anshul Shah · Ketul Shah · Prithviraj Dhar · Anoop Cherian · Rama Chellappa Learning from a few examples is a challenging computer vision task. Traditionally,meta-learning-based methods have shown promise towards solving this problem.Recent approaches show benefits by learning a feature extractor on the abundantbase examples and transferring these to the fewer novel examples. However, thefinetuning stage is often prone to overfitting due to the small size of the noveldataset. To this end, we propose Few shot Learning with hard Mixup (FeLMi)using manifold mixup to synthetically generate samples that helps in mitigatingthe data scarcity issue. Different from a naïve mixup, our approach selects the hardmixup samples using an uncertainty-based criteria. To the best of our knowledge,we are the first to use hard-mixup for the few-shot learning problem. Our approachallows better use of the pseudo-labeled base examples through base-novel mixupand entropy-based filtering. We evaluate our approach on several common few-shotbenchmarks - FC-100, CIFAR-FS, miniImageNet and tieredImageNet and obtainimprovements in both 1-shot and 5-shot settings. Additionally, we experimented onthe cross-domain few-shot setting (miniImageNet → CUB) and obtain Massimiliano Patacchiola · John Bronskill · Aliaksandra Shysheya · Katja Hofmann · Sebastian Nowozin · Richard Turner Recent years have seen a growth in user-centric applications that require effective knowledge transfer across tasks in the low-data regime. An example is personalization, where a pretrained system is adapted by learning on small amounts of labeled data belonging to a specific user. This setting requires high accuracy under low computational complexity, therefore the Pareto frontier of accuracy vs. adaptation cost plays a crucial role. In this paper we push this Pareto frontier in the few-shot image classification setting with a key contribution: a new adaptive block called Contextual Squeeze-and-Excitation (CaSE) that adjusts a pretrained neural network on a new task to significantly improve performance with a single forward pass of the user data (context). We use meta-trained CaSE blocks to conditionally adapt the body of a network and a fine-tuning routine to adapt a linear head, defining a method called UpperCaSE. UpperCaSE achieves a new state-of-the-art accuracy relative to meta-learners on the 26 datasets of VTAB+MD and on a challenging real-world personalization benchmark (ORBIT), narrowing the gap with leading fine-tuning methods with the benefit of orders of magnitude lower adaptation cost. Hongjoon Ahn · Yongyi Yang · Quan Gan · Taesup Moon · David P Wipf Heterogeneous graph neural networks (GNNs) achieve strong performance on node classification tasks in a semi-supervised learning setting. However, as in the simpler homogeneous GNN case, message-passing-based heterogeneous GNNs may struggle to balance between resisting the oversmoothing that may occur in deep models, and capturing long-range dependencies of graph structured data. Moreover, the complexity of this trade-off is compounded in the heterogeneous graph case due to the disparate heterophily relationships between nodes of different types. To address these issues, we propose a novel heterogeneous GNN architecture in which layers are derived from optimization steps that descend a novel relation-aware energy function. The corresponding minimizer is fully differentiable with respect to the energy function parameters, such that bilevel optimization can be applied to effectively learn a functional form whose minimum provides optimal node representations for subsequent classification tasks. In particular, this methodology allows us to model diverse heterophily relationships between different node types while avoiding oversmoothing effects. Experimental results on 8 heterogeneous graph benchmarks demonstrates that our proposed method can achieve competitive node classification accuracy. Shuwen Chai · Jinghui Chen Recent studies show that despite achieving high accuracy on a number of real-world applications, deep neural networks (DNNs) can be backdoored: by injecting triggered data samples into the training dataset, the adversary can mislead the trained model into classifying any test data to the target class as long as the trigger pattern is presented. To nullify such backdoor threats, various methods have been proposed. Particularly, a line of research aims to purify the potentially compromised model. However, one major limitation of this line of work is the requirement to access sufficient original training data: the purifying performance is a lot worse when the available training data is limited. In this work, we propose Adversarial Weight Masking (AWM), a novel method capable of erasing the neural backdoors even in the one-shot setting. The key idea behind our method is to formulate this into a min-max optimization problem: first, adversarially recover the non-robust perturbation patterns and then (soft) mask the network weights that are sensitive to the recovered patterns. Comprehensive evaluations of several benchmark datasets suggest that AWM can largely improve the purifying effects over other state-of-the-art methods on various available training dataset sizes. Xuelong Mi · Mengfan Wang · Alex Chen · Jing-Xuan Lim · Yizhi Wang · Misha B Ahrens · Guoqiang Yu Multiple time series data occur in many real applications and the alignment among them is usually a fundamental step of data analysis. Frequently, these multiple time series are inter-dependent, which provides extra information for the alignment task and this information cannot be well utilized in the conventional pairwise alignment methods. Recently, the joint alignment was modeled as a max-flow problem, in which both the profile similarity between the aligned time series and the distance between adjacent warping functions are jointly optimized. However, despite the new model having elegant mathematical formulation and superior alignment accuracy, the long computation time and large memory usage, due to the use of the existing general-purpose max-flow algorithms, limit significantly its well-deserved wide use. In this report, we present BIdirectional pushing with Linear Component Operations (BILCO), a novel algorithm that solves the joint alignment max-flow problems efficiently and exactly. We develop the strategy of linear component operations that integrates dynamic programming technique and the push-relabel approach. This strategy is motivated by the fact that the joint alignment max-flow problem is a generalization of dynamic time warping (DTW) and numerous individual DTW problems are embedded. Further, a bidirectional-pushing strategy is proposed to introduce prior knowledge and reduce unnecessary computation, by leveraging another fact that good initialization can be easily computed for the joint alignment max-flow problem. We demonstrate the efficiency of BILCO using both synthetic and real experiments. Tested on thousands of datasets under various simulated scenarios and in three distinct application categories, BILCO consistently achieves at least 10 and averagely 20-folds increase in speed, and uses at most 1/8 and averagely 1/10 memory compared with the best existing max-flow method. Our source code can be found at https://github.com/yu-lab-vt/BILCO. Xiaojun Xu · Linyi Li · Bo Li Recent studies show that training deep neural networks (DNNs) with Lipschitz constraints are able to enhance adversarial robustness and other model properties such as stability. In this paper, we propose a layer-wise orthogonal training method (LOT) to effectively train 1-Lipschitz convolution layers via parametrizing an orthogonal matrix with an unconstrained matrix. We then efficiently compute the inverse square root of a convolution kernel by transforming the input domain to the Fourier frequency domain. On the other hand, as existing works show that semi-supervised training helps improve empirical robustness, we aim to bridge the gap and prove that semi-supervised learning also improves the certified robustness of Lipschitz-bounded models. We conduct comprehensive evaluations for LOT under different settings. We show that LOT significantly outperforms baselines regarding deterministic l2 certified robustness, and scales to deeper neural networks. Under the supervised scenario, we improve the state-of-the-art certified robustness for all architectures (e.g. from 59.04% to 63.50% on CIFAR-10 and from 32.57% to 34.59% on CIFAR-100 at radius $\rho=36/255$ for 40-layer networks). With semi-supervised learning over unlabelled data, we are able to improve state-of-the-art certified robustness on CIFAR-10 at $\rho=108/255$ from 36.04% to 42.39%. In addition, LOT consistently outperforms baselines on different model architectures with only 1/3 evaluation time. Kien Do · Thai Hung Le · Dung Nguyen · Dang Nguyen · HARIPRIYA HARIKUMAR · Truyen Tran · Santu Rana · Svetha Venkatesh Data-free Knowledge Distillation (DFKD) has attracted attention recently thanks to its appealing capability of transferring knowledge from a teacher network to a student network without using training data. The main idea is to use a generator to synthesize data for training the student. As the generator gets updated, the distribution of synthetic data will change. Such distribution shift could be large if the generator and the student are trained adversarially, causing the student to forget the knowledge it acquired at the previous steps. To alleviate this problem, we propose a simple yet effective method called Momentum Adversarial Distillation (MAD) which maintains an exponential moving average (EMA) copy of the generator and uses synthetic samples from both the generator and the EMA generator to train the student. Since the EMA generator can be considered as an ensemble of the generator's old versions and often undergoes a smaller change in updates compared to the generator, training on its synthetic samples can help the student recall the past knowledge and prevent the student from adapting too quickly to the new updates of the generator. Our experiments on six benchmark datasets including big datasets like ImageNet and Places365 demonstrate the superior performance of MAD over competing methods for handling the large distribution shift problem. Our method also compares favorably to existing DFKD methods and even achieves state-of-the-art results in some cases. Lijun Zhang · Xiao Liu · Hui Guan Multi-task learning (MTL) jointly learns a set of tasks by sharing parameters among tasks. It is a promising approach for reducing storage costs while improving task accuracy for many computer vision tasks. The effective adoption of MTL faces two main challenges. The first challenge is to determine what parameters to share across tasks to optimize for both memory efficiency and task accuracy. The second challenge is to automatically apply MTL algorithms to an arbitrary CNN backbone without requiring time-consuming manual re-implementation and significant domain expertise. This paper addresses the challenges by developing the first programming framework AutoMTL that automates efficient MTL model development for vision tasks. AutoMTL takes as inputs an arbitrary backbone convolutional neural network (CNN) and a set of tasks to learn, and automatically produces a multi-task model that achieves high accuracy and small memory footprint simultaneously. Experiments on three popular MTL benchmarks (CityScapes, NYUv2, Tiny-Taskonomy) demonstrate the effectiveness of AutoMTL over state-of-the-art approaches as well as the generalizability of AutoMTL across CNNs. AutoMTL is open-sourced and available at https://github.com/zhanglijun95/AutoMTL. Vladimir Fomenko · Ismail Elezi · Deva Ramanan · Laura Leal-Taixé · Aljosa Osep We tackle the problem of novel class discovery and localization (NCDL). In this setting, we assume a source dataset with supervision for only some object classes. Instances of other classes need to be discovered, classified, and localized automatically based on visual similarity without any human supervision. To tackle NCDL, we propose a two-stage object detection network Region-based NCDL (RNCDL) that uses a region proposal network to localize regions of interest (RoIs). We then train our network to learn to classify each RoI, either as one of the known classes, seen in the source dataset, or one of the novel classes, with a long-tail distribution constraint on the class assignments, reflecting the natural frequency of classes in the real world. By training our detection network with this objective in an end-to-end manner, it learns to classify all region proposals for a large variety of classes, including those not part of the labeled object class vocabulary. Our experiments conducted using COCO and LVIS datasets reveal that our method is significantly more effective than multi-stage pipelines that rely on traditional clustering algorithms. Furthermore, we demonstrate the generality of our approach by applying our method to a large-scale Visual Genome dataset, where our network successfully learns to detect various semantic classes without direct Ioana Bica · Mihaela van der Schaar Consider the problem of improving the estimation of conditional average treatment effects (CATE) for a target domain of interest by leveraging related information from a source domain with a different feature space. This heterogeneous transfer learning problem for CATE estimation is ubiquitous in areas such as healthcare where we may wish to evaluate the effectiveness of a treatment for a new patient population for which different clinical covariates and limited data are available. In this paper, we address this problem by introducing several building blocks that use representation learning to handle the heterogeneous feature spaces and a flexible multi-task architecture with shared and private layers to transfer information between potential outcome functions across domains. Then, we show how these building blocks can be used to recover transfer learning equivalents of the standard CATE learners. On a new semi-synthetic data simulation benchmark for heterogeneous transfer learning, we not only demonstrate performance improvements of our heterogeneous transfer causal effect learners across datasets, but also provide insights into the differences between these learners from a transfer perspective. Robin Winter · Marco Bertolini · Tuan Le · Frank Noe · Djork-Arné Clevert Equivariant neural networks, whose hidden features transform according to representations of a group $G$ acting on the data, exhibit training efficiency and an improved generalisation performance. In this work, we extend group invariant and equivariant representation learning to the field of unsupervised deep learning. We propose a general learning strategy based on an encoder-decoder framework in which the latent representation is separated in an invariant term and an equivariant group action component. The key idea is that the network learns to encode and decode data to and from a group-invariant representation by additionally learning to predict the appropriate group action to align input and output pose to solve the reconstruction task. We derive the necessary conditions on the equivariant encoder, and we present a construction valid for any $G$, both discrete and continuous. We describe explicitly our construction for rotations, translations and permutations. We test the validity and the robustness of our approach in a variety of experiments with diverse data types employing different network architectures. Ziming Liu · Ouail Kitouni · Niklas S Nolte · Eric Michaud · Max Tegmark · Mike Williams We aim to understand grokking, a phenomenon where models generalize long after overfitting their training set. We present both a microscopic analysis anchored by an effective theory and a macroscopic analysis of phase diagrams describing learning performance across hyperparameters. We find that generalization originates from structured representations, whose training dynamics and dependence on training set size can be predicted by our effective theory (in a toy setting). We observe empirically the presence of four learning phases: comprehension, grokking, memorization, and confusion. We find representation learning to occur only in a "Goldilocks zone" (including comprehension and grokking) between memorization and confusion. Compared to the comprehension phase, the grokking phase stays closer to the memorization phase, leading to delayed generalization. The Goldilocks phase is reminiscent of "intelligence from starvation" in Darwinian evolution, where resource limitations drive discovery of more efficient solutions. This study not only provides intuitive explanations of the origin of grokking, but also highlights the usefulness of physics-inspired tools, e.g., effective theories and phase diagrams, for understanding deep learning. Alberto Silvio Chiappa · Alessandro Marin Vargas · Alexander Mathis Biological and artificial agents need to deal with constant changes in the real world. We study this problem in four classical continuous control environments, augmented with morphological perturbations. Learning to locomote when the length and the thickness of different body parts vary is challenging, as the control policy is required to adapt to the morphology to successfully balance and advance the agent. We show that a control policy based on the proprioceptive state performs poorly with highly variable body configurations, while an (oracle) agent with access to a learned encoding of the perturbation performs significantly better. We introduce DMAP, a biologically-inspired, attention-based policy network architecture. DMAP combines independent proprioceptive processing, a distributed policy with individual controllers for each joint, and an attention mechanism, to dynamically gate sensory information from different body parts to different controllers. Despite not having access to the (hidden) morphology information, DMAP can be trained end-to-end in all the considered environments, overall matching or surpassing the performance of an oracle agent. Thus DMAP, implementing principles from biological motor control, provides a strong inductive bias for learning challenging sensorimotor tasks. Overall, our work corroborates the power of these principles in challenging locomotion tasks. The code is available at the following link: https://github.com/amathislab/dmap Binhang Yuan · Yongjun He · Jared Davis · Tianyi Zhang · Tri Dao · Beidi Chen · Percy Liang · Christopher Ré · Ce Zhang Training foundation models, such as GPT-3 and PaLM, can be extremely expensive, often involving tens of thousands of GPUs running continuously for months. These models are typically trained in specialized clusters featuring fast, homogeneous interconnects and using carefully designed software systems that support both data parallelism and model/pipeline parallelism. Such dedicated clusters can be costly and difficult to obtain. Can we instead leverage the much greater amount of decentralized, heterogeneous, and lower-bandwidth interconnected compute? Previous works examining the heterogeneous, decentralized setting focus on relatively small models that can be trained in a purely data parallel manner. State-of-the-art schemes for model parallel foundation model training, such as Megatron and Deepspeed, only consider the homogeneous data center setting. In this paper, we present the first study of training large foundation models with model parallelism in a decentralized regime over a heterogeneous network. Our key technical contribution is a scheduling algorithm that allocates different computational “tasklets” in the training of foundation models to a group of decentralized GPU devices connected by a slow heterogeneous network. We provide a formal cost model and further propose an efficient evolutionary algorithm to find the optimal allocation strategy. We conduct extensive experiments that represent different scenarios for learning over geo-distributed devices simulated using real-world network measurements. In the most extreme case, across 8 different cities spanning 3 continents, our approach is 4.8× faster than prior state-of-the-art training systems. Haotian Fu · Shangqun Yu · Michael Littman · George Konidaris We propose a model-based lifelong reinforcement-learning approach that estimates a hierarchical Bayesian posterior distilling the common structure shared across different tasks. The learned posterior combined with a sample-based Bayesian exploration procedure increases the sample efficiency of learning across a family of related tasks. We first derive an analysis of the relationship between the sample complexity and the initialization quality of the posterior in the finite MDP setting. We next scale the approach to continuous-state domains by introducing a Variational Bayesian Lifelong Reinforcement Learning algorithm that can be combined with recent model-based deep RL methods, and that exhibits backward transfer. Experimental results on several challenging domains show that our algorithms achieve both better forward and backward transfer performance than state-of-the-art lifelong RL methods. Peide Huang · Mengdi Xu · Jiacheng Zhu · Laixi Shi · Fei Fang · DING ZHAO Curriculum Reinforcement Learning (CRL) aims to create a sequence of tasks, starting from easy ones and gradually learning towards difficult tasks. In this work, we focus on the idea of framing CRL as interpolations between a source (auxiliary) and a target task distribution. Although existing studies have shown the great potential of this idea, it remains unclear how to formally quantify and generate the movement between task distributions. Inspired by the insights from gradual domain adaptation in semi-supervised learning, we create a natural curriculum by breaking down the potentially large task distributional shift in CRL into smaller shifts. We propose GRADIENT which formulates CRL as an optimal transport problem with a tailored distance metric between tasks. Specifically, we generate a sequence of task distributions as a geodesic interpolation between the source and target distributions, which are actually the Wasserstein barycenter. Different from many existing methods, our algorithm considers a task-dependent contextual distance metric and is capable of handling nonparametric distributions in both continuous and discrete context settings. In addition, we theoretically show that GRADIENT enables smooth transfer between subsequent stages in the curriculum under certain conditions. We conduct extensive experiments in locomotion and manipulation tasks and show that our proposed GRADIENT achieves higher performance than baselines in terms of learning efficiency and asymptotic performance. A. Tuan Nguyen · Philip Torr · Ser Nam Lim Federated Learning (FL) refers to the decentralized and privacy-preserving machine learning framework in which multiple clients collaborate (with the help of a central server) to train a global model without sharing their data. However, most existing FL methods only focus on maximizing the model's performance on the source clients' data (e.g., mobile users) without considering its generalization ability to unknown target data (e.g., a new user). In this paper, we incorporate the problem of Domain Generalization (DG) into Federated Learning to tackle the aforementioned issue. However, virtually all existing DG methods require a centralized setting where data is shared across the domains, which violates the principles of decentralized FL and hence not applicable. To this end, we propose a simple yet novel representation learning framework, namely FedSR, which enables domain generalization while still respecting the decentralized and privacy-preserving natures of this FL setting. Motivated by classical machine learning algorithms, we aim to learn a simple representation of the data for better generalization. In particular, we enforce an L2-norm regularizer on the representation and a conditional mutual information (between the representation and the data given the label) regularizer to encourage the model to only learn essential information (while ignoring spurious correlations such as the background). Furthermore, we provide theoretical connections between the above two objectives and representation alignment in domain generalization. Extensive experimental results suggest that our method significantly outperforms relevant baselines in this particular problem. Zhongxiang Dai · YAO SHU · Bryan Kian Hsiang Low · Patrick Jaillet Bayesian optimization (BO), which uses a Gaussian process (GP) as a surrogate to model its objective function, is popular for black-box optimization. However, due to the limitations of GPs, BO underperforms in some problems such as those with categorical, high-dimensional or image inputs. To this end, recent works have used the highly expressive neural networks (NNs) as the surrogate model and derived theoretical guarantees using the theory of neural tangent kernel (NTK). However, these works suffer from the limitations of the requirement to invert an extremely large parameter matrix and the restriction to the sequential (rather than batch) setting. To overcome these limitations, we introduce two algorithms based on the Thompson sampling (TS) policy named Sample-Then-Optimize Batch Neural TS (STO-BNTS) and STO-BNTS-Linear. To choose an input query, we only need to train an NN (resp. a linear model) and then choose the query by maximizing the trained NN (resp. linear model), which is equivalently sampled from the GP posterior with the NTK as the kernel function. As a result, our algorithms sidestep the need to invert the large parameter matrix yet still preserve the validity of the TS policy. Next, we derive regret upper bounds for our algorithms with batch evaluations, and use insights from batch BO and NTK to show that they are asymptotically no-regret under certain conditions. Finally, we verify their empirical effectiveness using practical AutoML and reinforcement learning experiments. Hyungjin Chung · Byeongsu Sim · Dohoon Ryu · Jong Chul Ye Recently, diffusion models have been used to solve various inverse problems in an unsupervised manner with appropriate modifications to the sampling process. However, the current solvers, which recursively apply a reverse diffusion step followed by a projection-based measurement consistency step, often produce sub-optimal results. By studying the generative sampling path, here we show that current solvers throw the sample path off the data manifold, and hence the error accumulates. To address this, we propose an additional correction term inspired by the manifold constraint, which can be used synergistically with the previous solvers to make the iterations close to the manifold. The proposed manifold constraint is straightforward to implement within a few lines of code, yet boosts the performance by a surprisingly large margin. With extensive experiments, we show that our method is superior to the previous methods both theoretically and empirically, producing promising results in many applications such as image inpainting, colorization, and sparse-view computed tomography. Code available https://github.com/HJ-harry/MCG_diffusion Liam Collins · Hamed Hassani · Aryan Mokhtari · Sanjay Shakkottai The Federated Averaging (FedAvg) algorithm, which consists of alternating between a few local stochastic gradient updates at client nodes, followed by a model averaging update at the server, is perhaps the most commonly used method in Federated Learning. Notwithstanding its simplicity, several empirical studies have illustrated that the model output by FedAvg leads to a model that generalizes well to new unseen tasks after a few fine-tuning steps. This surprising performance of such a simple method, however, is not fully understood from a theoretical point of view. In this paper, we formally investigate this phenomenon in the multi-task linear regression setting. We show that the reason behind the generalizability of the FedAvg output is FedAvg’s power in learning the common data representation among the clients’ tasks, by leveraging the diversity among client data distributions via multiple local updates between communication rounds. We formally establish the iteration complexity required by the clients for proving such result in the setting where the underlying shared representation is a linear map. To the best of our knowledge, this is the first result showing that FedAvg learns an expressive representation in any setting. Moreover, we show that multiple local updates between communication rounds are necessary for representation learning, as distributed gradient methods that make only one local update between rounds provably cannot recover the ground-truth representation in the linear setting, and empirically yield neural network representations that generalize drastically worse to new clients than those learned by FedAvg trained on heterogeneous image classification datasets. Daeha Kim · Byung Cheol Song Identity-invariant facial expression recognition (FER) has been one of the challenging computer vision tasks. Since conventional FER schemes do not explicitly address the inter-identity variation of facial expressions, their neural network models still operate depending on facial identity. This paper proposes to quantify the inter-identity variation by utilizing pairs of similar expressions explored through a specific matching process. We formulate the identity matching process as an Optimal Transport (OT) problem. Specifically, to find pairs of similar expressions from different identities, we define the inter-feature similarity as a transportation cost. Then, optimal identity matching to find the optimal flow with minimum transportation cost is performed by Sinkhorn-Knopp iteration. The proposed matching method is not only easy to plug in to other models, but also requires only acceptable computational overhead. Extensive simulations prove that the proposed FER method improves the PCC/CCC performance by up to 10% or more compared to the runner-up on wild datasets. The source code and software demo are available at https://github.com/kdhht2334/ELIM_FER. Huaibo Huang · Xiaoqiang Zhou · Ran He We present a general vision transformer backbone, called as Orthogonal Transformer, in pursuit of both efficiency and effectiveness. A major challenge for vision transformer is that self-attention, as the key element in capturing long-range dependency, is very computationally expensive for dense prediction tasks (e.g., object detection). Coarse global self-attention and local self-attention are then designed to reduce the cost, but they suffer from either neglecting local correlations or hurting global modeling. We present an orthogonal self-attention mechanism to alleviate these issues. Specifically, self-attention is computed in the orthogonal space that is reversible to the spatial domain but has much lower resolution. The capabilities of learning global dependency and exploring local correlations are maintained because every orthogonal token in self-attention can attend to the entire visual tokens. Remarkably, orthogonality is realized by constructing an endogenously orthogonal matrix that is friendly to neural networks and can be optimized as arbitrary orthogonal matrices. We also introduce Positional MLP to incorporate position information for arbitrary input resolutions as well as enhance the capacity of MLP. Finally, we develop a hierarchical architecture for Orthogonal Transformer. Extensive experiments demonstrate its strong performance on a broad range of vision tasks, including image classification, object detection, instance segmentation and semantic segmentation. Haoyi Zhou · Siyang Xiao · Shanghang Zhang · Jieqi Peng · Shuai Zhang · Jianxin Li The recent success of Transformer has benefited many real-world applications, with its capability of building long dependency through pairwise dot-products. However, the strong assumption that elements are directly attentive to each other limits the performance of tasks with high-order dependencies such as natural language understanding and Image captioning. To solve such problems, we are the first to define the Jump Self-attention (JAT) to build Transformers. Inspired by the pieces moving of English Draughts, we introduce the spectral convolutional technique to calculate JAT on the dot-product feature map. This technique allows JAT's propagation in each self-attention head and is interchangeable with the canonical self-attention. We further develop the higher-order variants under the multi-hop assumption to increase the generality. Moreover, the proposed architecture is compatible with the pre-trained models. With extensive experiments, we empirically show that our methods significantly increase the performance on ten different tasks. Jean-Baptiste Alayrac · Jeff Donahue · Pauline Luc · Antoine Miech · Iain Barr · Yana Hasson · Karel Lenc · Arthur Mensch · Katherine Millican · Malcolm Reynolds · Roman Ring · Eliza Rutherford · Serkan Cabi · Tengda Han · Zhitao Gong · Sina Samangooei · Marianne Monteiro · Jacob L Menick · Sebastian Borgeaud · Andy Brock · Aida Nematzadeh · Sahand Sharifzadeh · Mikołaj Bińkowski · Ricardo Barreira · Oriol Vinyals · Andrew Zisserman · Karén Simonyan Building models that can be rapidly adapted to novel tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family of Visual Language Models (VLM) with this ability. We propose key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endow them with in-context few-shot learning capabilities. We perform a thorough evaluation of our models, exploring and measuring their ability to rapidly adapt to a variety of image and video tasks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer, captioning tasks, which evaluate the ability to describe a scene or an event, and close-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, a single Flamingo model can achieve a new state of the art with few-shot learning, simply by prompting the model with task-specific examples. On numerous benchmarks, Flamingo outperforms models fine-tuned on thousands of times more task-specific data. Amir Bar · Yossi Gandelsman · Trevor Darrell · Amir Globerson · Alexei Efros How does one adapt a pre-trained visual model to novel downstream tasks without task-specific finetuning or any model modification? Inspired by prompting in NLP, this paper investigates visual prompting: given input-output image example(s) of a new task at test time and a new input image, the goal is to automatically produce the output image, consistent with the given examples. We show that posing this problem as simple image inpainting -- literally just filling in a hole in a concatenated visual prompt image -- turns out to be surprisingly effective, provided that the inpainting algorithm has been trained on the right data. We train masked auto-encoders on a new dataset that we curated -- 88k unlabeled figures from academic papers sources on Arxiv. We apply visual prompting to these pretrained models and demonstrate results on various downstream image-to-image tasks, including foreground segmentation, single object detection, colorization, edge detection, etc. Project page: https://yossigandelsman.github.io/visual_prompt Yue-Ting Pan · Jing-Lun Chou · Chun-Shu Wei Recognition of electroencephalographic (EEG) signals highly affect the efficiency of non-invasive brain-computer interfaces (BCIs). While recent advances of deep-learning (DL)-based EEG decoders offer improved performances, the development of geometric learning (GL) has attracted much attention for offering exceptional robustness in decoding noisy EEG data. However, there is a lack of studies on the merged use of deep neural networks (DNNs) and geometric learning for EEG decoding. We herein propose a manifold attention network (mAtt), a novel geometric deep learning (GDL)-based model, featuring a manifold attention mechanism that characterizes spatiotemporal representations of EEG data fully on a Riemannian symmetric positive definite (SPD). The evaluation of the proposed mAtt on both time-synchronous and -asyncronous EEG datasets suggests its superiority over other leading DL methods for general EEG decoding. Furthermore, analysis of model interpretation reveals the capability of mAtt in capturing informative EEG features and handling the non-stationarity of brain dynamics. Giyoung Jeon · Haedong Jeong · Jaesik Choi Measuring the attribution of input features toward the model output is one of the popular post-hoc explanations on the Deep Neural Networks (DNNs). Among various approaches to compute the attribution, the gradient-based methods are widely used to generate attributions, because of its ease of implementation and the model-agnostic characteristic. However, existing gradient integration methods such as Integrated Gradients (IG) suffer from (1) the noisy attributions which cause the unreliability of the explanation, and (2) the selection for the integration path which determines the quality of explanations. FullGrad (FG) is an another approach to construct the reliable attributions by focusing the locality of piece-wise linear network with the bias gradient. Although FG has shown reasonable performance for the given input, as the shortage of the global property, FG is vulnerable to the small perturbation, while IG which includes the exploration over the input space is robust. In this work, we design a new input attribution method which adopt the strengths of both local and global attributions.In particular, we propose a novel approach to distill input features using weak and extremely positive contributor masks. We aggregate the intermediate local attributions obtained from the distillation sequence to provide reliable attribution. We perform the quantitative evaluation compared to various attribution methods and show that our method outperforms others. We also provide the qualitative result that our method obtains object-aligned and sharp attribution heatmap. Tao Yang · Yuwang Wang · Yan Lu · Nanning Zheng Obtaining the human-like perception ability of abstracting visual concepts from concrete pixels has always been a fundamental and important target in machine learning research fields such as disentangled representation learning and scene decomposition. Towards this goal, we propose an unsupervised transformer-based Visual Concepts Tokenization framework, dubbed VCT, to perceive an image into a set of disentangled visual concept tokens, with each concept token responding to one type of independent visual concept. Particularly, to obtain these concept tokens, we only use cross-attention to extract visual information from the image tokens layer by layer without self-attention between concept tokens, preventing information leakage across concept tokens. We further propose a Concept Disentangling Loss to facilitate that different concept tokens represent independent visual concepts. The cross-attention and disentangling loss play the role of induction and mutual exclusion for the concept tokens, respectively. Extensive experiments on several popular datasets verify the effectiveness of VCT on the tasks of disentangled representation learning and scene decomposition. VCT achieves the state of the art results by a large margin. Linhao Qu · xiaoyuan luo · Manning Wang · Zhijian Song Computer-aided pathology diagnosis based on the classification of Whole Slide Image (WSI) plays an important role in clinical practice, and it is often formulated as a weakly-supervised Multiple Instance Learning (MIL) problem. Existing methods solve this problem from either a bag classification or an instance classification perspective. In this paper, we propose an end-to-end weakly supervised knowledge distillation framework (WENO) for WSI classification, which integrates a bag classifier and an instance classifier in a knowledge distillation framework to mutually improve the performance of both classifiers. Specifically, an attention-based bag classifier is used as the teacher network, which is trained with weak bag labels, and an instance classifier is used as the student network, which is trained using the normalized attention scores obtained from the teacher network as soft pseudo labels for the instances in positive bags. An instance feature extractor is shared between the teacher and the student to further enhance the knowledge exchange between them. In addition, we propose a hard positive instance mining strategy based on the output of the student network to force the teacher network to keep mining hard positive instances. WENO is a plug-and-play framework that can be easily applied to any existing attention-based bag classification methods. Extensive experiments on five datasets demonstrate the efficiency of WENO. Code is available at https://github.com/miccaiif/WENO. Sang-Hoon Lee · Seung-Bin Kim · Ji-Hyun Lee · Eunwoo Song · Min-Jae Hwang · Seong-Whan Lee This paper presents HierSpeech, a high-quality end-to-end text-to-speech (TTS) system based on a hierarchical conditional variational autoencoder (VAE) utilizing self-supervised speech representations. Recently, single-stage TTS systems, which directly generate raw speech waveform from text, have been getting interest thanks to their ability in generating high-quality audio within a fully end-to-end training pipeline. However, there is still a room for improvement in the conventional TTS systems. Since it is challenging to infer both the linguistic and acoustic attributes from the text directly, missing the details of attributes, specifically linguistic information, is inevitable, which results in mispronunciation and over-smoothing problem in their synthetic speech. To address the aforementioned problem, we leverage self-supervised speech representations as additional linguistic representations to bridge an information gap between text and speech. Then, the hierarchical conditional VAE is adopted to connect these representations and to learn each attribute hierarchically by improving the linguistic capability in latent representations. Compared with the state-of-the-art TTS system, HierSpeech achieves +0.303 comparative mean opinion score, and reduces the phoneme error rate of synthesized speech from 9.16% to 5.78% on the VCTK dataset. Furthermore, we extend our model to HierSpeech-U, an untranscribed text-to-speech system. Specifically, HierSpeech-U can adapt to a novel speaker by utilizing self-supervised speech representations without text transcripts. The experimental results reveal that our method outperforms publicly available TTS models, and show the effectiveness of speaker adaptation with untranscribed speech. Dongze Lian · Daquan Zhou · Jiashi Feng · Xinchao Wang Existing fine-tuning methods either tune all parameters of the pre-trained model (full fine-tuning), which is not efficient, or only tune the last linear layer (linear probing), which suffers a significant accuracy drop compared to the full fine-tuning. In this paper, we propose a new parameter-efficient fine-tuning method termed as SSF, representing that researchers only need to Scale and Shift the deep Features extracted by a pre-trained model to catch up with the performance of full fine-tuning. In this way, SSF also surprisingly outperforms other parameter-efficient fine-tuning approaches even with a smaller number of tunable parameters. Furthermore, different from some existing parameter-efficient fine-tuning methods (e.g., Adapter or VPT) that introduce the extra parameters and computational cost in the training and inference stages, SSF only adds learnable parameters during the training stage, and these additional parameters can be merged into the original pre-trained model weights via re-parameterization in the inference phase. With the proposed SSF, our model obtains 2.46% (90.72% vs. 88.54%) and 11.48% (73.10% vs. 65.57%) performance improvement on FGVC and VTAB-1k in terms of Top-1 accuracy compared to the full fine-tuning but only fine-tuning about 0.3M parameters. We also conduct amounts of experiments in various model families (CNNs, Transformers, and MLPs) and datasets. Results on 26 image classification datasets in total and 3 robustness & out-of-distribution datasets show the effectiveness of SSF. Code is available at https:// Miao Zhang · Li Wang · David Campos · Wei Huang · Chenjuan Guo · Bin Yang Online distillation attracts attention from the community as it simplifies the traditional two-stage knowledge distillation process into a single stage. Online distillation collaboratively trains a group of peer models, which are treated as students, and all students gain extra knowledge from each other. However, memory consumption and diversity among peers are two key challenges to the scalability and quality of online distillation. To address the two challenges, this paper presents a framework called Weighted Mutual Learning with Diversity-Driven Model Compression (WML) for online distillation. First, at the base of a hierarchical structure where peers share different parts, we leverage the structured network pruning to generate diversified peer models and reduce the memory requirements. Second, rather than taking the average of peers, this paper, for the first time, leverages a bi-level formulation to estimate the relative importance of peers with a close-form, to further boost the effectiveness of the distillation from each other. Extensive experiments show the generalization of the proposed framework, which outperforms existing online distillation methods on a variety of deep neural networks. More interesting, as a byproduct, \WML produces a series of pruned models under different model sizes in a single run, which also achieves competitive results compared with existing channel pruning methods. Lisha Chen · Songtao Lu · Tianyi Chen Meta learning has demonstrated tremendous success in few-shot learning with limited supervised data. In those settings, the meta model is usually overparameterized. While the conventional statistical learning theory suggests that overparameterized models tend to overfit, empirical evidence reveals that overparameterized meta learning methods still work well -- a phenomenon often called ``benign overfitting.'' To understand this phenomenon, we focus on the meta learning settings with a challenging bilevel structure that we term the gradient-based meta learning, and analyze its generalization performance under an overparameterized meta linear regression model. While our analysis uses the relatively tractable linear models, our theory contributes to understanding the delicate interplay among data heterogeneity, model adaptation and benign overfitting in gradient-based meta learning tasks. We corroborate our theoretical claims through numerical simulations. Tian Yu Liu · Yu Yang · Baharan Mirzasoleiman A powerful category of (invisible) data poisoning attacks modify a subset of training examples by small adversarial perturbations to change the prediction of certain test-time data. Existing defense mechanisms are not desirable to deploy in practice, as they ofteneither drastically harm the generalization performance, or are attack-specific, and prohibitively slow to apply. Here, we propose a simple but highly effective approach that unlike existing methods breaks various types of invisible poisoning attacks with the slightest drop in the generalization performance. We make the key observation that attacks introduce local sharp regions of high training loss, which when minimized, results in learning the adversarial perturbations and makes the attack successful. To break poisoning attacks, our key idea is to alleviate the sharp loss regions introduced by poisons. To do so, our approach comprises two components: an optimized friendly noise that is generated to maximally perturb examples without degrading the performance, and a randomly varying noise component. The combination of both components builds a very light-weight but extremely effective defense against the most powerful triggerless targeted and hidden-trigger backdoor poisoning attacks, including Gradient Matching, Bulls-eye Polytope, and Sleeper Agent. We show that our friendly noise is transferable to other architectures, and adaptive attacks cannot break our defense due to its random noise component. Tabular data (or tables) are the most widely used data format in machine learning (ML). However, ML models often assume the table structure keeps fixed in training and testing. Before ML modeling, heavy data cleaning is required to merge disparate tables with different columns. This preprocessing often incurs significant data waste (e.g., removing unmatched columns and samples). How to learn ML models from multiple tables with partially overlapping columns? How to incrementally update ML models as more columns become available over time? Can we leverage model pretraining on multiple distinct tables? How to train an ML model which can predict on an unseen table? To answer all those questions, we propose to relax fixed table structures by introducing a Transferable Tabular Transformer (TransTab) for tables. The goal of TransTab is to convert each sample (a row in the table) to a generalizable embedding vector, and then apply stacked transformers for feature encoding. One methodology insight is combining column description and table cells as the raw input to a gated transformer model. The other insight is to introduce supervised and self-supervised pretraining to improve model performance. We compare TransTab with multiple baseline methods on diverse benchmark datasets and five oncology clinical trial datasets. Overall, TransTab ranks 1.00, 1.00, 1.78 out of 12 methods in supervised learning, incremental feature learning, and transfer learning scenarios, respectively; and the proposed pretraining leads to 2.3\% AUC lift on average over the supervised Yue Tan · Guodong Long · Jie Ma · LU LIU · Tianyi Zhou · Jing Jiang Federated Learning (FL) is a machine learning paradigm that allows decentralized clients to learn collaboratively without sharing their private data. However, excessive computation and communication demands pose challenges to current FL frameworks, especially when training large-scale models. To prevent these issues from hindering the deployment of FL systems, we propose a lightweight framework where clients jointly learn to fuse the representations generated by multiple fixed pre-trained models rather than training a large-scale model from scratch. This leads us to a more practical FL problem by considering how to capture more client-specific and class-relevant information from the pre-trained models and jointly improve each client's ability to exploit those off-the-shelf models. Here, we design a Federated Prototype-wise Contrastive Learning (FedPCL) approach which shares knowledge across clients through their class prototypes and builds client-specific representations in a prototype-wise contrastive manner. Sharing prototypes rather than learnable model parameters allows each client to fuse the representations in a personalized way while keeping the shared knowledge in a compact form for efficient communication. We perform a thorough evaluation of the proposed FedPCL in the lightweight framework, measuring and visualizing its ability to fuse various pre-trained models on popular FL datasets. Shiyu Wang · Xiaojie Guo · Xuanyang Lin · Bo Pan · Yuanqi Du · Yinkai Wang · Yanfang Ye · Ashley Petersen · Austin Leitgeb · Saleh Alkhalifa · Kevin Minbiole · William M. Wuest · Amarda Shehu · Liang Developing deep generative models has been an emerging field due to the ability to model and generate complex data for various purposes, such as image synthesis and molecular design. However, the advance of deep generative models is limited by the challenges to generate objects that possess multiple desired properties because: 1) the existence of complex correlation among real-world properties is common but hard to identify; 2) controlling individual property enforces an implicit partially control of its correlated properties, which is difficult to model; 3) controlling multiple properties under variour manners simultaneously is hard and underexplored. We address these challenges by proposing a novel deep generative framework that recovers semantics and correlation of properties through disentangled latent vectors. The correlation is handled via an explainable mask pooling layer, and properties are precisely retained by the generated objects via the mutual dependence between latent vectors and properties. Our generative model preserves properties of interest while handles correlation and conflicts of properties under a multi-objective optimization framework. The experiments demonstrate our model's superior performance in generating objects with desired properties. Ruijie Wang · Zheng Li · Dachun Sun · Shengzhong Liu · Jinning Li · Bing Yin · Tarek Abdelzaher In this paper, we investigate a realistic but underexplored problem, called few-shot temporal knowledge graph reasoning, that aims to predict future facts for newly emerging entities based on extremely limited observations in evolving graphs. It offers practical value in applications that need to derive instant new knowledge about new entities in temporal knowledge graphs (TKGs) with minimal supervision. The challenges mainly come from the few-shot and time shift properties of new entities. First, the limited observations associated with them are insufficient for training a model from scratch. Second, the potentially dynamic distributions from the initially observable facts to the future facts ask for explicitly modeling the evolving characteristics of new entities. We correspondingly propose a novel Meta Temporal Knowledge Graph Reasoning (MetaTKGR) framework. Unlike prior work that relies on rigid neighborhood aggregation schemes to enhance low-data entity representation, MetaTKGR dynamically adjusts the strategies of sampling and aggregating neighbors from recent facts for new entities, through temporally supervised signals on future facts as instant feedback. Besides, such a meta temporal reasoning procedure goes beyond existing meta-learning paradigms on static knowledge graphs that fail to handle temporal adaptation with large entity variance. We further provide a theoretical analysis and propose a temporal adaptation regularizer to stabilize the meta temporal reasoning over time. Empirically, extensive experiments on three real-world TKGs demonstrate the superiority of MetaTKGR over eight state-of-the-art baselines by a large margin. Ruoyu Cheng · Xianglong Lyu · Yang Li · Junjie Ye · Jianye Hao · Junchi Yan Placement and routing are two critical yet time-consuming steps of chip design in modern VLSI systems. Distinct from traditional heuristic solvers, this paper on one hand proposes an RL-based model for mixed-size macro placement, which differs from existing learning-based placers that often consider the macro by coarse grid-based mask. While the standard cells are placed via gradient-based GPU acceleration. On the other hand, a one-shot conditional generative routing model, which is composed of a special-designed input-size-adapting generator and a bi-discriminator, is devised to perform one-shot routing to the pins within each net, and the order of nets to route is adaptively learned. Combining these techniques, we develop a flexible and efficient neural pipeline, which to our best knowledge, is the first joint placement and routing network without involving any traditional heuristic solver. Experimental results on chip design benchmarks showcase the effectiveness of our approach, with code that will be made publicly available. Da JU · Stephen Roller · Sainbayar Sukhbaatar · Jason E Weston Attention mechanisms have become a standard tool for sequence modeling tasks, in particular by stacking self-attention layers over the entire input sequence as in the Transformer architecture. In this work we introduce a novel attention procedure called staircase attention that, unlike self-attention, operates across the sequence (in time) recurrently processing the input by adding another step of processing. A step in the staircase comprises of backward tokens (encoding the sequence so far seen) and forward tokens (ingesting a new part of the sequence). Thus our model can trade off performance and compute, by increasing the amount of recurrence through time and depth. Staircase attention is shown to be able to solve tasks that involve tracking that conventional Transformers cannot, due to this recurrence. Further, it is shown to provide improved modeling power for the same size model (number of parameters) compared to self-attentive Transformers on large language modeling and dialogue tasks, yielding significant perplexity gains. Naoki Hiratani · Yash Mehta · Timothy Lillicrap · Peter E Latham To survive, animals must adapt synaptic weights based on external stimuli and rewards. And they must do so using local, biologically plausible, learning rules -- a highly nontrivial constraint. One possible approach is to perturb neural activity (or use intrinsic, ongoing noise to perturb it), determine whether performance increases or decreases, and use that information to adjust the weights. This algorithm -- known as node perturbation -- has been shown to work on simple problems, but little is known about either its stability or its scalability with respect to network size. We investigate these issues both analytically, in deep linear networks, and numerically, in deep nonlinear ones.We show analytically that in deep linear networks with one hidden layer, both learning time and performance depend very weakly on hidden layer size. However, unlike stochastic gradient descent, when there is model mismatch between the student and teacher networks, node perturbation is always unstable. The instability is triggered by weight diffusion, which eventually leads to very large weights. This instability can be suppressed by weight normalization, at the cost of bias in the learning rule. We confirm numerically that a similar instability, and to a lesser extent scalability, exist in deep nonlinear networks trained on both a motor control task and image classification tasks. Our study highlights the limitations and potential of node perturbation as a biologically plausible learning rule in the brain. Abhishek Gupta · Aldo Pacchiano · Yuexiang Zhai · Sham Kakade · Sergey Levine The success of reinforcement learning in a variety of challenging sequential decision-making problems has been much discussed, but often ignored in this discussion is the consideration of how the choice of reward function affects the behavior of these algorithms. Most practical RL algorithms require copious amounts of reward engineering in order to successfully solve challenging tasks. The idea of this type of ``reward-shaping'' has been often discussed in the literature and is used in practical instantiations, but there is relatively little formal characterization of how the choice of reward shaping can yield benefits in sample complexity for RL problems. In this work, we build on the framework of novelty-based exploration to provide a simple scheme for incorporating shaped rewards into RL along with an analysis tool to show that particular choices of reward shaping provably improve sample efficiency. We characterize the class of problems where these gains are expected to be significant and show how this can be connected to practical algorithms in the literature. We show that these results hold in practice in experimental evaluations as well, providing an insight into the mechanisms through which reward shaping can significantly improve the complexity of reinforcement learning while retaining asymptotic performance. Kwangho Kim · Edward Kennedy · Jose Zubizarreta We study counterfactual classification as a new tool for decision-making under hypothetical (contrary to fact) scenarios. We propose a doubly-robust nonparametric estimator for a general counterfactual classifier, where we can incorporate flexible constraints by casting the classification problem as a nonlinear mathematical program involving counterfactuals. We go on to analyze the rates of convergence of the estimator and provide a closed-form expression for its asymptotic distribution. Our analysis shows that the proposed estimator is robust against nuisance model misspecification, and can attain fast $\sqrt{n}$ rates with tractable inference even when using nonparametric machine learning approaches. We study the empirical performance of our methods by simulation and apply them for recidivism risk prediction. Xuan Li · Yadi Cao · Minchen Li · Yin Yang · Craig Schroeder · Chenfanfu Jiang In this paper, we propose a neural network-based approach for learning to represent the behavior of plastic solid materials ranging from rubber and metal to sand and snow. Unlike elastic forces such as spring forces, these plastic forces do not result from the positional gradient of any potential energy, imposing great challenges on the stability and flexibility of their simulation. Our method effectively resolves this issue by learning a generalizable plastic energy whose derivative closely matches the analytical behavior of plastic forces. Our method, for the first time, enables the simulation of a wide range of arbitrary elasticity-plasticity combinations using time step-independent, unconditionally stable optimization-based time integrators. We demonstrate the efficacy of our method by learning and producing challenging 2D and 3D effects of metal, sand, and snow with complex dynamics. Abir De · Soumen Chakrabarti Submodular functions and variants, through their ability to characterize diversity and coverage, have emerged as a key tool for data selection and summarization. Many recent approaches to learn submodular functions suffer from limited expressiveness. In this work, we propose FlexSubNet, a family of flexible neural models for both monotone and non-monotone submodular functions. To fit a latent submodular function from (set, value) observations, our method applies a concave function on modular functions in a recursive manner. We do not draw the concave function from a restricted family, but rather learn from data using a highly expressive neural network that implements a differentiable quadrature procedure. Such an expressive neural model for concave functions may be of independent interest. Next, we extend this setup to provide a novel characterization of monotone $\alpha$-submodular functions, a recently introduced notion of approximate submodular functions. We then use this characterization to design a novel neural model for such functions. Finally, we consider learning submodular set functions under distant supervision in the form of (perimeter, high-value-subset) pairs. This yields a novel subset selection method based on an order-invariant, yet greedy sampler built around the above neural set functions. Our experiments on synthetic and real data show that FlexSubNet outperforms several baselines. Shengpu Tang · Maggie Makar · Michael Sjoding · Finale Doshi-Velez · Jenna Wiens Many reinforcement learning (RL) applications have combinatorial action spaces, where each action is a composition of sub-actions. A standard RL approach ignores this inherent factorization structure, resulting in a potential failure to make meaningful inferences about rarely observed sub-action combinations; this is particularly problematic for offline settings, where data may be limited. In this work, we propose a form of linear Q-function decomposition induced by factored action spaces. We study the theoretical properties of our approach, identifying scenarios where it is guaranteed to lead to zero bias when used to approximate the Q-function. Outside the regimes with theoretical guarantees, we show that our approach can still be useful because it leads to better sample efficiency without necessarily sacrificing policy optimality, allowing us to achieve a better bias-variance trade-off. Across several offline RL problems using simulators and real-world datasets motivated by healthcare, we demonstrate that incorporating factored action spaces into value-based RL can result in better-performing policies. Our approach can help an agent make more accurate inferences within underexplored regions of the state-action space when applying RL to observational datasets. Lara Scavuzzo · Feng Chen · Didier Chetelat · Maxime Gasse · Andrea Lodi · Neil Yorke-Smith · Karen Aardal State-of-the-art Mixed Integer Linear Programming (MILP) solvers combine systematic tree search with a plethora of hard-coded heuristics, such as branching rules. While approaches to learn branching strategies have received increasing attention and have shown very promising results, most of the literature focuses on learning fast approximations of the \emph{strong branching} rule. Instead, we propose to learn branching rules from scratch with Reinforcement Learning (RL). We revisit the work of Etheve et al. (2020) and propose a generalization of Markov Decisions Processes (MDP), which we call \emph{tree MDP}, that provides a more suitable formulation of the branching problem. We derive a policy gradient theorem for tree MDPs that exhibits a better credit assignment compared to its temporal counterpart. We demonstrate through computational experiments that this new framework is suitable to tackle the learning-to-branch problem in MILP, and improves the learning convergence. Sejun Park · Umut Simsekli · Murat Erdogdu In this paper, we propose a new covering technique localized for the trajectories of SGD. This localization provides an algorithm-specific complexity measured by the covering number, which can have dimension-independent cardinality in contrast to standard uniform covering arguments that result in exponential dimension dependency. Based on this localized construction, we show that if the objective function is a finite perturbation of a piecewise strongly convex and smooth function with $P$ pieces, i.e., non-convex and non-smooth in general, the generalization error can be upper bounded by $O(\sqrt{(\log n\log(nP))/n})$, where $n$ is the number of data samples. In particular, this rate is independent of dimension and does not require early stopping and decaying step size. Finally, we employ these results in various contexts and derive generalization bounds for multi-index linear models, multi-class support vector machines, and $K$-means clustering for both hard and soft label setups, improving the previously known state-of-the-art rates. Liangzu Peng · Christian Kümmerle · Rene Vidal We advance both the theory and practice of robust $\ell_p$-quasinorm regression for $p \in (0,1]$ by using novel variants of iteratively reweighted least-squares (IRLS) to solve the underlying non-smooth problem. In the convex case, $p=1$, we prove that this IRLS variant converges globally at a linear rate under a mild, deterministic condition on the feature matrix called the stable range space property. In the non-convex case, $p\in(0,1)$, we prove that under a similar condition, IRLS converges locally to the global minimizer at a superlinear rate of order $2-p$; the rate becomes quadratic as $p\to 0$. We showcase the proposed methods in three applications: real phase retrieval, regression without correspondences, and robust face restoration. The results show that (1) IRLS can handle a larger number of outliers than other methods, (2) it is faster than competing methods at the same level of accuracy, (3) it restores a sparsely corrupted face image with satisfactory visual quality. Matteo Castiglioni · Andrea Celli · Alberto Marchesi · Giulia Romano · Nicola Gatti We study online learning problems in which a decision maker has to take a sequence of decisions subject to $m$ long-term constraints. The goal of the decision maker is to maximize their total reward, while at the same time achieving small cumulative constraints violations across the $T$ rounds. We present the first best-of-both-world type algorithm for this general class of problems, with no-regret guarantees both in the case in which rewards and constraints are selected according to an unknown stochastic model, and in the case in which they are selected at each round by an adversary. Our algorithm is the first to provide guarantees in the adversarial setting with respect to the optimal fixed strategy that satisfies the long-term constraints. In particular, it guarantees a $\rho/ (1+\rho)$ fraction of the optimal utility and sublinear regret, where $\rho$ is a feasibility parameter related to the existence of strictly feasible solutions. Our framework employs traditional regret minimizers as black-box components. Therefore, by instantiating it with an appropriate choice of regret minimizers it can handle both the full-feedback as well as the bandit-feedback setting. Moreover, it allows the decision maker to seamlessly handle scenarios with non-convex reward and constraints. We show how our framework may be applied in the context of budget-management mechanisms for repeated auctions in order to guarantee long-term constraints which are not packing (e.g., ROI constraints). Omar Besbes · Will Ma · Omar Mouchtaki In this work, we study data-driven decision-making and depart from the classical identically and independently distributed (i.i.d.) assumption. We present a new framework in which historical samples are generated from unknown and different distributions, which we dub \textit{heterogeneous environments}. These distributions are assumed to lie in a heterogeneity ball with known radius and centered around the (also) unknown future (out-of-sample) distribution on which the performance of a decision will be evaluated. We quantify the asymptotic worst-case regret that is achievable by central data-driven policies such as Sample Average Approximation, but also by rate-optimal ones, as a function of the radius of the heterogeneity ball. Our work shows that the type of achievable performance varies considerably across different combinations of problem classes and notions of heterogeneity. We demonstrate the versatility of our framework by comparing achievable guarantees for the heterogeneous version of widely studied data-driven problems such as pricing, ski-rental, and newsvendor. En route, we establish a new connection between data-driven decision-making and distributionally robust optimization. Recent research in the theory of overparametrized learning has sought to establish generalization guarantees in the interpolating regime. Such results have been established for a few common classes of methods, but so far not for ensemble methods. We devise an ensemble classification method that simultaneously interpolates the training data, and is consistent for a broad class of data distributions. To this end, we define the manifold-Hilbert kernel for data distributed on a Riemannian manifold. We prove that kernel smoothing regression using the manifold-Hilbert kernel is weakly consistent in the setting of Devroye et al. 1998. For the sphere, we show that the manifold-Hilbert kernel can be realized as a weighted random partition kernel, which arises as an infinite ensemble of partition-based classifiers. Steven Yin · Christian Kroer We consider the problem of allocating a distribution of items to $n$ recipients where each recipient has to be allocated a fixed, pre-specified fraction of all items, while ensuring that each recipient does not experience too much envy. We show that this problem can be formulated as a variant of the semi-discrete optimal transport (OT) problem, whose solution structure in this case has a concise representation and a simple geometric interpretation. Unlike existing literature that treats envy-freeness as a hard constraint, our formulation allows us to \emph{optimally} trade off efficiency and envy continuously. Additionally, we study the statistical properties of the space of our OT based allocation policies by showing a polynomial bound on the number of samples needed to approximate the optimal solution from samples. Our approach is suitable for large-scale fair allocation problems such as the blood donation matching problem, and we show numerically that it performs well on a prior realistic data simulator. Jinglin Chen · Aditya Modi · Akshay Krishnamurthy · Nan Jiang · Alekh Agarwal We study reward-free reinforcement learning (RL) under general non-linear function approximation, and establish sample efficiency and hardness results under various standard structural assumptions. On the positive side, we propose the RFOLIVE (Reward-Free OLIVE) algorithm for sample-efficient reward-free exploration under minimal structural assumptions, which covers the previously studied settings of linear MDPs (Jin et al., 2020b), linear completeness (Zanette et al., 2020b) and low-rank MDPs with unknown representation (Modi et al., 2021). Our analyses indicate that the explorability or reachability assumptions, previously made for the latter two settings, are not necessary statistically for reward-free exploration. On the negative side, we provide a statistical hardness result for both reward-free and reward-aware exploration under linear completeness assumptions when the underlying features are unknown, showing an exponential separation between low-rank and linear completeness settings. Lucas Monteiro Paes · Carol Long · Berk Ustun · Flavio Calmon Machine learning models are often personalized by using group attributes that encode personal characteristics (e.g., sex, age group, HIV status). In such settings, individuals expect to receive more accurate predictions in return for disclosing group attributes to the personalized model. We study when we can tell that a personalized model upholds this principle for every group who provides personal data. We introduce a metric called the benefit of personalization (BoP) to measure the smallest gain in accuracy that any group expects to receive from a personalized model. We describe how the BoP can be used to carry out basic routines to audit a personalized model, including: (i) hypothesis tests to check that a personalized model improves performance for every group; (ii) estimation procedures to bound the minimum gain in personalization. We characterize the reliability of these routines in a finite-sample regime and present minimax bounds on both the probability of error for BoP hypothesis tests and the mean-squared error of BoP estimates. Our results show that we can only claim that personalization improves performance for each group who provides data when we explicitly limit the number of group attributes used by a personalized model. In particular, we show that it is impossible to reliably verify that a personalized classifier with $k \geq 19$ binary group attributes will benefit every group who provides personal data using a dataset of $n = 8\times10^9$ samples -- one for each person in the world. Daniil Tiapkin · Denis Belomestny · Daniele Calandriello · Eric Moulines · Remi Munos · Alexey Naumov · Mark Rowland · Michal Valko · Pierre Ménard We consider reinforcement learning in an environment modeled by an episodic, tabular, step-dependent Markov decision process of horizon $H$ with $S$ states, and $A$ actions. The performance of an agent is measured by the regret after interacting with the environment for $T$ episodes. We propose an optimistic posterior sampling algorithm for reinforcement learning (OPSRL), a simple variant of posterior sampling that only needs a number of posterior samples logarithmic in $H$, $S$, $A$, and $T$ per state-action pair. For OPSRL we guarantee a high-probability regret bound of order at most $O(\sqrt{H^3SAT})$ ignoring $\text{poly}\log(HSAT)$ terms. The key novel technical ingredient is a new sharp anti-concentration inequality for linear forms of a Dirichlet random vector which may be of independent interest. Specifically, we extend the normal approximation-based lower bound for Beta distributions by Alfers and Dinges (1984) to Dirichlet distributions. Our bound matches the lower bound of order $\Omega(\sqrt{H^3SAT})$, thereby answering the open problems raised by Agrawal and Jia (2017) for the episodic setting. Pranjal Awasthi · Anqi Mao · Mehryar Mohri · Yutao Zhong We present an extensive study of $H$-consistency bounds for multi-class classification. These are upper bounds on the target loss estimation error of a predictor in a hypothesis set $H$, expressed in terms of the surrogate loss estimation error of that predictor. They are stronger and more significant guarantees than Bayes-consistency, $H$-calibration or $H$-consistency, and more informative than excess error bounds derived for $H$ being the family of all measurable functions. We give a series of new $H$-consistency bounds for surrogate multi-class losses, including max losses, sum losses, and constrained losses, both in the non-adversarial and adversarial cases, and for different differentiable or convex auxiliary functions used. We also prove that no non-trivial $H$-consistency bound can be given in some cases. To our knowledge, these are the first $H$-consistency bounds proven for the multi-class setting. Our proof techniques are also novel and likely to be useful in the analysis of other such guarantees. Qizhao Chen · Vasilis Syrgkanis · Morgane Austern Estimation and inference on causal parameters is typically reduced to a generalized method of moments problem, which involves auxiliary functions that correspond to solutions to a regression or classification problem. Recent line of work on debiased machine learning shows how one can use generic machine learning estimators for these auxiliary problems, while maintaining asymptotic normality and root-$n$ consistency of the target parameter of interest, while only requiring mean-squared-error guarantees from the auxiliary estimation algorithms. The literature typically requires that these auxiliary problems are fitted on a separate sample or in a cross-fitting manner. We show that when these auxiliary estimation algorithms satisfy natural leave-one-out stability properties, then sample splitting is not required. This allows for sample re-use, which can be beneficial in moderately sized sample regimes. For instance, we show that the stability properties that we propose are satisfied for ensemble bagged estimators, built via sub-sampling without replacement, a popular technique in machine learning practice. Gal Vardi · Ohad Shamir · Nati Srebro We study norm-based uniform convergence bounds for neural networks, aiming at a tight understanding of how these are affected by the architecture and type of norm constraint, for the simple class of scalar-valued one-hidden-layer networks, and inputs bounded in Euclidean norm. We begin by proving that in general, controlling the spectral norm of the hidden layer weight matrix is insufficient to get uniform convergence guarantees (independent of the network width), while a stronger Frobenius norm control is sufficient, extending and improving on previous work. Motivated by the proof constructions, we identify and analyze two important settings where (perhaps surprisingly) a mere spectral norm control turns out to be sufficient: First, when the network's activation functions are sufficiently smooth (with the result extending to deeper networks); and second, for certain types of convolutional networks. In the latter setting, we study how the sample complexity is additionally affected by parameters such as the amount of overlap between patches and the overall number of patches. Saurabh Sihag · Gonzalo Mateos · Corey McMillan · Alejandro Ribeiro Graph neural networks (GNN) are an effective framework that exploit inter-relationships within graph-structured data for learning. Principal component analysis (PCA) involves the projection of data on the eigenspace of the covariance matrix and draws similarities with the graph convolutional filters in GNNs. Motivated by this observation, we study a GNN architecture, called coVariance neural network (VNN), that operates on sample covariance matrices as graphs. We theoretically establish the stability of VNNs to perturbations in the covariance matrix, thus, implying an advantage over standard PCA-based data analysis approaches that are prone to instability due to principal components associated with close eigenvalues. Our experiments on real-world datasets validate our theoretical results and show that VNN performance is indeed more stable than PCA-based statistical approaches. Moreover, our experiments on multi-resolution datasets also demonstrate that VNNs are amenable to transferability of performance over covariance matrices of different dimensions; a feature that is infeasible for PCA-based approaches. Dennis Wei · Rahul Nair · Amit Dhurandhar · Kush Varshney · Elizabeth Daly · Moninder Singh Interpretable and explainable machine learning has seen a recent surge of interest. We focus on safety as a key motivation behind the surge and make the relationship between interpretability and safety more quantitative. Toward assessing safety, we introduce the concept of maximum deviation via an optimization problem to find the largest deviation of a supervised learning model from a reference model regarded as safe. We then show how interpretability facilitates this safety assessment. For models including decision trees, generalized linear and additive models, the maximum deviation can be computed exactly and efficiently. For tree ensembles, which are not regarded as interpretable, discrete optimization techniques can still provide informative bounds. For a broader class of piecewise Lipschitz functions, we leverage the multi-armed bandit literature to show that interpretability produces tighter (regret) bounds on the maximum deviation. We present case studies, including one on mortgage approval, to illustrate our methods and the insights about models that may be obtained from deviation maximization. Tao Liu · P. R. Kumar · Ruida Zhou · Xi Liu Motivated by the problem of learning with small sample sizes, this paper shows how to incorporate into support-vector machines (SVMs) those properties that have made convolutional neural networks (CNNs) successful. Particularly important is the ability to incorporate domain knowledge of invariances, e.g., translational invariance of images. Kernels based on the \textit{maximum} similarity over a group of transformations are not generally positive definite. Perhaps it is for this reason that they have not been studied theoretically. We address this lacuna and show that positive definiteness indeed holds \textit{with high probability} for kernels based on the maximum similarity in the small training sample set regime of interest, and that they do yield the best results in that regime. We also show how additional properties such as their ability to incorporate local features at multiple spatial scales, e.g., as done in CNNs through max pooling, and to provide the benefits of composition through the architecture of multiple layers, can also be embedded into SVMs. We verify through experiments on widely available image sets that the resulting SVMs do provide superior accuracy in comparison to well-established deep neural network benchmarks for small sample sizes. Yifan Yang · Yang Liu · Parinaz Naghizadeh Biases in existing datasets used to train algorithmic decision rules can raise ethical and economic concerns due to the resulting disparate treatment of different groups. We propose an algorithm for sequentially debiasing such datasets through adaptive and bounded exploration in a classification problem with costly and censored feedback. Exploration in this context means that at times, and to a judiciously-chosen extent, the decision maker deviates from its (current) loss-minimizing rule, and instead accepts some individuals that would otherwise be rejected, so as to reduce statistical data biases. Our proposed algorithm includes parameters that can be used to balance between the ultimate goal of removing data biases -- which will in turn lead to more accurate and fair decisions, and the exploration risks incurred to achieve this goal. We analytically show that such exploration can help debias data in certain distributions. We further investigate how fairness criteria can work in conjunction with our data debiasing algorithm. We illustrate the performance of our algorithm using experiments on synthetic and real-world datasets. Aivar Sootla · Alexander Cowen-Rivers · Jun Wang · Haitham Bou Ammar Safe exploration is a challenging and important problem in model-free reinforcement learning (RL). Often the safety cost is sparse and unknown, which unavoidably leads to constraint violations - a phenomenon ideally to be avoided in safety-critical applications. We tackle this problem by augmenting the state-space with a safety state, which is nonnegative if and only if the constraint is satisfied. The value of this state also serves as a distance toward constraint violation, while its initial value indicates the available safety budget. This idea allows us to derive policies for scheduling the safety budget during training. We call our approach Simmer (Safe policy IMproveMEnt for RL) to reflect the careful nature of these schedules. We apply this idea to two safe RL problems: RL with constraints imposed on an average cost, and RL with constraints imposed on a cost with probability one. Our experiments suggest that "simmering" a safe algorithm can improve safety during training for both settings. We further show that Simmer can stabilize training and improve the performance of safe RL with average constraints. Wei Zhang · Yanjun Han · Zhengyuan Zhou · Aaron Flores · Tsachy Weissman With the advent and increasing consolidation of e-commerce, digital advertising has very recently replaced traditional advertising as the main marketing force in the economy. In the past four years, a particularly important development in the digital advertising industry is the shift from second-price auctions to first-price auctions for online display ads. This shift immediately motivated the intellectually challenging question of how to bid in first-price auctions, because unlike in second-price auctions, bidding one's private value truthfully is no longer optimal. Following a series of recent works in this area, we consider a differentiated setup: we do not make any assumption about other bidders' maximum bid (i.e. it can be adversarial over time), and instead assume that we have access to a hint that serves as a prediction of other bidders' maximum bid, where the prediction is learned through some blackbox machine learning model. We consider two types of hints: one where a single point-prediction is available, and the other where a hint interval (representing a type of confidence region into which others' maximum bid falls) is available. We establish minimax optimal regret bounds for both cases and highlight the quantitatively different behavior between the two settings. We also provide improved regret bounds when the others' maximum bid exhibits the further structure of sparsity. Finally, we complement the theoretical results with demonstrations using real bidding data. Yash Chandak · Shiv Shankar · Nathaniel Bastian · Bruno da Silva · Emma Brunskill · Philip Thomas Methods for sequential decision-making are often built upon a foundational assumption that the underlying decision process is stationary. This limits the application of such methods because real-world problems are often subject to changes due to external factors (\textit{passive} non-stationarity), changes induced by interactions with the system itself (\textit{active} non-stationarity), or both (\textit{hybrid} non-stationarity). In this work, we take the first steps towards the fundamental challenge of on-policy and off-policy evaluation amidst structured changes due to active, passive, or hybrid non-stationarity. Towards this goal, we make a \textit{higher-order stationarity} assumption such that non-stationarity results in changes over time, but the way changes happen is fixed. We propose, OPEN, an algorithm that uses a double application of counterfactual reasoning and a novel importance-weighted instrument-variable regression to obtain both a lower bias and a lower variance estimate of the structure in the changes of a policy's past performances. Finally, we show promising results on how OPEN can be used to predict future performances for several domains inspired by real-world applications that exhibit non-stationarity. Xinmeng Huang · Yiming Chen · Wotao Yin · Kun Yuan Recent advances in distributed optimization and learning have shown that communication compression is one of the most effective means of reducing communication. While there have been many results for convergence rates with compressed communication, a lower bound is still missing.Analyses of algorithms with communication compression have identified two abstract properties that guarantee convergence: the unbiased property or the contractive property. They can be applied either unidirectionally (compressing messages from worker to server) or bidirectionally. In the smooth and non-convex stochastic regime, this paper establishes a lower bound for distributed algorithms whether using unbiased or contractive compressors in unidirection or bidirection. To close the gap between this lower bound and the best existing upper bound, we further propose an algorithm, NEOLITHIC, that almost reaches our lower bound (except for a logarithm factor) under mild conditions. Our results also show that using contractive compressors in bidirection can yield iterative methods that converge as fast as those using unbiased compressors unidirectionally. We report experimental results that validate our findings. Mathieu Molina · Patrick Loiseau Discrimination in machine learning often arises along multiple dimensions (a.k.a. protected attributes); it is then desirable to ensure \emph{intersectional fairness}---i.e., that no subgroup is discriminated against. It is known that ensuring \emph{marginal fairness} for every dimension independently is not sufficient in general. Due to the exponential number of subgroups, however, directly measuring intersectional fairness from data is impossible. In this paper, our primary goal is to understand in detail the relationship between marginal and intersectional fairness through statistical analysis. We first identify a set of sufficient conditions under which an exact relationship can be obtained. Then, we prove bounds (easily computable through marginal fairness and other meaningful statistical quantities) in high-probability on intersectional fairness in the general case. Beyond their descriptive value, we show that these theoretical bounds can be leveraged to derive a heuristic improving the approximation and bounds of intersectional fairness by choosing, in a relevant manner, protected attributes for which we describe intersectional subgroups. Finally, we test the performance of our approximations and bounds on real and synthetic data-sets. Justin Whitehouse · Aaditya Ramdas · Steven Wu · Ryan Rogers There is a disconnect between how researchers and practitioners handle privacy-utility tradeoffs. Researchers primarily operate from a privacy first perspective, setting strict privacy requirements and minimizing risk subject to these constraints. Practitioners often desire an accuracy first perspective, possibly satisfied with the greatest privacy they can get subject to obtaining sufficiently small error. Ligett et al. have introduced a `"noise reduction" algorithm to address the latter perspective. The authors show that by adding correlated Laplace noise and progressively reducing it on demand, it is possible to produce a sequence of increasingly accurate estimates of a private parameter and only pay a privacy cost for the least noisy iterate released. In this work, we generalize noise reduction to the setting of Gaussian noise, introducing the Brownian mechanism. The Brownian mechanism works by first adding Gaussian noise of high variance corresponding to the final point of a simulated Brownian motion. Then, at the practitioner's discretion, noise is gradually decreased by tracing back along the Brownian path to an earlier time. Our mechanism is more naturally applicable to the common setting of bounded $\ell_2$-sensitivity, empirically outperforms existing work on common statistical tasks, and provides customizable control of privacy loss over the entire interaction with the practitioner. We complement our Brownian mechanism with ReducedAboveThreshold, a generalization of the classical AboveThreshold algorithm that provides adaptive privacy guarantees. Overall, our results demonstrate that one can meet utility constraints while still maintaining strong levels of privacy. Konstantin Mishchenko · Francis Bach · Mathieu Even · Blake Woodworth The existing analysis of asynchronous stochastic gradient descent (SGD) degrades dramatically when any delay is large, giving the impression that performance depends primarily on the delay. On the contrary, we prove much better guarantees for the same asynchronous SGD algorithm regardless of the delays in the gradients, depending instead just on the number of parallel devices used to implement the algorithm. Our guarantees are strictly better than the existing analyses, and we also argue that asynchronous SGD outperforms synchronous minibatch SGD in the settings we consider. For our analysis, we introduce a novel recursion based on ``virtual iterates'' and delay-adaptive stepsizes, which allow us to derive state-of-the-art guarantees for both convex and non-convex objectives. Ibrahim Alabdulmohsin · Jessica Schrouff · Sanmi Koyejo We propose a novel reduction-to-binary (R2B) approach that enforces demographic parity for multiclass classification with non-binary sensitive attributes via a reduction to a sequence of binary debiasing tasks. We prove that R2B satisfies optimality and bias guarantees and demonstrate empirically that it can lead to an improvement over two baselines: (1) treating multiclass problems as multi-label by debiasing labels independently and (2) transforming the features instead of the labels. Surprisingly, we also demonstrate that independent label debiasing yields competitive results in most (but not all) settings. We validate these conclusions on synthetic and real-world datasets from social science, computer vision, and healthcare. Harikrishna Narasimhan · Wittawat Jitkrittum · Aditya Menon · Ankit Rawat · Sanjiv Kumar Many practical settings allow a learner to defer predictions to one or more costly experts. For example, the learning to defer paradigm allows a learner to defer to a human expert, at some monetary cost. Similarly, the adaptive inference paradigm allows a base model to defer to one or more large models, at some computational cost. The goal in these settings is to learn classification and deferral mechanisms to optimise a suitable accuracy-cost tradeoff. To achieve this, a central issue studied in prior work is the design of a coherent loss function for both mechanisms. In this work, we demonstrate that existing losses have two subtle limitations: they can encourage underfitting when there is a high cost of deferring, and the deferral function can have a weak dependence on the base model predictions. To resolve these issues, we propose a post-hoc training scheme: we train a deferral function on top of a base model, with the objective of predicting to defer when the base model's error probability exceeds the cost of the expert model. This may be viewed as applying a partial surrogate to the ideal deferral loss, which can lead to a tighter approximation and thus better performance. Empirically, we verify the efficacy of post-hoc training on benchmarks for learning to defer and adaptive inference. James Harrison · Luke Metz · Jascha Sohl-Dickstein Learned optimizers---neural networks that are trained to act as optimizers---have the potential to dramatically accelerate training of machine learning models. However, even when meta-trained across thousands of tasks at huge computational expense, blackbox learned optimizers often struggle with stability and generalization when applied to tasks unlike those in their meta-training set. In this paper, we use tools from dynamical systems to investigate the inductive biases and stability properties of optimization algorithms, and apply the resulting insights to designing inductive biases for blackbox optimizers. Our investigation begins with a noisy quadratic model, where we characterize conditions in which optimization is stable, in terms of eigenvalues of the training dynamics. We then introduce simple modifications to a learned optimizer's architecture and meta-training procedure which lead to improved stability, and improve the optimizer's inductive bias. We apply the resulting learned optimizer to a variety of neural network training tasks, where it outperforms the current state of the art learned optimizer---at matched optimizer computational overhead---with regard to optimization performance and meta-training speed, and is capable of generalization to tasks far different from those it was meta-trained on. Jingfeng Wu · Difan Zou · Vladimir Braverman · Quanquan Gu · Sham Kakade We study linear regression under covariate shift, where the marginal distribution over the input covariates differs in the source and the target domains, while the conditional distribution of the output given the input covariates is similar across the two domains. We investigate a transfer learning approach with pretraining on the source data and finetuning based on the target data (both conducted by online SGD) for this problem. We establish sharp instance-dependent excess risk upper and lower bounds for this approach. Our bounds suggest that for a large class of linear regression instances, transfer learning with $O(N^2)$ source data (and scarce or no target data) is as effective as supervised learning with $N$ target data. In addition, we show that finetuning, even with only a small amount of target data, could drastically reduce the amount of source data required by pretraining. Our theory sheds light on the effectiveness and limitation of pretraining as well as the benefits of finetuning for tackling covariate shift problems. Edoardo Cetin · Oya Celiktutan We introduce a new framework that performs decision-making in reinforcement learning (RL) as an iterative reasoning process. We model agent behavior as the steady-state distribution of a parameterized reasoning Markov chain (RMC), optimized with a new tractable estimate of the policy gradient. We perform action selection by simulating the RMC for enough reasoning steps to approach its steady-state distribution. We show our framework has several useful properties that are inherently missing from traditional RL. For instance, it allows agent behavior to approximate any continuous distribution over actions by parameterizing the RMC with a simple Gaussian transition function. Moreover, the number of reasoning steps to reach convergence can scale adaptively with the difficulty of each action selection decision and can be accelerated by re-using past solutions. Our resulting algorithm achieves state-of-the-art performance in popular Mujoco and DeepMind Control benchmarks, both for proprioceptive and pixel-based tasks. Outstanding Paper Ben Sorscher · Robert Geirhos · Shashank Shekhar · Surya Ganguli · Ari Morcos Widely observed neural scaling laws, in which error falls off as a power of the training set size, model size, or both, have driven substantial performance improvements in deep learning. However, these improvements through scaling alone require considerable costs in compute and energy. Here we focus on the scaling of error with dataset size and show how in theory we can break beyond power law scaling and potentially even reduce it to exponential scaling instead if we have access to a high-quality data pruning metric that ranks the order in which training examples should be discarded to achieve any pruned dataset size. We then test this improved scaling prediction with pruned dataset size empirically, and indeed observe better than power law scaling in practice on ResNets trained on CIFAR-10, SVHN, and ImageNet. Next, given the importance of finding high-quality pruning metrics, we perform the first large-scale benchmarking study of ten different data pruning metrics on ImageNet. We find most existing high performing metrics scale poorly to ImageNet, while the best are computationally intensive and require labels for every image. We therefore developed a new simple, cheap and scalable self-supervised pruning metric that demonstrates comparable performance to the best supervised metrics. Overall, our work suggests that the discovery of good data-pruning metrics may provide a viable path forward to substantially improved neural scaling laws, thereby reducing the resource costs of modern deep learning. Dieqiao Feng · Carla Gomes · Bart Selman Despite the success of practical solvers in various NP-complete domains such as SAT and CSP as well as using deep reinforcement learning to tackle two-player games such as Go, certain classes of PSPACE-hard planning problems have remained out of reach. Even carefully designed domain-specialized solvers can fail quickly due to the exponential search space on hard instances. Recent works that combine traditional search methods, such as best-first search and Monte Carlo tree search, with Deep Neural Networks' (DNN) heuristics have shown promising progress and can solve a significant number of hard planning instances beyond specialized solvers. To better understand why these approaches work, we studied the interplay of the policy and value networks of DNN-based best-first search on Sokoban and show the surprising effectiveness of the policy network, further enhanced by the value network, as a guiding heuristic for the search. To further understand the phenomena, we studied the cost distribution of the search algorithms and found that Sokoban instances can have heavy-tailed runtime distributions, with tails both on the left and right-hand sides. In particular, for the first time, we show the existence of \textit{left heavy tails} and propose an abstract tree model that can empirically explain the appearance of these tails. The experiments show the critical role of the policy network as a powerful heuristic guiding the search, which can lead to left heavy tails with polynomial scaling by avoiding exploring exponentially sized subtrees. Our results also demonstrate the importance of random restarts, as are widely used in traditional combinatorial solvers, for DNN-based search methods to avoid left and right heavy tails. Guiliang Liu · Yudong Luo · Oliver Schulte · Pascal Poupart A major task of sports analytics is player evaluation. Previous methods commonly measured the impact of players' actions on desirable outcomes (e.g., goals or winning) without considering the risk induced by stochastic game dynamics. In this paper, we design an uncertainty-aware Reinforcement Learning (RL) framework to learn a risk-sensitive player evaluation metric from stochastic game dynamics. To embed the risk of a player’s movements into the distribution of action-values, we model their 1) aleatoric uncertainty, which represents the intrinsic stochasticity in a sports game, and 2) epistemic uncertainty, which is due to a model's insufficient knowledge regarding Out-of-Distribution (OoD) samples. We demonstrate how a distributional Bellman operator and a feature-space density model can capture these uncertainties. Based on such uncertainty estimation, we propose a Risk-sensitive Game Impact Metric (RiGIM) that measures players' performance over a season by conditioning on a specific confidence level. Empirical evaluation, based on over 9M play-by-play ice hockey and soccer events, shows that RiGIM correlates highly with standard success measures and has a consistent risk sensitivity. Besides per-pixel accuracy, topological correctness is also crucial for the segmentation of images with fine-scale structures, e.g., satellite images and biomedical images. In this paper, by leveraging the theory of digital topology, we identify pixels in an image that are critical for topology. By focusing on these critical pixels, we propose a new \textbf{homotopy warping loss} to train deep image segmentation networks for better topological accuracy. To efficiently identify these topologically critical pixels, we propose a new algorithm exploiting the distance transform. The proposed algorithm, as well as the loss function, naturally generalize to different topological structures in both 2D and 3D settings. The proposed loss function helps deep nets achieve better performance in terms of topology-aware metrics, outperforming state-of-the-art structure/topology-aware segmentation methods. Yasmin Salehi · Dennis Giannacopoulos Correctly capturing intraoperative brain shift in image-guided neurosurgical procedures is a critical task for aligning preoperative data with intraoperative geometry for ensuring accurate surgical navigation. While the finite element method (FEM) is a proven technique to effectively approximate soft tissue deformation through biomechanical formulations, their degree of success boils down to a trade-off between accuracy and speed. To circumvent this problem, the most recent works in this domain have proposed leveraging data-driven models obtained by training various machine learning algorithms---e.g., random forests, artificial neural networks (ANNs)---with the results of finite element analysis (FEA) to speed up tissue deformation approximations by prediction. These methods, however, do not account for the structure of the finite element (FE) mesh during training that provides information on node connectivities as well as the distance between them, which can aid with approximating tissue deformation based on the proximity of force load points with the rest of the mesh nodes. Therefore, this work proposes a novel framework, PhysGNN, a data-driven model that approximates the solution of the FEM by leveraging graph neural networks (GNNs), which are capable of accounting for the mesh structural information and inductive learning over unstructured grids and complex topological structures. Empirically, we demonstrate that the proposed architecture, PhysGNN, promises accurate and fast soft tissue deformation approximations, and is competitive with the state-of-the-art (SOTA) algorithms while promising enhanced computational feasibility, therefore suitable for neurosurgical settings. Tim Brooks · Janne Hellsten · Miika Aittala · Ting-Chun Wang · Timo Aila · Jaakko Lehtinen · Ming-Yu Liu · Alexei Efros · Tero Karras We present a video generation model that accurately reproduces object motion, changes in camera viewpoint, and new content that arises over time. Existing video generation methods often fail to produce new content as a function of time while maintaining consistencies expected in real environments, such as plausible dynamics and object persistence. A common failure case is for content to never change due to over-reliance on inductive bias to provide temporal consistency, such as a single latent code that dictates content for the entire video. On the other extreme, without long-term consistency, generated videos may morph unrealistically between different scenes. To address these limitations, we prioritize the time axis by redesigning the temporal latent representation and learning long-term consistency from data by training on longer videos. We leverage a two-phase training strategy, where we separately train using longer videos at a low resolution and shorter videos at a high resolution. To evaluate the capabilities of our model, we introduce two new benchmark datasets with explicit focus on long-term temporal dynamics. Yitian Hong · Yaochu Jin · Yang Tang In cooperative multi-agent reinforcement learning, centralized training and decentralized execution (CTDE) has achieved remarkable success. Individual Global Max (IGM) decomposition, which is an important element of CTDE, measures the consistency between local and joint policies. The majority of IGM-based research focuses on how to establish this consistent relationship, but little attention has been paid to examining IGM's potential flaws. In this work, we reveal that the IGM condition is a lossy decomposition, and the error of lossy decomposition will accumulated in hypernetwork-based methods. To address the above issue, we propose to adopt an imitation learning strategy to separate the lossy decomposition from Bellman iterations, thereby avoiding error accumulation. The proposed strategy is theoretically proved and empirically verified on the StarCraft Multi-Agent Challenge benchmark problem with zero sight view. The results also confirm that the proposed method outperforms state-of-the-art IGM-based approaches. Zihan Liu · Yun Luo · Lirong Wu · Zicheng Liu · Stan Z. Li It has become cognitive inertia to employ cross-entropy loss function in classification related tasks. In the untargeted attacks on graph structure, the gradients derived from the attack objective are the attacker's basis for evaluating a perturbation scheme. Previous methods use negative cross-entropy loss as the attack objective in attacking node-level classification models. However, the suitability of the cross-entropy function for constructing the untargeted attack objective has yet been discussed in previous works. This paper argues about the previous unreasonable attack objective from the perspective of budget allocation. We demonstrate theoretically and empirically that negative cross-entropy tends to produce more significant gradients from nodes with lower confidence in the labeled classes, even if the predicted classes of these nodes have been misled. To free up these inefficient attack budgets, we propose a simple attack model for untargeted attacks on graph structure based on a novel attack objective which generates unweighted gradients on graph structures that are not affected by the node confidence. By conducting experiments in gray-box poisoning attack scenarios, we demonstrate that a reasonable budget allocation can significantly improve the effectiveness of gradient-based edge perturbations without any extra hyper-parameter. Edouard YVINEC · Arnaud Dapogny · Matthieu Cord · Kevin Bailly The leap in performance in state-of-the-art computer vision methods is attributed to the development of deep neural networks. However it often comes at a computational price which may hinder their deployment. To alleviate this limitation, structured pruning is a well known technique which consists in removing channels, neurons or filters, and is commonly applied in order to produce more compact models. In most cases, the computations to remove are selected based on a relative importance criterion. At the same time, the need for explainable predictive models has risen tremendously and motivated the development of robust attribution methods that highlight the relative importance of pixels of an input image or feature map. In this work, we discuss the limitations of existing pruning heuristics, among which magnitude and gradient-based methods. We draw inspiration from attribution methods to design a novel integrated gradient pruning criterion, in which the relevance of each neuron is defined as the integral of the gradient variation on a path towards this neuron removal. Furthermore, We propose an entwined DNN pruning and fine-tuning flowchart to better preserve DNN accuracy while removing parameters. We show through extensive validation on several datasets, architectures as well as pruning scenarios that the proposed method, dubbed SInGE, significantly outperforms existing state-of-the-art DNN pruning methods. Djordje Miladinovic · Kumar Shridhar · Kushal Jain · Max Paulus · Joachim M Buhmann · Carl Allen In principle, applying variational autoencoders (VAEs) to sequential data offers a method for controlled sequence generation, manipulation, and structured representation learning. However, training sequence VAEs is challenging: autoregressive decoders can often explain the data without utilizing the latent space, known as posterior collapse. To mitigate this, state-of-the-art models weaken' thepowerful decoder' by applying uniformly random dropout to the decoder input.We show theoretically that this removes pointwise mutual information provided by the decoder input, which is compensated for by utilizing the latent space. We then propose an adversarial training strategy to achieve information-based stochastic dropout. Compared to uniform dropout on standard text benchmark datasets, our targeted approach increases both sequence modeling performance and the information captured in the latent space. Bowen Jing · Gabriele Corso · Jeffrey Chang · Regina Barzilay · Tommi Jaakkola Molecular conformer generation is a fundamental task in computational chemistry. Several machine learning approaches have been developed, but none have outperformed state-of-the-art cheminformatics methods. We propose torsional diffusion, a novel diffusion framework that operates on the space of torsion angles via a diffusion process on the hypertorus and an extrinsic-to-intrinsic score model. On a standard benchmark of drug-like molecules, torsional diffusion generates superior conformer ensembles compared to machine learning and cheminformatics methods in terms of both RMSD and chemical properties, and is orders of magnitude faster than previous diffusion-based models. Moreover, our model provides exact likelihoods, which we employ to build the first generalizable Boltzmann generator. Code is available at https://github.com/gcorso/torsional-diffusion. Pouya M. Ghari · Yanning Shen Multi-kernel learning (MKL) exhibits well-documented performance in online non-linear function approximation. Federated learning enables a group of learners (called clients) to train an MKL model on the data distributed among clients to perform online non-linear function approximation. There are some challenges in online federated MKL that need to be addressed: i) Communication efficiency especially when a large number of kernels are considered ii) Heterogeneous data distribution among clients. The present paper develops an algorithmic framework to enable clients to communicate with the server to send their updates with affordable communication cost while clients employ a large dictionary of kernels. Utilizing random feature (RF) approximation, the present paper proposes scalable online federated MKL algorithm. We prove that using the proposed online federated MKL algorithm, each client enjoys sub-linear regret with respect to the RF approximation of its best kernel in hindsight, which indicates that the proposed algorithm can effectively deal with heterogeneity of the data distributed among clients. Experimental results on real datasets showcase the advantages of the proposed algorithm compared with other online federated kernel learning ones. Derek Hansen · Brian Manzo · Jeffrey Regier Controlled feature selection aims to discover the features a response depends on while limiting the false discovery rate (FDR) to a predefined level. Recently, multiple deep-learning-based methods have been proposed to perform controlled feature selection through the Model-X knockoff framework. We demonstrate, however, that these methods often fail to control the FDR for two reasons. First, these methods often learn inaccurate models of features. Second, the "swap" property, which is required for knockoffs to be valid, is often not well enforced. We propose a new procedure called FlowSelect to perform controlled feature selection that does not suffer from either of these two problems. To more accurately model the features, FlowSelect uses normalizing flows, the state-of-the-art method for density estimation. Instead of enforcing the "swap" property, FlowSelect uses a novel MCMC-based procedure to calculate p-values for each feature directly. Asymptotically, FlowSelect computes valid p-values. Empirically, FlowSelect consistently controls the FDR on both synthetic and semi-synthetic benchmarks, whereas competing knockoff-based approaches do not. FlowSelect also demonstrates greater power on these benchmarks. Additionally, FlowSelect correctly infers the genetic variants associated with specific soybean traits from GWAS data. Jie Hu · Vishwaraj Doshi · Do-Young Eun We consider the stochastic gradient descent (SGD) algorithm driven by a general stochastic sequence, including i.i.d noise and random walk on an arbitrary graph, among others; and analyze it in the asymptotic sense. Specifically, we employ the notion of `efficiency ordering', a well-analyzed tool for comparing the performance of Markov Chain Monte Carlo (MCMC) samplers, for SGD algorithms in the form of Loewner ordering of covariance matrices associated with the scaled iterate errors in the long term. Using this ordering, we show that input sequences that are more efficient for MCMC sampling also lead to smaller covariance of the errors for SGD algorithms in the limit. This also suggests that an arbitrarily weighted MSE of SGD iterates in the limit becomes smaller when driven by more efficient chains. Our finding is of particular interest in applications such as decentralized optimization and swarm learning, where SGD is implemented in a random walk fashion on the underlying communication graph for cost issues and/or data privacy. We demonstrate how certain non-Markovian processes, for which typical mixing-time based non-asymptotic bounds are intractable, can outperform their Markovian counterparts in the sense of efficiency ordering for SGD. We show the utility of our method by applying it to gradient descent with shuffling and mini-batch gradient descent, reaffirming key results from existing literature under a unified framework. Empirically, we also observe efficiency ordering for variants of SGD such as accelerated SGD and Adam, open up the possibility of extending our notion of efficiency ordering to a broader family of stochastic optimization algorithms. José Vinícius de Miranda Cardoso · Jiaxi Ying · Daniel Palomar We investigate the problem of learning an undirected, weighted bipartite graph under the Gaussian Markov random field model, for which we present an optimization formulation along with an efficient algorithm based on the projected gradient descent. Motivated by practical applications, where outliers or heavy-tailed events are present, we extend the proposed learning scheme to the case in which the data follow a multivariate Student-$t$ distribution. As a result, the optimization program is no longer convex, but a verifiably convergent iterative algorithm is proposed based on the majorization-minimization framework. Finally, we propose an efficient and provably convergent algorithm for learning $k$-component bipartite graphs that leverages rank constraints of the underlying graph Laplacian matrix. The proposed estimators outperform state-of-the-art methods for bipartite graph learning, as evidenced by real-world experiments using financial time series data. Rafael Oliveira · Louis Tiao · Fabio Ramos Bayesian optimisation (BO) algorithms have shown remarkable success in applications involving expensive black-box functions. Traditionally BO has been set as a sequential decision-making process which estimates the utility of query points via an acquisition function and a prior over functions, such as a Gaussian process. Recently, however, a reformulation of BO via density-ratio estimation (BORE) allowed reinterpreting the acquisition function as a probabilistic binary classifier, removing the need for an explicit prior over functions and increasing scalability. In this paper, we present a theoretical analysis of BORE's regret and an extension of the algorithm with improved uncertainty estimates. We also show that BORE can be naturally extended to a batch optimisation setting by recasting the problem as approximate Bayesian inference. The resulting algorithms come equipped with theoretical performance guarantees and are assessed against other batch and sequential BO baselines in a series of experiments. Uri Shaham · Jonathan Svirsky · Ori Katz · Ronen Talmon Latent variable discovery is a central problem in data analysis with a broad range of applications in applied science.In this work, we consider data given as an invertible mixture of two statistically independent components, and assume that one of the components is observed while the other is hidden. Our goal is to recover the hidden component.For this purpose, we propose an autoencoder equipped with a discriminator.Unlike the standard nonlinear ICA problem, which was shown to be non-identifiable, in the special case of ICA we consider here, we show that our approach can recover the component of interest up to entropy-preserving transformation.We demonstrate the performance of the proposed approach in several tasks, including image synthesis, voice cloning, and fetal ECG extraction. Michael Poli · Winnie Xu · Stefano Massaroli · Chenlin Meng · Kuno Kim · Stefano Ermon Many patterns in nature exhibit self-similarity: they can be compactly described via self-referential transformations. Said patterns commonly appear in natural and artificial objects, such as molecules, shorelines, galaxies, and even images. In this work, we investigate the role of learning in the automated discovery of self-similarity and in its utilization for downstream tasks. To this end, we design a novel class of implicit operators, Neural Collages, which (1) represent data as the parameters of a self-referential, structured transformation, and (2) employ hypernetworks to amortize the cost of finding these parameters to a single forward pass. We detail how to leverage the representations produced by Neural Collages in various tasks, including data compression and generation. Neural Collage image compressors are orders of magnitude faster than other self-similarity-based algorithms during encoding and offer compression rates competitive with implicit methods. Finally, we showcase applications of Neural Collages for fractal art and as deep generative models. Hansheng Xue · Vaibhav Rajan · Yu Lin Understanding genetic variation, e.g., through mutations, in organisms is crucial to unravel their effects on the environment and human health. A fundamental characterization can be obtained by solving the haplotype assembly problem, which yields the variation across multiple copies of chromosomes. Variations among fast evolving viruses that lead to different strains (called quasispecies) are also deciphered with similar approaches. In both these cases, high-throughput sequencing technologies that provide oversampled mixtures of large noisy fragments (reads) of genomes, are used to infer constituent components (haplotypes or quasispecies). The problem is harder for polyploid species where there are more than two copies of chromosomes. State-of-the-art neural approaches to solve this NP-hard problem do not adequately model relations among the reads that are important for deconvolving the input signal. We address this problem by developing a new method, called NeurHap, that combines graph representation learning with combinatorial optimization. Our experiments demonstrate the substantially better performance of NeurHap in real and synthetic datasets compared to competing approaches. Sebastian Gruber · Florian Buettner With model trustworthiness being crucial for sensitive real-world applications, practitioners are putting more and more focus on improving the uncertainty calibration of deep neural networks.Calibration errors are designed to quantify the reliability of probabilistic predictions but their estimators are usually biased and inconsistent.In this work, we introduce the framework of \textit{proper calibration errors}, which relates every calibration error to a proper score and provides a respective upper bound with optimal estimation properties.This relationship can be used to reliably quantify the model calibration improvement.We theoretically and empirically demonstrate the shortcomings of commonly used estimators compared to our approach.Due to the wide applicability of proper scores, this gives a natural extension of recalibration beyond classification. Alexander Immer · Tycho van der Ouderaa · Gunnar Rätsch · Vincent Fortuin · Mark van der Wilk Data augmentation is commonly applied to improve performance of deep learning by enforcing the knowledge that certain transformations on the input preserve the output. Currently, the data augmentation parameters are chosen by human effort and costly cross-validation, which makes it cumbersome to apply to new datasets. We develop a convenient gradient-based method for selecting the data augmentation without validation data during training of a deep neural network. Our approach relies on phrasing data augmentation as an invariance in the prior distribution on the functions of a neural network, which allows us to learn it using Bayesian model selection. This has been shown to work in Gaussian processes, but not yet for deep neural networks. We propose a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective, which can be optimised without human supervision or validation data. We show that our method can successfully recover invariances present in the data, and that this improves generalisation and data efficiency on image datasets. James Gardner · Bernhard Egger · William Smith Inverse rendering is an ill-posed problem. Previous work has sought to resolve this by focussing on priors for object or scene shape or appearance. In this work, we instead focus on a prior for natural illuminations. Current methods rely on spherical harmonic lighting or other generic representations and, at best, a simplistic prior on the parameters. We propose a conditional neural field representation based on a variational auto-decoder with a SIREN network and, extending Vector Neurons, build equivariance directly into the network. Using this, we develop a rotation-equivariant, high dynamic range (HDR) neural illumination model that is compact and able to express complex, high-frequency features of natural environment maps. Training our model on a curated dataset of 1.6K HDR environment maps of natural scenes, we compare it against traditional representations, demonstrate its applicability for an inverse rendering task and show environment map completion from partial Jieyu Zhang · Haonan Wang · Cheng-Yu Hsieh · Alexander Ratner Programmatic Weak Supervision (PWS) aggregates the source votes of multiple weak supervision sources into probabilistic training labels, which are in turn used to train an end model. With its increasing popularity, it is critical to have some tool for users to understand the influence of each component (\eg, the source vote or training data) in the pipeline and interpret the end model behavior. To achieve this, we build on Influence Function (IF) and propose source-aware IF, which leverages the generation process of the probabilistic labels to decompose the end model's training objective and then calculate the influence associated with each (data, source, class) tuple. These primitive influence score can then be used to estimate the influence of individual component of PWS, such as source vote, supervision source, and training data. On datasets of diverse domains, we demonstrate multiple use cases: (1) interpreting incorrect predictions from multiple angles that reveals insights for debugging the PWS pipeline, (2) identifying mislabeling of sources with a gain of 9\%-37\% over baselines, and (3) improving the end model's generalization performance by removing harmful components in the training objective (13\%-24\% better than ordinary IF). Jing Liu · Chulin Xie · Sanmi Koyejo · Bo Li Collaborative inference leverages diverse features provided by different agents (e.g., sensors) for more accurate inference. A common setup is where each agent sends its embedded features instead of the raw data to the Fusion Center (FC) for joint prediction. In this setting, we consider the inference-time attacks when a small fraction of agents are compromised. The compromised agent either does not send embedded features to the FC, or sends arbitrarily embedded features. To address this, we propose a certifiably robust COllaborative inference framework via feature PURification (CoPur), by leveraging the block-sparse nature of adversarial perturbations on the feature vector, as well as exploring the underlying redundancy across the embedded features (by assuming the overall features lie on an underlying lower dimensional manifold). We theoretically show that the proposed feature purification method can robustly recover the true feature vector, despite adversarial corruptions and /or incomplete observations. We also propose and test an untargeted distributed feature-flipping attack, which is agnostic to the model, training data, label, as well as the features held by other agents, and is shown to be effective in attacking state-of-the-art defenses. Experiments on ExtraSensory and NUS-WIDE datasets show that CoPur significantly outperforms existing defenses in terms of robustness against targeted and untargeted adversarial attacks. Xuefei Ning · Zixuan Zhou · Junbo Zhao · Tianchen Zhao · Yiping Deng · Changcheng Tang · Shuang Liang · Huazhong Yang · Yu Wang Neural architecture search tries to shift the manual design of neural network (NN) architectures to algorithmic design. In these cases, the NN architecture itself can be viewed as data and needs to be modeled. A better modeling could help explore novel architectures automatically and open the black box of automated architecture design. To this end, this work proposes a new encoding scheme for neural architectures, the Training-Analogous Graph-based ArchiTecture Encoding Scheme (TA-GATES). TA-GATES encodes an NN architecture in a way that is analogous to its training. Extensive experiments demonstrate that the flexibility and discriminative power of TA-GATES lead to better modeling of NN architectures. We expect our methodology of explicitly modeling the NN training process to benefit broader automated deep learning systems. The code is available at https://github.com/walkerning/aw_nas. Yao Lai · Yao Mu · Ping Luo Placement is an essential task in modern chip design, aiming at placing millions of circuit modules on a 2D chip canvas. Unlike the human-centric solution, which requires months of intense effort by hardware engineers to produce a layout to minimize delay and energy consumption, deep reinforcement learning has become an emerging autonomous tool. However, the learning-centric method is still in its early stage, impeded by a massive design space of size ten to the order of a few thousand. This work presents MaskPlace to automatically generate a valid chip layout design within a few hours, whose performance can be superior or comparable to recent advanced approaches. It has several appealing benefits that prior arts do not have. Firstly, MaskPlace recasts placement as a problem of learning pixel-level visual representation to comprehensively describe millions of modules on a chip, enabling placement in a high-resolution canvas and a large action space. It outperforms recent methods that represent a chip as a hypergraph. Secondly, it enables training the policy network by an intuitive reward function with dense reward, rather than a complicated reward function with sparse reward from previous methods. Thirdly, extensive experiments on many public benchmarks show that MaskPlace outperforms existing RL approaches in all key performance metrics, including wirelength, congestion, and density. For example, it achieves 60%-90% wirelength reduction and guarantees zero overlaps. We believe MaskPlace can improve AI-assisted chip layout design. The deliverables are released at https://laiyao1.github.io/maskplace. Yixing Xu · Xinghao Chen · Yunhe Wang This paper studies the problem of designing compact binary architectures for vision multi-layer perceptrons (MLPs). We provide extensive analysis on the difficulty of binarizing vision MLPs and find that previous binarization methods perform poorly due to limited capacity of binary MLPs. In contrast with the traditional CNNs that utilizing convolutional operations with large kernel size, fully-connected (FC) layers in MLPs can be treated as convolutional layers with kernel size $1\times1$. Thus, the representation ability of the FC layers will be limited when being binarized, and places restrictions on the capability of spatial mixing and channel mixing on the intermediate features. To this end, we propose to improve the performance of binary MLP (BiMLP) model by enriching the representation ability of binary FC layers. We design a novel binary block that contains multiple branches to merge a series of outputs from the same stage, and also a universal shortcut connection that encourages the information flow from the previous stage. The downsampling layers are also carefully designed to reduce the computational complexity while maintaining the classification performance. Experimental results on benchmark dataset ImageNet-1k demonstrate the effectiveness of the proposed BiMLP models, which achieve state-of-the-art accuracy compared to prior binary CNNs.The MindSpore code is available at \url{https://gitee.com/mindspore/models/tree/master/research/cv/BiMLP}. Athresh Karanam · Krishnateja Killamsetty · Harsha Kokel · Rishabh Iyer Real-world machine-learning applications require robust models that generalize well to distribution shift settings, which is typical in real-world situations. Domain adaptation techniques aim to address this issue of distribution shift by minimizing the disparities between domains to ensure that the model trained on the source domain performs well on the target domain. Nevertheless, the existing domain adaptation methods are computationally very expensive. In this work, we aim to improve the efficiency of existing supervised domain adaptation (SDA) methods by using a subset of source data that is similar to target data for faster model training. Specifically, we propose ORIENT, a subset selection framework that uses the submodular mutual information (SMI) functions to select a source data subset similar to the target data for faster training. Additionally, we demonstrate how existing robust subset selection strategies, such as GLISTER, GRADMATCH, and CRAIG, when used with a held-out query set, fit within our proposed framework and demonstrate the connections with them. Finally, we empirically demonstrate that SDA approaches like d-SNE, CCSA, and standard Cross-entropy training, when employed together with ORIENT, achieve a) faster training and b) better performance on the target data. Ivan Marisca · Andrea Cini · Cesare Alippi Modeling multivariate time series as temporal signals over a (possibly dynamic) graph is an effective representational framework that allows for developing models for time series analysis. In fact, discrete sequences of graphs can be processed by autoregressive graph neural networks to recursively learn representations at each discrete point in time and space. Spatiotemporal graphs are often highly sparse, with time series characterized by multiple, concurrent, and long sequences of missing data, e.g., due to the unreliable underlying sensor network. In this context, autoregressive models can be brittle and exhibit unstable learning dynamics. The objective of this paper is, then, to tackle the problem of learning effective models to reconstruct, i.e., impute, missing data points by conditioning the reconstruction only on the available observations. In particular, we propose a novel class of attention-based architectures that, given a set of highly sparse discrete observations, learn a representation for points in time and space by exploiting a spatiotemporal propagation architecture aligned with the imputation task. Representations are trained end-to-end to reconstruct observations w.r.t. the corresponding sensor and its neighboring nodes. Compared to the state of the art, our model handles sparse data without propagating prediction errors or requiring a bidirectional model to encode forward and backward time dependencies. Empirical results on representative benchmarks show the effectiveness of the proposed method. Donghyeon Baek · Youngmin Oh · Sanghoon Lee · Junghyup Lee · Bumsub Ham Class-incremental semantic segmentation (CISS) labels each pixel of an image with a corresponding object/stuff class continually. To this end, it is crucial to learn novel classes incrementally without forgetting previously learned knowledge. Current CISS methods typically use a knowledge distillation (KD) technique for preserving classifier logits, or freeze a feature extractor, to avoid the forgetting problem. The strong constraints, however, prevent learning discriminative features for novel classes. We introduce a CISS framework that alleviates the forgetting problem and facilitates learning novel classes effectively. We have found that a logit can be decomposed into two terms. They quantify how likely an input belongs to a particular class or not, providing a clue for a reasoning process of a model. The KD technique, in this context, preserves the sum of two terms ($\textit{i.e.}$, a class logit), suggesting that each could be changed and thus the KD does not imitate the reasoning process. To impose constraints on each term explicitly, we propose a new decomposed knowledge distillation (DKD) technique, improving the rigidity of a model and addressing the forgetting problem more effectively. We also introduce a novel initialization method to train new classifiers for novel classes. In CISS, the number of negative training samples for novel classes is not sufficient to discriminate old classes. To mitigate this, we propose to transfer knowledge of negatives to the classifiers successively using an auxiliary classifier, boosting the performance significantly. Experimental results on standard CISS benchmarks demonstrate the effectiveness of our framework. Seungyong Moon · JunYeong Lee · Hyun Oh Song Our work focuses on training RL agents on multiple visually diverse environments to improve observational generalization performance. In prior methods, policy and value networks are separately optimized using a disjoint network architecture to avoid interference and obtain a more accurate value function. We identify that a value network in the multi-environment setting is more challenging to optimize and prone to memorizing the training data than in the conventional single-environment setting. In addition, we find that appropriate regularization on the value network is necessary to improve both training and test performance. To this end, we propose Delayed-Critic Policy Gradient (DCPG), a policy gradient algorithm that implicitly penalizes value estimates by optimizing the value network less frequently with more training data than the policy network. This can be implemented using a single unified network architecture. Furthermore, we introduce a simple self-supervised task that learns the forward and inverse dynamics of environments using a single discriminator, which can be jointly optimized with the value network. Our proposed algorithms significantly improve observational generalization performance and sample efficiency on the Procgen Benchmark. Fotis Iliopoulos · Vasilis Kontonis · Cenk Baykal · Gaurav Menghani · Khoa Trinh · Erik Vee Distillation with unlabeled examples is a popular and powerful method for training deep neural networks in settings where the amount of labeled data is limited: A large “teacher” neural network is trained on the labeled data available, and then it is used to generate labels on an unlabeled dataset (typically much larger in size). These labels are then utilized to train the smaller “student” model which will actually be deployed. Naturally, the success of the approach depends on the quality of the teacher’s labels, since the student could be confused if trained on inaccurate data. This paper proposes a principled approach for addressing this issue based on a “debiasing" reweighting of the student’s loss function tailored to the distillation training paradigm. Our method is hyper-parameter free, data-agnostic, and simple to implement. We demonstrate significant improvements on popular academic datasets and we accompany our results with a theoretical analysis which rigorously justifies the performance of our method in certain settings. Cenk Baykal · Nishanth Dikkala · Rina Panigrahy · Cyrus Rashtchian · Xin Wang Deep and wide neural networks successfully fit very complex functions today, but dense models are starting to be prohibitively expensive for inference. To mitigate this, one promising research direction is networks that activate a sparse subgraph of the network. The subgraph is chosen by a data-dependent routing function, enforcing a fixed mapping of inputs to subnetworks (e.g., the Mixture of Experts (MoE) paradigm in Switch Transformers). However, there is no theoretical grounding for these sparsely activated models. As our first contribution, we present a formal model of data-dependent sparse networks that captures salient aspects of popular architectures. Then, we show how to construct sparse networks that provably match the approximation power and total size of dense networks on Lipschitz functions. The sparse networks use much fewer inference operations than dense networks, leading to a faster forward pass. The key idea is to use locality sensitive hashing on the input vectors and then interpolate the function in subregions of the input space. This offers a theoretical insight into why sparse networks work well in practice. Finally, we present empirical findings that support our theory; compared to dense networks, sparse networks give a favorable trade-off between number of active units and approximation quality. Benjamin Kompa · David Bellamy · Tom Kolokotrones · james m robins · Andrew Beam The No Unmeasured Confounding Assumption is widely used to identify causal effects in observational studies. Recent work on proximal inference has provided alternative identification results that succeed even in the presence of unobserved confounders, provided that one has measured a sufficiently rich set of proxy variables, satisfying specific structural conditions. However, proximal inference requires solving an ill-posed integral equation. Previous approaches have used a variety of machine learning techniques to estimate a solution to this integral equation, commonly referred to as the bridge function. However, prior work has often been limited by relying on pre-specified kernel functions, which are not data adaptive and struggle to scale to large datasets. In this work, we introduce a flexible and scalable method based on a deep neural network to estimate causal effects in the presence of unmeasured confounding using proximal inference. Our method achieves state of the art performance on two well-established proximal inference benchmarks. Finally, we provide theoretical consistency guarantees for our method. Yunwen Lei · Rong Jin · Yiming Ying While significant theoretical progress has been achieved, unveiling the generalization mystery of overparameterized neural networks still remains largely elusive. In this paper, we study the generalization behavior of shallow neural networks (SNNs) by leveraging the concept of algorithmic stability. We consider gradient descent (GD) and stochastic gradient descent (SGD) to train SNNs, for both of which we develop consistent excess risk bounds by balancing the optimization and generalization via early-stopping. As compared to existing analysis on GD, our new analysis requires a relaxed overparameterization assumption and also applies to SGD. The key for the improvement is a better estimation of the smallest eigenvalues of the Hessian matrices of the empirical risks and the loss function along the trajectories of GD and SGD by providing a refined estimation of their iterates. Zi Wang · Gautam Prakriya · Somesh Jha Fast and precise Lipschitz constant estimation of neural networks is an important task for deep learning. Researchers have recently found an intrinsic trade-off between the accuracy and smoothness of neural networks, so training a network with a loose Lipschitz constant estimation imposes a strong regularization, and can hurt the model accuracy significantly. In this work, we provide a unified theoretical framework, a quantitative geometric approach, to address the Lipschitz constant estimation. By adopting this framework, we can immediately obtain several theoretical results, including the computational hardness of Lipschitz constant estimation and its approximability. We implement the algorithms induced from this quantitative geometric approach, which are based on semidefinite programming (SDP). Our empirical evaluation demonstrates that they are more scalable and precise than existing tools on Lipschitz constant estimation for $\ell_\infty$-perturbations. Furthermore, we also show their intricate relations with other recent SDP-based techniques, both theoretically and empirically. We believe that this unified quantitative geometric perspective can bring new insights and theoretical tools to the investigation of neural-network smoothness and robustness. Romain Lopez · Jan-Christian Huetter · Jonathan Pritchard · Aviv Regev A common theme in causal inference is learning causal relationships between observed variables, also known as causal discovery. This is usually a daunting task, given the large number of candidate causal graphs and the combinatorial nature of the search space. Perhaps for this reason, most research has so far focused on relatively small causal graphs, with up to hundreds of nodes. However, recent advances in fields like biology enable generating experimental data sets with thousands of interventions followed by rich profiling of thousands of variables, raising the opportunity and urgent need for large causal graph models. Here, we introduce the notion of factor directed acyclic graphs ($f$-DAGs) as a way to restrict the search space to non-linear low-rank causal interaction models. Combining this novel structural assumption with recent advances that bridge the gap between causal discovery and continuous optimization, we achieve causal discovery on thousands of variables. Additionally, as a model for the impact of statistical noise on this estimation procedure, we study a model of edge perturbations of the $f$-DAG skeleton based on random graphs and quantify the effect of such perturbations on the $f$-DAG rank. This theoretical analysis suggests that the set of candidate $f$-DAGs is much smaller than the whole DAG space and thus may be more suitable as a search space in the high-dimensional regime where the underlying skeleton is hard to assess. We propose Differentiable Causal Discovery of Factor Graphs (DCD-FG), a scalable implementation of $f$-DAG constrained causal discovery for high-dimensional interventional data. DCD-FG uses a Gaussian non-linear low-rank structural equation model and shows significant improvements compared to state-of-the-art methods in both simulations as well as a recent large-scale single-cell RNA sequencing data set with hundreds of genetic interventions. Brandon Cui · Hengyuan Hu · Andrei Lupu · Samuel Sokota · Jakob Foerster Zero-shot coordination (ZSC) evaluates an algorithm by the performance of a team of agents that were trained independently under that algorithm. Off-belief learning (OBL) is a recent method that achieves state-of-the-art results in ZSC in the game Hanabi. However, the implementation of OBL relies on a belief model that experiences covariate shift. Moreover, during ad-hoc coordination, OBL or any other neural policy may experience test-time covariate shift. We present two methods addressing these issues. The first method, off-team belief learning (OTBL), attempts to improve the accuracy of the belief model of a target policy πT on a broader range of inputs by weighting trajectories approximately according to the distribution induced by a different policy πb. The second, off-team off-belief learning (OT-OBL), attempts to compute an OBL equilibrium, where fixed point error is weighted according to the distribution induced by cross-play between the training policy π and a different fixed policy πb instead of self-play of π. We investigate these methods in variants of Hanabi. Filip Radenovic · Abhimanyu Dubey · Dhruv Mahajan Due to the widespread use of complex machine learning models in real-world applications, it is becoming critical to explain model predictions. However, these models are typically black-box deep neural networks, explained post-hoc via methods with known faithfulness limitations. Generalized Additive Models (GAMs) are an inherently interpretable class of models that address this limitation by learning a non-linear shape function for each feature separately, followed by a linear model on top. However, these models are typically difficult to train, require numerous parameters, and are difficult to scale. We propose an entirely new subfamily of GAMs that utilizes basis decomposition of shape functions. A small number of basis functions are shared among all features, and are learned jointly for a given task, thus making our model scale much better to large-scale data with high-dimensional features, especially when features are sparse. We propose an architecture denoted as the Neural Basis Model (NBM) which uses a single neural network to learn these bases. On a variety of tabular and image datasets, we demonstrate that for interpretable machine learning, NBMs are the state-of-the-art in accuracy, model size, and, throughput and can easily model all higher-order feature interactions. Source code is available at \href{https://github.com/facebookresearch/nbm-spam}{\ ttfamily github.com/facebookresearch/nbm-spam}. Allison Tam · Neil Rabinowitz · Andrew Lampinen · Nicholas Roy · Stephanie Chan · DJ Strouse · Jane Wang · Andrea Banino · Felix Hill Effective exploration is a challenge in reinforcement learning (RL). Novelty-based exploration methods can suffer in high-dimensional state spaces, such as continuous partially-observable 3D environments. We address this challenge by defining novelty using semantically meaningful state abstractions, which can be found in learned representations shaped by natural language. In particular, we evaluate vision-language representations, pretrained on natural image captioning datasets. We show that these pretrained representations drive meaningful, task-relevant exploration and improve performance on 3D simulated environments. We also characterize why and how language provides useful abstractions for exploration by considering the impacts of using representations from a pretrained model, a language oracle, and several ablations. We demonstrate the benefits of our approach with on- and off-policy RL algorithms and in two very different task domains---one that stresses the identification and manipulation of everyday objects, and one that requires navigational exploration in an expansive world. Our results suggest that using language-shaped representations could improve exploration for various algorithms and agents in challenging environments. Tianyu Cui · Yogesh Kumar · Pekka Marttinen · Samuel Kaski Similarity metrics such as representational similarity analysis (RSA) and centered kernel alignment (CKA) have been used to understand neural networks by comparing their layer-wise representations. However, these metrics are confounded by the population structure of data items in the input space, leading to inconsistent conclusions about the \emph{functional} similarity between neural networks, such as spuriously high similarity of completely random neural networks and inconsistent domain relations in transfer learning. We introduce a simple and generally applicable fix to adjust for the confounder with covariate adjustment regression, which improves the ability of CKA and RSA to reveal functional similarity and also retains the intuitive invariance properties of the original similarity measures. We show that deconfounding the similarity metrics increases the resolution of detecting functionally similar neural networks across domains. Moreover, in real-world applications, deconfounding improves the consistency between CKA and domain similarity in transfer learning, and increases the correlation between CKA and model out-of-distribution accuracy similarity. Juhan Bae · Nathan Ng · Alston Lo · Marzyeh Ghassemi · Roger Grosse Influence functions efficiently estimate the effect of removing a single training data point on a model's learned parameters. While influence estimates align well with leave-one-out retraining for linear models, recent works have shown this alignment is often poor in neural networks. In this work, we investigate the specific factors that cause this discrepancy by decomposing it into five separate terms. We study the contributions of each term on a variety of architectures and datasets and how they vary with factors such as network width and training time. While practical influence function estimates may be a poor match to leave-one-out retraining for nonlinear networks, we show that they are often a good approximation to a different object we term the proximal Bregman response function (PBRF). Since the PBRF can still be used to answer many of the questions motivating influence functions, such as identifying influential or mislabeled examples, our results suggest that current algorithms for influence function estimation give more informative results than previous error analyses would suggest. The convergence of GD and SGD when training mildly parameterized neural networks starting from random initialization is studied. For a broad range of models and loss functions, including the widely used square loss and cross entropy loss, we prove an ''early stage convergence'' result. We show that the loss is decreased by a significant amount in the early stage of the training, and this decreasing is fast. Furthurmore, for exponential type loss functions, and under some assumptions on the training data, we show global convergence of GD. Instead of relying on extreme over-parameterization, our study is based on a microscopic analysis of the activation patterns for the neurons, which helps us derive gradient lower bounds. The results on activation patterns, which we call ``neuron partition'', help build intuitions for understanding the behavior of neural networks' training dynamics, and may be of independent interest. Yining Chen · Elan Rosenfeld · Mark Sellke · Tengyu Ma · Andrej Risteski Domain generalization aims at performing well on unseen test environments with data from a limited number of training environments. Despite a proliferation of proposed algorithms for this task, assessing their performance both theoretically and empirically is still very challenging. Distributional matching algorithms such as (Conditional) Domain Adversarial Networks [Ganin et al., 2016, Long et al., 2018] are popular and enjoy empirical success, but they lack formal guarantees. Other approaches such as Invariant Risk Minimization (IRM) require a prohibitively large number of training environments---linear in the dimension of the spurious feature space $d_s$---even on simple data models like the one proposed by [Rosenfeld et al., 2021]. Under a variant of this model, we show that ERM and IRM can fail to find the optimal invariant predictor with $o(d_s)$ environments. We then present an iterative feature matching algorithm that is guaranteed with high probability to find the optimal invariant predictor after seeing only $O(\log d_s)$ environments. Our results provide the first theoretical justification for distribution-matching algorithms widely used in practice under a concrete nontrivial data model. Soon Hoe Lim · Yijun Wan · Umut Simsekli Recent studies have shown that gradient descent (GD) can achieve improved generalization when its dynamics exhibits a chaotic behavior. However, to obtain the desired effect, the step-size should be chosen sufficiently large, a task which is problem dependent and can be difficult in practice. In this study, we incorporate a chaotic component to GD in a controlled manner, and introduce \emph {multiscale perturbed GD} (MPGD), a novel optimization framework where the GD recursion is augmented with chaotic perturbations that evolve via an independent dynamical system. We analyze MPGD from three different angles: (i) By building up on recent advances in rough paths theory, we show that, under appropriate assumptions, as the step-size decreases, the MPGD recursion converges weakly to a stochastic differential equation (SDE) driven by a heavy-tailed L\'{e}vy-stable process. (ii) By making connections to recently developed generalization bounds for heavy-tailed processes, we derive a generalization bound for the limiting SDE and relate the worst-case generalization error over the trajectories of the process to the parameters of MPGD. (iii) We analyze the implicit regularization effect brought by the dynamical regularization and show that, in the weak perturbation regime, MPGD introduces terms that penalize the Hessian of the loss function. Empirical results are provided to demonstrate the advantages of MPGD. Mirco Mutti · Riccardo De Santi · Piersilvio De Bartolomeis · Marcello Restelli The classic Reinforcement Learning (RL) formulation concerns the maximization of a scalar reward function. More recently, convex RL has been introduced to extend the RL formulation to all the objectives that are convex functions of the state distribution induced by a policy. Notably, convex RL covers several relevant applications that do not fall into the scalar formulation, including imitation learning, risk-averse RL, and pure exploration. In classic RL, it is common to optimize an infinite trials objective, which accounts for the state distribution instead of the empirical state visitation frequencies, even though the actual number of trajectories is always finite in practice. This is theoretically sound since the infinite trials and finite trials objectives are equivalent and thus lead to the same optimal policy. In this paper, we show that this hidden assumption does not hold in convex RL. In particular, we prove that erroneously optimizing the infinite trials objective in place of the actual finite trials one, as it is usually done, can lead to a significant approximation error. Since the finite trials setting is the default in both simulated and real-world RL, we believe shedding light on this issue will lead to better approaches and methodologies for convex RL, impacting relevant research areas such as imitation learning, risk-averse RL, and pure exploration among others. Takumi Tanabe · Rei Sato · Kazuto Fukuchi · Jun Sakuma · Youhei Akimoto In the field of reinforcement learning, because of the high cost and risk of policy training in the real world, policies are trained in a simulation environment and transferred to the corresponding real-world environment.However, the simulation environment does not perfectly mimic the real-world environment, lead to model misspecification. Multiple studies report significant deterioration of policy performance in a real-world environment.In this study, we focus on scenarios involving a simulation environment with uncertainty parameters and the set of their possible values, called the uncertainty parameter set. The aim is to optimize the worst-case performance on the uncertainty parameter set to guarantee the performance in the corresponding real-world environment.To obtain a policy for the optimization, we propose an off-policy actor-critic approach called the Max-Min Twin Delayed Deep Deterministic Policy Gradient algorithm (M2TD3), which solves a max-min optimization problem using a simultaneous gradient ascent descent approach.Experiments in multi-joint dynamics with contact (MuJoCo) environments show that the proposed method exhibited a worst-case performance superior to several baseline approaches. Benjamin K Miller · Christoph Weniger · Patrick Forré Likelihood-to-evidence ratio estimation is usually cast as either a binary (NRE-A) or a multiclass (NRE-B) classification task. In contrast to the binary classification framework, the current formulation of the multiclass version has an intrinsic and unknown bias term, making otherwise informative diagnostics unreliable. We propose a multiclass framework free from the bias inherent to NRE-B at optimum, leaving us in the position to run diagnostics that practitioners depend on. It also recovers NRE-A in one corner case and NRE-B in the limiting case. For fair comparison, we benchmark the behavior of all algorithms in both familiar and novel training regimes: when jointly drawn data is unlimited, when data is fixed but prior draws are unlimited, and in the commonplace fixed data and parameters setting. Our investigations reveal that the highest performing models are distant from the competitors (NRE-A, NRE-B) in hyperparameter space. We make a recommendation for hyperparameters distinct from the previous models. We suggest a bound on the mutual information as a performance metric for simulation-based inference methods, without the need for posterior samples, and provide experimental results. Bo Zhao · Nima Dehmamy · Robin Walters · Rose Yu Existing gradient-based optimization methods update parameters locally, in a direction that minimizes the loss function. We study a different approach, symmetry teleportation, that allows parameters to travel a large distance on the loss level set, in order to improve the convergence speed in subsequent steps. Teleportation exploits symmetries in the loss landscape of optimization problems. We derive loss-invariant group actions for test functions in optimization and multi-layer neural networks, and prove a necessary condition for teleportation to improve convergence rate. We also show that our algorithm is closely related to second order methods. Experimentally, we show that teleportation improves the convergence speed of gradient descent and AdaGrad for several optimization problems including test functions, multi-layer regressions, and MNIST classification. Selena Zihan Ling · Nicholas Sharp · Alec Jacobson The Adam optimization algorithm has proven remarkably effective for optimization problems across machine learning and even traditional tasks in geometry processing. At the same time, the development of equivariant methods, which preserve their output under the action of rotation or some other transformation, has proven to be important for geometry problems across these domains. In this work, we observe that Adam — when treated as a function that maps initial conditions to optimized results — is not rotation equivariant for vector-valued parameters due to per-coordinate moment updates. This leads to significant artifacts and biases in practice. We propose to resolve this deficiency with VectorAdam, a simple modification which makes Adam rotation-equivariant by accounting for the vector structure of optimization variables. We demonstrate this approach on problems in machine learning and traditional geometric optimization, showing that equivariant VectorAdam resolves the artifacts and biases of traditional Adam when applied to vector-valued data, with equivalent or even improved rates of convergence. Cristopher Salvi · Maud Lemercier · Andris Gerasimovics Stochastic partial differential equations (SPDEs) are the mathematical tool of choice for modelling spatiotemporal PDE-dynamics under the influence of randomness. Based on the notion of mild solution of an SPDE, we introduce a novel neural architecture to learn solution operators of PDEs with (possibly stochastic) forcing from partially observed data. The proposed Neural SPDE model provides an extension to two popular classes of physics-inspired architectures. On the one hand, it extends Neural CDEs and variants -- continuous-time analogues of RNNs -- in that it is capable of processing incoming sequential information arriving at arbitrary spatial resolutions. On the other hand, it extends Neural Operators -- generalizations of neural networks to model mappings between spaces of functions -- in that it can parameterize solution operators of SPDEs depending simultaneously on the initial condition and a realization of the driving noise. By performing operations in the spectral domain, we show how a Neural SPDE can be evaluated in two ways, either by calling an ODE solver (emulating a spectral Galerkin scheme), or by solving a fixed point problem. Experiments on various semilinear SPDEs, including the stochastic Navier-Stokes equations, demonstrate how the Neural SPDE model is capable of learning complex spatiotemporal dynamics in a resolution-invariant way, with better accuracy and lighter training data requirements compared to alternative models, and up to 3 orders of magnitude faster than traditional solvers. Siavash Golkar · Tiberiu Tesileanu · Yanis Bahroun · Anirvan Sengupta · Dmitri Chklovskii Predictive coding (PC) has emerged as an influential normative model of neural computation with numerous extensions and applications. As such, much effort has been put into mapping PC faithfully onto the cortex, but there are issues that remain unresolved or controversial. In particular, current implementations often involve separate value and error neurons and require symmetric forward and backward weights across different brain regions. These features have not been experimentally confirmed. In this work, we show that the PC framework in the linear regime can be modified to map faithfully onto the cortical hierarchy in a manner compatible with empirical observations. By employing a disentangling-inspired constraint on hidden-layer neural activities, we derive an upper bound for the PC objective. Optimization of this upper bound leads to an algorithm that shows the same performance as the original objective and maps onto a biologically plausible network. The units of this network can be interpreted as multi-compartmental neurons with non-Hebbian learning rules, with a remarkable resemblance to recent experimental findings. There exist prior models which also capture these features, but they are phenomenological, while our work is a normative derivation. This allows us to determine which features are necessary for the functioning of the model. For instance, the network we derive does not involve one-to-one connectivity or signal multiplexing, which the phenomenological models require, indicating that these features are not necessary for learning in the cortex. The normative nature of our algorithm in the simplified linear case also allows us to prove interesting properties of the framework and analytically understand the computational role of our network's components. The parameters of our network have natural interpretations as physiological quantities in a multi-compartmental model of pyramidal neurons, providing a concrete link between PC and experimental measurements carried out in the cortex. Jeevana Priya Inala · Chenglong Wang · Mei Yang · Andres Codas · Mark Encarnación · Shuvendu Lahiri · Madanlal Musuvathi · Jianfeng Gao Large language models (LLMs) have demonstrated an impressive ability to generate code for various programming tasks. In many instances, LLMs can generate a correct program for a task when given numerous trials. Consequently, a recent trend is to do large scale sampling of programs using a model and then filtering/ranking the programs based on the program execution on a small number of known unit tests to select one candidate solution. However, these approaches assume that the unit tests are given and assume the ability to safely execute the generated programs (which can do arbitrary dangerous operations such as file manipulations). Both of the above assumptions are impractical in real-world software development. In this paper, we propose CodeRanker, a neural ranker that can predict the correctness of a sampled program without executing it. Our CodeRanker is fault-aware i.e., it is trained to predict different kinds of execution information such as predicting the exact compile/runtime error type (e.g., an IndexError or a TypeError). We show that CodeRanker can significantly increase the pass@1 accuracy of various code generation models (including Codex, GPT-Neo, GPT-J) on APPS, HumanEval and MBPP datasets. Pierre-alexandre Kamienny · Stéphane d'Ascoli · Guillaume Lample · Francois Charton Symbolic regression, the task of predicting the mathematical expression of a function from the observation of its values, is a difficult task which usually involves a two-step procedure: predicting the "skeleton" of the expression up to the choice of numerical constants, then fitting the constants by optimizing a non-convex loss function. The dominant approach is genetic programming, which evolves candidates by iterating this subroutine a large number of times. Neural networks have recently been tasked to predict the correct skeleton in a single try, but remain much less powerful.In this paper, we challenge this two-step procedure, and task a Transformer to directly predict the full mathematical expression, constants included. One can subsequently refine the predicted constants by feeding them to the non-convex optimizer as an informed initialization. We present ablations to show that this end-to-end approach yields better results, sometimes even without the refinement step. We evaluate our model on problems from the SRBench benchmark and show that our model approaches the performance of state-of-the-art genetic programming with several orders of magnitude faster Bingbin Liu · Daniel Hsu · Pradeep Ravikumar · Andrej Risteski The vast majority of work in self-supervised learning have focused on assessing recovered features by a chosen set of downstream tasks. While there are several commonly used benchmark datasets, this lens of feature learning requires assumptions on the downstream tasks which are not inherent to the data distribution itself. In this paper, we present an alternative lens, one of parameter identifiability: assuming data comes from a parametric probabilistic model, we train a self-supervised learning predictor with a suitable parametric form, and ask whether the parameters of the optimal predictor can be used to extract the parameters of the ground truth generative model.Specifically, we focus on latent-variable models capturing sequential structures, namely Hidden Markov Models with both discrete and conditionally Gaussian observations. We focus on masked prediction as the self-supervised learning task and study the optimal masked predictor. We show that parameter identifiability is governed by the task difficulty, which is determined by the choice of data model and the amount of tokens to predict. Technique-wise, we uncover close connections with the uniqueness of tensor rank decompositions, a widely used tool in studying identifiability through the lens of the method of moments. Yeshu Li · Danyal Saeed · Xinhua Zhang · Brian Ziebart · Kevin Gimpel Structured prediction of tree-shaped objects is heavily studied under the name of syntactic dependency parsing. Current practice based on maximum likelihood or margin is either agnostic to or inconsistent with the evaluation loss. Risk minimization alleviates the discrepancy between training and test objectives but typically induces a non-convex problem. These approaches adopt explicit regularization to combat overfitting without probabilistic interpretation. We propose a moment-based distributionally robust optimization approach for tree structured prediction, where the worst-case expected loss over a set of distributions within bounded moment divergence from the empirical distribution is minimized. We develop efficient algorithms for arborescences and other variants of trees. We derive Fisher consistency, convergence rates and generalization bounds for our proposed method. We evaluate its empirical effectiveness on dependency parsing benchmarks. Yuting Ng · Ali Hasan · Vahid Tarokh Understanding multivariate dependencies in both the bulk and the tails of a distribution is an important problem for many applications, such as ensuring algorithms are robust to observations that are infrequent but have devastating effects. Archimax copulas are a family of distributions endowed with a precise representation that allows simultaneous modeling of the bulk and the tails of a distribution. Rather than separating the two as is typically done in practice, incorporating additional information from the bulk may improve inference of the tails, where observations are limited. Building on the stochastic representation of Archimax copulas, we develop a non-parametric inference method and sampling algorithm. Our proposed methods, to the best of our knowledge, are the first that allow for highly flexible and scalable inference and sampling algorithms, enabling the increased use of Archimax copulas in practical settings. We experimentally compare to state-of-the-art density modeling techniques, and the results suggest that the proposed method effectively extrapolates to the tails while scaling to higher dimensional data. Our findings suggest that the proposed algorithms can be used in a variety of applications where understanding the interplay between the bulk and the tails of a distribution is necessary, such as healthcare and safety. Renate Krause · Matthew Cook · Sepp Kollmorgen · Valerio Mante · Giacomo Indiveri Recurrent Neural Networks (RNN) are commonly used models to study neural computation. However, a comprehensive understanding of how dynamics in RNN emerge from the underlying connectivity is largely lacking. Previous work derived such an understanding for RNN fulfilling very specific constraints on their connectivity, but it is unclear whether the resulting insights apply more generally. Here we study how network dynamics are related to network connectivity in RNN trained without any specific constraints on several tasks previously employed in neuroscience. Despite the apparent high-dimensional connectivity of these RNN, we show that a low-dimensional, functionally relevant subspace of the weight matrix can be found through the identification of \textit{operative} dimensions, which we define as components of the connectivity whose removal has a large influence on local RNN dynamics. We find that a weight matrix built from only a few operative dimensions is sufficient for the RNN to operate with the original performance, implying that much of the high-dimensional structure of the trained connectivity is functionally irrelevant. The existence of a low-dimensional, operative subspace in the weight matrix simplifies the challenge of linking connectivity to network dynamics and suggests that independent network functions may be placed in specific, separate subspaces of the weight matrix to avoid catastrophic forgetting in continual learning. Hossein Souri · Liam Fowl · Rama Chellappa · Micah Goldblum · Tom Goldstein As the curation of data for machine learning becomes increasingly automated, dataset tampering is a mounting threat. Backdoor attackers tamper with training data to embed a vulnerability in models that are trained on that data. This vulnerability is then activated at inference time by placing a "trigger'' into the model's input. Typical backdoor attacks insert the trigger directly into the training data, although the presence of such an attack may be visible upon inspection. In contrast, the Hidden Trigger Backdoor Attack achieves poisoning without placing a trigger into the training data at all. However, this hidden trigger attack is ineffective at poisoning neural networks trained from scratch. We develop a new hidden trigger attack, Sleeper Agent, which employs gradient matching, data selection, and target model re-training during the crafting process. Sleeper Agent is the first hidden trigger backdoor attack to be effective against neural networks trained from scratch. We demonstrate its effectiveness on ImageNet and in black-box settings. Our implementation code can be found at: https://github.com/hsouri/Sleeper-Agent. Jason Wei · Xuezhi Wang · Dale Schuurmans · Maarten Bosma · brian ichter · Fei Xia · Ed Chi · Quoc V Le · Denny Zhou We explore how generating a chain of thought---a series of intermediate reasoning steps---significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain of thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain of thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a 540B-parameter language model with just eight chain of thought exemplars achieves state of the art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier. William Harvey · Saeid Naderiparizi · Vaden Masrani · Christian Weilbach · Frank Wood We present a framework for video modeling based on denoising diffusion probabilistic models that produces long-duration video completions in a variety of realistic environments. We introduce a generative model that can at test-time sample any arbitrary subset of video frames conditioned on any other subset and present an architecture adapted for this purpose. Doing so allows us to efficiently compare and optimize a variety of schedules for the order in which frames in a long video are sampled and use selective sparse and long-range conditioning on previously sampled frames. We demonstrate improved video modeling over prior work on a number of datasets and sample temporally coherent videos over 25 minutes in length. We additionally release a new video modeling dataset and semantically meaningful metrics based on videos generated in the CARLA autonomous driving simulator. Reda CHHAIBI · Tariq Daouda · Ezechiel Kahn Gradient descent during the learning process of a neural network can be subject to many instabilities. The spectral density of the Jacobian is a key component for analyzing stability. Following the works of Pennington et al., such Jacobians are modeled using free multiplicative convolutions from Free Probability Theory (FPT).We present a reliable and very fast method for computing the associated spectral densities, for given architecture and initialization. This method has a controlled and proven convergence. Our technique is based on an homotopy method: it is an adaptative Newton-Raphson scheme which chains basins of attraction. We find contiguous lilypad-like basins and step from one to the next, heading towards the objective.In order to demonstrate the relevance of our method we show that the relevant FPT metrics computed before training are highly correlated to final test losses – up to 85%. We also give evidence that a very desirable feature for neural networks is the hyperbolicity of their Jacobian at initialization, while remaining at the edge of chaos. Leon Gerard · Michael Scherbela · Philipp Marquetand · Philipp Grohs Finding accurate solutions to the Schrödinger equation is the key unsolved challenge of computational chemistry. Given its importance for the development of new chemical compounds, decades of research have been dedicated to this problem, but due to the large dimensionality even the best available methods do not yet reach the desired accuracy.Recently the combination of deep learning with Monte Carlo methods has emerged as a promising way to obtain highly accurate energies and moderate scaling of computational cost. In this paper we significantly contribute towards this goal by introducing a novel deep-learning architecture that achieves 40-70% lower energy error at 6x lower computational cost compared to previous approaches. Using our method we establish a new benchmark by calculating the most accurate variational ground state energies ever published for a number of different atoms and molecules.We systematically break down and measure our improvements, focusing in particular on the effect of increasing physical prior knowledge.We surprisingly find that increasing the prior knowledge given to the architecture can actually decrease accuracy. Guillaume Huguet · Daniel Sumner Magruder · Alexander Tong · Oluwadamilola Fasina · Manik Kuchroo · Guy Wolf · Smita Krishnaswamy We present a method called Manifold Interpolating Optimal-Transport Flow (MIOFlow) that learns stochastic, continuous population dynamics from static snapshot samples taken at sporadic timepoints. MIOFlow combines dynamic models, manifold learning, and optimal transport by training neural ordinary differential equations (Neural ODE) to interpolate between static population snapshots as penalized by optimal transport with manifold ground distance. Further, we ensure that the flow follows the geometry by operating in the latent space of an autoencoder that we call a geodesic autoencoder (GAE). In GAE the latent space distance between points is regularized to match a novel multiscale geodesic distance on the data manifold that we define. We show that this method is superior to normalizing flows, Schr\"odinger bridges and other generative models that are designed to flow from noise to data in terms of interpolating between populations. Theoretically, we link these trajectories with dynamic optimal transport. We evaluate our method on simulated data with bifurcations and merges, as well as scRNA-seq data from embryoid body differentiation, and acute myeloid leukemia treatment. Guy Tennenholtz · Nadav Merlis · Lior Shani · Shie Mannor · Uri Shalit · Gal Chechik · Assaf Hallak · Gal Dalal We present the problem of reinforcement learning with exogenous termination. We define the Termination Markov Decision Process (TerMDP), an extension of the MDP framework, in which episodes may be interrupted by an external non-Markovian observer. This formulation accounts for numerous real-world situations, such as a human interrupting an autonomous driving agent for reasons of discomfort. We learn the parameters of the TerMDP and leverage the structure of the estimation problem to provide state-wise confidence bounds. We use these to construct a provably-efficient algorithm, which accounts for termination, and bound its regret. Motivated by our theoretical analysis, we design and implement a scalable approach, which combines optimism (w.r.t. termination) and a dynamic discount factor, incorporating the termination probability. We deploy our method on high-dimensional driving and MinAtar benchmarks. Additionally, we test our approach on human data in a driving setting. Our results demonstrate fast convergence and significant improvement over various baseline approaches. Tiantian Fang · Ruoyu Sun · Alex Schwing Generative adversarial nets (GANs) have been remarkably successful at learning to sample from distributions specified by a given dataset, particularly if the given dataset is reasonably large compared to its dimensionality. However, given limited data, classical GANs have struggled, and strategies like output-regularization, data-augmentation, use of pre-trained models and pruning have been shown to lead to improvements. Notably, the applicability of these strategies is often constrained to particular settings, e.g., availability of a pretrained GAN, or increases training time, e.g., when using pruning. In contrast, we propose a Discriminator gradIent Gap regularized GAN (DigGAN) formulation which can be added to any existing GAN. DigGAN augments existing GANs by encouraging to narrow the gap between the norm of the gradient of a discriminator's prediction w.r.t. real images and w.r.t. the generated samples. We observe this formulation to avoid bad attractors within the GAN loss landscape, and we find DigGAN to significantly improve the results of GAN training when limited data is available. Muning Wen · Jakub Kuba · Runji Lin · Weinan Zhang · Ying Wen · Jun Wang · Yaodong Yang Large sequence models (SM) such as GPT series and BERT have displayed outstanding performance and generalization capabilities in natural language process, vision and recently reinforcement learning. A natural follow-up question is how to abstract multi-agent decision making also as an sequence modeling problem and benefit from the prosperous development of the SMs. In this paper, we introduce a novel architecture named Multi-Agent Transformer (MAT) that effectively casts cooperative multi-agent reinforcement learning (MARL) into SM problems wherein the objective is to map agents' observation sequences to agents' optimal action sequences. Our goal is to build the bridge between MARL and SMs so that the modeling power of modern sequence models can be unleashed for MARL. Central to our MAT is an encoder-decoder architecture which leverages the multi-agent advantage decomposition theorem to transform the joint policy search problem into a sequential decision making process; this renders only linear time complexity for multi-agent problems and, most importantly, endows MAT with monotonic performance improvement guarantee. Unlike prior arts such as Decision Transformer fit only pre-collected offline data, MAT is trained by online trial and error from the environment in an on-policy fashion. To validate MAT, we conduct extensive experiments on StarCraftII, Multi-Agent MuJoCo, Dexterous Hands Manipulation, and Google Research Football benchmarks. Results demonstrate that MAT achieves superior performance and data efficiency compared to strong baselines including MAPPO and HAPPO. Furthermore, we demonstrate that MAT is an excellent few-short learner on unseen tasks regardless of changes in the number of agents.See our project page at https:// Cansu Sancaktar · Sebastian Blaes · Georg Martius It has been a long-standing dream to design artificial agents that explore their environment efficiently via intrinsic motivation, similar to how children perform curious free play. Despite recent advances in intrinsically motivated reinforcement learning (RL), sample-efficient exploration in object manipulation scenarios remains a significant challenge as most of the relevant information lies in the sparse agent-object and object-object interactions. In this paper, we propose to use structured world models to incorporate relational inductive biases in the control loop to achieve sample-efficient and interaction-rich exploration in compositional multi-object environments. By planning for future novelty inside structured world models, our method generates free-play behavior that starts to interact with objects early on and develops more complex behavior over time. Instead of using models only to compute intrinsic rewards, as commonly done, our method showcases that the self-reinforcing cycle between good models and good exploration also opens up another avenue: zero-shot generalization to downstream tasks via model-based planning. After the entirely intrinsic task-agnostic exploration phase, our method solves challenging downstream tasks such as stacking, flipping, pick & place, and throwing that generalizes to unseen numbers and arrangements of objects without any additional training. Katja Schwarz · Axel Sauer · Michael Niemeyer · Yiyi Liao · Andreas Geiger State-of-the-art 3D-aware generative models rely on coordinate-based MLPs to parameterize 3D radiance fields. While demonstrating impressive results, querying an MLP for every sample along each ray leads to slow rendering.Therefore, existing approaches often render low-resolution feature maps and process them with an upsampling network to obtain the final image. Albeit efficient, neural rendering often entangles viewpoint and content such that changing the camera pose results in unwanted changes of geometry or appearance.Motivated by recent results in voxel-based novel view synthesis, we investigate the utility of sparse voxel grid representations for fast and 3D-consistent generative modeling in this paper.Our results demonstrate that monolithic MLPs can indeed be replaced by 3D convolutions when combining sparse voxel grids with progressive growing, free space pruning and appropriate regularization.To obtain a compact representation of the scene and allow for scaling to higher voxel resolutions, our model disentangles the foreground object (modeled in 3D) from the background (modeled in 2D).In contrast to existing approaches, our method requires only a single forward pass to generate a full 3D scene. It hence allows for efficient rendering from arbitrary viewpoints while yielding 3D consistent results with high visual fidelity. Code and models are available at https://github.com/autonomousvision/voxgraf. Giangiacomo Mercatali · Andre Freitas · Vikas Garg Learning disentangled representations is important for unraveling the underlying complex interactions between latent generative factors. Disentanglement has been formalized using a symmetry-centric notion for unstructured spaces, however, graphs have eluded a similarly rigorous treatment. We fill this gap with a new notion of conditional symmetry for disentanglement, and leverage tools from Lie algebras to encode graph properties into subgroups using suitable adaptations of generative models such as Variational Autoencoders. Unlike existing works on disentanglement, the proposed models segregate the latent space into uncoupled and entangled parts. Experiments on synthetic and real datasets suggest that these models can learn effective disengaged representations, and improve performance on downstream tasks such as few-shot classification and molecular generation. Aryan Pedawi · Pawel Gniewek · Chaoyi Chang · Brandon Anderson · Henry van den Bedem Virtual, make-on-demand chemical libraries have transformed early-stage drug discovery by unlocking vast, synthetically accessible regions of chemical space. Recent years have witnessed rapid growth in these libraries from millions to trillions of compounds, hiding undiscovered, potent hits for a variety of therapeutic targets. However, they are quickly approaching a size beyond that which permits explicit enumeration, presenting new challenges for virtual screening. To overcome these challenges, we propose the Combinatorial Synthesis Library Variational Auto-Encoder (CSLVAE). The proposed generative model represents such libraries as a differentiable, hierarchically-organized database. Given a compound from the library, the molecular encoder constructs a query for retrieval, which is utilized by the molecular decoder to reconstruct the compound by first decoding its chemical reaction and subsequently decoding its reactants. Our design minimizes autoregression in the decoder, facilitating the generation of large, valid molecular graphs. Our method performs fast and parallel batch inference for ultra-large synthesis libraries, enabling a number of important applications in early-stage drug discovery. Compounds proposed by our method are guaranteed to be in the library, and thus synthetically and cost-effectively accessible. Importantly, CSLVAE can encode out-of-library compounds and search for in-library analogues. In experiments, we demonstrate the capabilities of the proposed method in the navigation of massive combinatorial synthesis Eric Zelikman · Yuhuai Wu · Jesse Mu · Noah Goodman Generating step-by-step "chain-of-thought" rationales improves language model performance on complex reasoning tasks like mathematics or commonsense question-answering. However, inducing language model rationale generation currently requires either constructing massive rationale datasets or sacrificing accuracy by using only few-shot inference. We propose a technique to iteratively leverage a small number of rationale examples and a large dataset without rationales, to bootstrap the ability to perform successively more complex reasoning. This technique, the "Self-Taught Reasoner" (STaR), relies on a simple loop: generate rationales to answer many questions, prompted with a few rationale examples; if the generated answers are wrong, try again to generate a rationale given the correct answer; fine-tune on all the rationales that ultimately yielded correct answers; repeat. We show that STaR significantly improves performance on multiple datasets compared to a model fine-tuned to directly predict final answers, and performs comparably to fine-tuning a 30$\times$ larger state-of-the-art language model on CommensenseQA. Thus, STaR lets a model improve itself by learning from its own generated reasoning. Tsung-Yen Yang · Justinian Rosca · Karthik Narasimhan · Peter J Ramadge We consider the problem of estimating states (e.g., position and velocity) and physical parameters (e.g., friction, elasticity) from a sequence of observations when provided a dynamic equation that describes the behavior of the system. The dynamic equation can arise from first principles (e.g., Newton’s laws) and provide useful cues for learning, but its physical parameters are unknown. To address this problem, we propose a model that estimates states and physical parameters of the system using two main components. First, an autoencoder compresses a sequence of observations (e.g., sensor measurements, pixel images) into a sequence for the state representation that is consistent with physics by including a simulation of the dynamic equation. Second, an estimator is coupled with the autoencoder to predict the values of the physical parameters. We also theoretically and empirically show that using Fourier feature mappings improves generalization of the estimator in predicting physical parameters compared to raw state sequences. In our experiments on three visual and one sensor measurement tasks, our model imposes interpretability on latent states and achieves improved generalization performance for long-term prediction of system dynamics over state-of-the-art baselines. Mislav Balunovic · Dimitar Dimitrov · Nikola Jovanović · Martin Vechev Recent work shows that sensitive user data can be reconstructed from gradient updates, breaking the key privacy promise of federated learning. While success was demonstrated primarily on image data, these methods do not directly transfer to other domains such as text. In this work, we propose LAMP, a novel attack tailored to textual data, that successfully reconstructs original text from gradients. Our attack is based on two key insights: (i) modelling prior text probability via an auxiliary language model, guiding the search towards more natural text, and (ii) alternating continuous and discrete optimization which minimizes reconstruction loss on embeddings while avoiding local minima via discrete text transformations. Our experiments demonstrate that LAMP is significantly more effective than prior work: it reconstructs 5x more bigrams and $23\%$ longer subsequences on average. Moreover, we are first to recover inputs from batch sizes larger than 1 for textual models. These findings indicate that gradient updates of models operating on textual data leak more information than previously thought. Christos Thrampoulidis · Ganesh Ramachandra Kini · Vala Vakilian · Tina Behnia Neural Collapse refers to the remarkable structural properties characterizing the geometry of class embeddings and classifier weights, found by deep nets when trained beyond zero training error. However, this characterization only holds for balanced data. Here we thus ask whether it can be made invariant to class imbalances. Towards this end, we adopt the unconstrained feature model (UFM), a recent theoretical model for studying neural collapse, and introduce $\text{\emph{Simplex-Encoded-Labels Interpolation}}$ (SELI) as an invariant characterization of the neural collapse phenomenon. Specifically, we prove for the UFM with cross-entropy loss and vanishing regularization that, irrespective of class imbalances, the embeddings and classifiers always interpolate a simplex-encoded label matrix and that their individual geometries are determined by the SVD factors of this same label matrix. We then present extensive experiments on synthetic and real datasets that confirm convergence to the SELI geometry. However, we caution that convergence worsens with increasing imbalances. We theoretically support this finding by showing that unlike the balanced case, when minorities are present, ridge-regularization plays a critical role in tweaking the geometry. This defines new questions and motivates further investigations into the impact of class imbalances on the rates at which first-order methods converge to their asymptotically preferred solutions. Limei Wang · Yi Liu · Yuchao Lin · Haoran Liu · Shuiwang Ji Many real-world data can be modeled as 3D graphs, but learning representations that incorporates 3D information completely and efficiently is challenging. Existing methods either use partial 3D information, or suffer from excessive computational cost. To incorporate 3D information completely and efficiently, we propose a novel message passing scheme that operates within 1-hop neighborhood. Our method guarantees full completeness of 3D information on 3D graphs by achieving global and local completeness. Notably, we propose the important rotation angles to fulfill global completeness. Additionally, we show that our method is orders of magnitude faster than prior methods. We provide rigorous proof of completeness and analysis of time complexity for our methods. As molecules are in essence quantum systems, we build the \underline{com}plete and \underline{e}fficient graph neural network (ComENet) by combing quantum inspired basis functions and the proposed message passing scheme. Experimental results demonstrate the capability and efficiency of ComENet, especially on real-world datasets that are large in both numbers and sizes of graphs. Our code is publicly available as part of the DIG library (\url{https://github.com/divelab/DIG}). Samuel Pinilla · Tingting Mu · Neil Bourne · Jeyan Thiyagalingam Image reconstruction enhanced by regularizers, e.g., to enforce sparsity, low rank or smoothness priors on images, has many successful applications in vision tasks such as computer photography, biomedical and spectral imaging. It has been well accepted that non-convex regularizers normally perform better than convex ones in terms of the reconstruction quality. But their convergence analysis is only established to a critical point, rather than the global optima. To mitigate the loss of guarantees for global optima, we propose to apply the concept of invexity and provide the first list of proved invex regularizers for improving image reconstruction. Moreover, we establish convergence guarantees to global optima for various advanced image reconstruction techniques after being improved by such invex regularization. To the best of our knowledge, this is the first practical work applying invex regularization to improve imaging with global optima guarantees. To demonstrate the effectiveness of invex regularization, numerical experiments are conducted for various imaging tasks using benchmark datasets. Quoc Phong Nguyen · Bryan Kian Hsiang Low · Patrick Jaillet This paper investigates the problem of fairly trading off between payoff and model rewards in collaborative machine learning (ML) where parties aggregate their datasets together to obtain improved ML models over that of each party. Supposing parties can afford the optimal model trained on the aggregated dataset, we propose an allocation scheme that distributes the payoff fairly. Notably, the same scheme can be derived from two different approaches based on (a) desirable properties of the parties' payoffs or (b) that of the underlying payoff flows from one party to another. While the former is conceptually simpler, the latter can be used to handle the practical constraint on the budgets of parties. In particular, we propose desirable properties for achieving a fair adjustment of the payoff flows that can trade off between the model reward's performance and the payoff reward. We empirically demonstrate that our proposed scheme is a sensible solution in several scenarios of collaborative ML with different budget constraints. Yan Dai · Haipeng Luo · Liyu Chen We consider regret minimization for Adversarial Markov Decision Processes (AMDPs), where the loss functions are changing over time and adversarially chosen, and the learner only observes the losses for the visited state-action pairs (i.e., bandit feedback). While there has been a surge of studies on this problem using Online-Mirror-Descent (OMD) methods, very little is known about the Follow-the-Perturbed-Leader (FTPL) methods, which are usually computationally more efficient and also easier to implement since it only requires solving an offline planning problem. Motivated by this, we take a closer look at FTPL for learning AMDPs, starting from the standard episodic finite-horizon setting. We find some unique and intriguing difficulties in the analysis and propose a workaround to eventually show that FTPL is also able to achieve near-optimal regret bounds in this case. More importantly, we then find two significant applications: First, the analysis of FTPL turns out to be readily generalizable to delayed bandit feedback with order-optimal regret, while OMD methods exhibit extra difficulties (Jin et al., 2022). Second, using FTPL, we also develop the first no-regret algorithm for learning communicating AMDPs in the infinite-horizon setting with bandit feedback and stochastic transitions. Our algorithm is efficient assuming access to an offline planning oracle, while even for the easier full-information setting, the only existing algorithm (Chandrasekaran and Tewari, 2021) is computationally inefficient. Osama Hanna · Lin Yang · Christina Fragouli Contextual linear bandits is a rich and theoretically important model that has many practical applications. Recently, this setup gained a lot of interest in applications over wireless where communication constraints can be a performance bottleneck, especially when the contexts come from a large $d$-dimensional space. In this paper, we consider the distributed contextual linear bandit learning problem, where the agents who observe the contexts and take actions are geographically separated from the learner who performs the learning while not seeing the contexts. We assume that contexts are generated from a distribution and propose a method that uses $\approx 5d$ bits per context for the case of unknown context distribution and $0$ bits per context if the context distribution is known, while achieving nearly the same regret bound as if the contexts were directly observable. The former bound improves upon existing bounds by a $\log(T)$ factor, where $T$ is the length of the horizon, while the latter achieves information theoretical tightness. Shubham Gupta · Ambedkar Dukkipati Spectral clustering is popular among practitioners and theoreticians alike. While performance guarantees for spectral clustering are well understood, recent studies have focused on enforcing "fairness" in clusters, requiring them to be "balanced" with respect to a categorical sensitive node attribute (e.g. the race distribution in clusters must match the race distribution in the population). In this paper, we consider a setting where sensitive attributes indirectly manifest in an auxiliary representation graph rather than being directly observed. This graph specifies node pairs that can represent each other with respect to sensitive attributes and is observed in addition to the usual similarity graph. Our goal is to find clusters in the similarity graph while respecting a new individual-level fairness constraint encoded by the representation graph. We develop variants of unnormalized and normalized spectral clustering for this task and analyze their performance under a fair planted partition model induced by the representation graph. This model uses both the cluster membership of the nodes and the structure of the representation graph to generate random similarity graphs. To the best of our knowledge, these are the first consistency results for constrained spectral clustering under an individual-level fairness constraint. Numerical results corroborate our theoretical findings. Matthias Englert · Ranko Lazic Adversarial reprogramming, introduced by Elsayed, Goodfellow, and Sohl-Dickstein, seeks to repurpose a neural network to perform a different task, by manipulating its input without modifying its weights. We prove that two-layer ReLU neural networks with random weights can be adversarially reprogrammed to achieve arbitrarily high accuracy on Bernoulli data models over hypercube vertices, provided the network width is no greater than its input dimension. We also substantially strengthen a recent result of Phuong and Lampert on directional convergence of gradient flow, and obtain as a corollary that training two-layer ReLU neural networks on orthogonally separable datasets can cause their adversarial reprogramming to fail. We support these theoretical results by experiments that demonstrate that, as long as batch normalisation layers are suitably initialised, even untrained networks with random weights are susceptible to adversarial reprogramming. This is in contrast to observations in several recent works that suggested that adversarial reprogramming is not possible for untrained networks to any degree of reliability. Jonas M. Kübler · Vincent Stimper · Simon Buchholz · Krikamol Muandet · Bernhard Schölkopf Two-sample tests are important in statistics and machine learning, both as tools for scientific discovery as well as to detect distribution shifts.This led to the development of many sophisticated test procedures going beyond the standard supervised learning frameworks, whose usage can require specialized knowledge about two-sample testing. We use a simple test that takes the mean discrepancy of a witness function as the test statistic and prove that minimizing a squared loss leads to a witness with optimal testing power. This allows us to leverage recent advancements in AutoML. Without any user input about the problems at hand, and using the same method for all our experiments, our AutoML two-sample test achieves competitive performance on a diverse distribution shift benchmark as well as on challenging two-sample testing problems. In this paper, we study a large-scale multi-agent minimax optimization problem, which models many interesting applications in statistical learning and game theory, including Generative Adversarial Networks (GANs). The overall objective is a sum of agents' private local objective functions. We focus on the federated setting, where agents can perform local computation and communicate with a central server. Most existing federated minimax algorithms either require communication per iteration or lack performance guarantees with the exception of Local Stochastic Gradient Descent Ascent (SGDA), a multiple-local-update descent ascent algorithm which guarantees convergence under a diminishing stepsize. By analyzing Local SGDA under the ideal condition of no gradient noise, we show that generally it cannot guarantee exact convergence with constant stepsizes and thus suffers from slow rates of convergence. To tackle this issue, we propose FedGDA-GT, an improved Federated (Fed) Gradient Descent Ascent (GDA) method based on Gradient Tracking (GT). When local objectives are Lipschitz smooth and strongly-convex-strongly-concave, we prove that FedGDA-GT converges linearly with a constant stepsize to global $\epsilon$-approximation solution with $\mathcal{O}(\log (1/\epsilon))$ rounds of communication, which matches the time complexity of centralized GDA method. Then, we analyze the general distributed minimax problem from a statistical aspect, where the overall objective approximates a true population minimax risk by empirical samples. We provide generalization bounds for learning with this objective through Rademacher complexity analysis. Finally, we numerically show that FedGDA-GT outperforms Local SGDA. Rong Yin · Yong Liu · Weiping Wang · Dan Meng Kernel $k$-means is arguably one of the most common approaches to clustering. In this paper, we investigate the efficiency of kernel $k$-means combined with randomized sketches in terms of both statistical analysis and computational requirements. More precisely, we propose a unified randomized sketches framework to kernel $k$-means and investigate its excess risk bounds, obtaining the state-of-the-art risk bound with only a fraction of computations. Indeed, we prove that it suffices to choose the sketch dimension $\Omega(\sqrt{n})$ to obtain the same accuracy of exact kernel $k$-means with greatly reducing the computational costs, for sub-Gaussian sketches, the randomized orthogonal system (ROS) sketches, and Nystr\"{o}m kernel $k$-means, where $n$ is the number of samples. To the best of our knowledge, this is the first result of this kind for unsupervised learning. Finally, the numerical experiments on simulated data and real-world datasets validate our theoretical analysis. Outstanding Paper Nika Haghtalab · Michael Jordan · Eric Zhao Societal and real-world considerations such as robustness, fairness, social welfare and multi-agent tradeoffs have given rise to multi-distribution learning paradigms, such as collaborative [Blum et al. 2017], group distributionally robust [Sagawa et al. 2019], and fair federated learning [Mohri et al. 2019]. In each of these settings, a learner seeks to minimize its worstcase loss over a set of $n$ predefined distributions, while using as few samples as possible. In this paper, we establish the optimal sample complexity of these learning paradigms and give algorithms that meet this sample complexity. Importantly, our sample complexity bounds exceed that of the sample complexity of learning a single distribution only by an additive factor of $\frac{n\log(n)}{\epsilon^2}$. These improve upon the best known sample complexity of agnostic federated learning by Mohri et al. 2019 by a multiplicative factor of $n$, the sample complexity of collaborative learning by Nguyen and Zakynthinou 2018 by a multiplicative factor $\frac{\log(n)}{\epsilon^3}$, and give the first sample complexity bounds for the group DRO objective of Sagawa et al. 2019. To achieve optimal sample complexity, our algorithms learn to sample and learn from distributions on demand. Our algorithm design and analysis extends stochastic optimization techniques to solve zero-sum games in a new stochastic setting. Yuping Zheng · Andrew Lamperski Langevin algorithms are gradient descent methods augmented with additive noise, and are widely used in Markov Chain Monte Carlo (MCMC) sampling, optimization, and machine learning. In recent years, the non-asymptotic analysis of Langevin algorithms for non-convex learning has been extensively explored. For constrained problems with non-convex losses over a compact convex domain with IID data variables, the projected Langevin algorithm achieves a deviation of $O(T^{-1/4} (\log T)^{1/2})$ from its target distribution \cite{lamperski2021projected} in $1$-Wasserstein distance. In this paper, we obtain a deviation of $O(T^{-1/2} \log T)$ in $1$-Wasserstein distance for non-convex losses with $L$-mixing data variables and polyhedral constraints (which are not necessarily bounded). This improves on the previous bound for constrained problems and matches the best-known bound for unconstrained problems. Tianyi Lin · Zeyu Zheng · Michael Jordan Nonsmooth nonconvex optimization problems broadly emerge in machine learning and business decision making, whereas two core challenges impede the development of efficient solution methods with finite-time convergence guarantee: the lack of computationally tractable optimality criterion and the lack of computationally powerful oracles. The contributions of this paper are two-fold. First, we establish the relationship between the celebrated Goldstein subdifferential~\citep{Goldstein-1977-Optimization} and uniform smoothing, thereby providing the basis and intuition for the design of gradient-free methods that guarantee the finite-time convergence to a set of Goldstein stationary points. Second, we propose the gradient-free method (GFM) and stochastic GFM for solving a class of nonsmooth nonconvex optimization problems and prove that both of them can return a $(\delta,\epsilon)$-Goldstein stationary point of a Lipschitz function $f$ at an expected convergence rate at $O(d^ {3/2}\delta^{-1}\epsilon^{-4})$ where $d$ is the problem dimension. Two-phase versions of GFM and SGFM are also proposed and proven to achieve improved large-deviation results. Finally, we demonstrate the effectiveness of 2-SGFM on training ReLU neural networks with the \textsc{Minst} dataset. Alexander Soen · Ibrahim Alabdulmohsin · Sanmi Koyejo · Yishay Mansour · Nyalleng Moorosi · Richard Nock · Ke Sun · Lexing Xie We introduce a new family of techniques to post-process (``wrap") a black-box classifier in order to reduce its bias. Our technique builds on the recent analysis of improper loss functions whose optimization can correct any twist in prediction, unfairness being treated as a twist. In the post-processing, we learn a wrapper function which we define as an $\alpha$-tree, which modifies the prediction. We provide two generic boosting algorithms to learn $\alpha$-trees. We show that our modification has appealing properties in terms of composition of $\alpha$-trees, generalization, interpretability, and KL divergence between modified and original predictions. We exemplify the use of our technique in three fairness notions: conditional value-at-risk, equality of opportunity, and statistical parity; and provide experiments on several readily available datasets. Michael Sander · Pierre Ablin · Gabriel Peyré Neural Ordinary Differential Equations (Neural ODEs) are the continuous analog of Residual Neural Networks (ResNets). We investigate whether the discrete dynamics defined by a ResNet are close to the continuous one of a Neural ODE. We first quantify the distance between the ResNet's hidden state trajectory and the solution of its corresponding Neural ODE. Our bound is tight and, on the negative side, does not go to $0$ with depth $N$ if the residual functions are not smooth with depth. On the positive side, we show that this smoothness is preserved by gradient descent for a ResNet with linear residual functions and small enough initial loss. It ensures an implicit regularization towards a limit Neural ODE at rate $\frac1N$, uniformly with depth and optimization time. As a byproduct of our analysis, we consider the use of a memory-free discrete adjoint method to train a ResNet by recovering the activations on the fly through a backward pass of the network, and show that this method theoretically succeeds at large depth if the residual functions are Lipschitz with the input. We then show that Heun's method, a second order ODE integration scheme, allows for better gradient estimation with the adjoint method when the residual functions are smooth with depth. We experimentally validate that our adjoint method succeeds at large depth, and that Heun’s method needs fewer layers to succeed. We finally use the adjoint method successfully for fine-tuning very deep ResNets without memory consumption in the residual layers. Ladislav Rampášek · Michael Galkin · Vijay Prakash Dwivedi · Anh Tuan Luu · Guy Wolf · Dominique Beaini We propose a recipe on how to build a general, powerful, scalable (GPS) graph Transformer with linear complexity and state-of-the-art results on a diverse set of benchmarks. Graph Transformers (GTs) have gained popularity in the field of graph representation learning with a variety of recent publications but they lack a common foundation about what constitutes a good positional or structural encoding, and what differentiates them. In this paper, we summarize the different types of encodings with a clearer definition and categorize them as being $\textit{local}$, $\textit{global}$ or $\ textit{relative}$. The prior GTs are constrained to small graphs with a few hundred nodes, here we propose the first architecture with a complexity linear in the number of nodes and edges $O(N+E)$ by decoupling the local real-edge aggregation from the fully-connected Transformer. We argue that this decoupling does not negatively affect the expressivity, with our architecture being a universal function approximator on graphs. Our GPS recipe consists of choosing 3 main ingredients: (i) positional/structural encoding, (ii) local message-passing mechanism, and (iii) global attention mechanism. We provide a modular framework $\textit{GraphGPS}$ that supports multiple types of encodings and that provides efficiency and scalability both in small and large graphs. We test our architecture on 16 benchmarks and show highly competitive results in all of them, show-casing the empirical benefits gained by the modularity and the combination of different strategies. Tal Schuster · Adam Fisch · Jai Gupta · Mostafa Dehghani · Dara Bahri · Vinh Tran · Yi Tay · Donald Metzler Recent advances in Transformer-based large language models (LLMs) have led to significant performance improvements across many tasks. These gains come with a drastic increase in the models' size, potentially leading to slow and costly use at inference time. In practice, however, the series of generations made by LLMs is composed of varying levels of difficulty. While certain predictions truly benefit from the models' full capacity, other continuations are more trivial and can be solved with reduced compute. In this work, we introduce Confident Adaptive Language Modeling (CALM), a framework for dynamically allocating different amounts of compute per input and generation timestep. Early exit decoding involves several challenges that we address here, such as: (1) what confidence measure to use; (2) connecting sequence-level constraints to local per-token exit decisions; and (3) attending back to missing hidden representations due to early exits in previous tokens. Through theoretical analysis and empirical experiments on three diverse text generation tasks, we demonstrate the efficacy of our framework in reducing compute---potential speedup of up to $\times 3$---while provably maintaining high performance. Li-Cheng Lan · Huan Zhang · Ti-Rong Wu · Meng-Yu Tsai · I-Chen Wu · Cho-Jui Hsieh The success of AlphaZero (AZ) has demonstrated that neural-network-based Go AIs can surpass human performance by a large margin. Given that the state space of Go is extremely large and a human player can play the game from any legal state, we ask whether adversarial states exist for Go AIs that may lead them to play surprisingly wrong actions.In this paper, we first extend the concept of adversarial examples to the game of Go: we generate perturbed states that are ``semantically'' equivalent to the original state by adding meaningless moves to the game, and an adversarial state is a perturbed state leading to an undoubtedly inferior action that is obvious even for Go beginners. However, searching the adversarial state is challenging due to the large, discrete, and non-differentiable search space. To tackle this challenge, we develop the first adversarial attack on Go AIs that can efficiently search for adversarial states by strategically reducing the search space. This method can also be extended to other board games such as NoGo. Experimentally, we show that the actions taken by both Policy-Value neural network (PV-NN) and Monte Carlo tree search (MCTS) can be misled by adding one or two meaningless stones; for example, on 58\% of the AlphaGo Zero self-play games, our method can make the widely used KataGo agent with 50 simulations of MCTS plays a losing action by adding two meaningless stones. We additionally evaluated the adversarial examples found by our algorithm with amateur human Go players, and 90\% of examples indeed lead the Go agent to play an obviously inferior action. Ourcode is available at \url{https://PaperCode.cc/GoAttack}. Non-rigid point cloud registration is a key component in many computer vision and computer graphics applications. The high complexity of the unknown non-rigid motion make this task a challenging problem. In this paper, we break down this problem via hierarchical motion decomposition. Our method called Neural Deformation Pyramid (NDP) represents non-rigid motion using a pyramid architecture. Each pyramid level, denoted by a Multi-Layer Perception (MLP), takes as input a sinusoidally encoded 3D point and outputs its motion increments from the previous level. The sinusoidal function starts with a low input frequency and gradually increases when the pyramid level goes down. This allows a multi-level rigid to nonrigid motion decomposition and also speeds up the solving by ×50 times compared to the existing MLP-based approach. Our method achieves advanced partial-to-partial non-rigid point cloud registration results on the 4DMatch/4DLoMatchbenchmark under both no-learned and supervised settings. Juhong Min · Yucheng Zhao · Chong Luo · Minsu Cho Human vision possesses a special type of visual processing systems called peripheral vision. Partitioning the entire visual field into multiple contour regions based on the distance to the center of our gaze, the peripheral vision provides us the ability to perceive various visual features at different regions. In this work, we take a biologically inspired approach and explore to model peripheral vision in deep neural networks for visual recognition. We propose to incorporate peripheral position encoding to the multi-head self-attention layers to let the network learn to partition the visual field into diverse peripheral regions given training data. We evaluate the proposed network, dubbed PerViT, on ImageNet-1K and systematically investigate the inner workings of the model for machine perception, showing that the network learns to perceive visual data similarly to the way that human vision does. The performance improvements in image classification over the baselines across different model sizes demonstrate the efficacy of the proposed method. Sikun Lin · Thomas Sprague · Ambuj K Singh Understanding how the brain encodes external stimuli and how these stimuli can be decoded from the measured brain activities are long-standing and challenging questions in neuroscience. In this paper, we focus on reconstructing the complex image stimuli from fMRI (functional magnetic resonance imaging) signals. Unlike previous works that reconstruct images with single objects or simple shapes, our work aims to reconstruct image stimuli that are rich in semantics, closer to everyday scenes, and can reveal more perspectives. However, data scarcity of fMRI datasets is the main obstacle to applying state-of-the-art deep learning models to this problem. We find that incorporating an additional text modality is beneficial for the reconstruction problem compared to directly translating brain signals to images. Therefore, the modalities involved in our method are: (i) voxel-level fMRI signals, (ii) observed images that trigger the brain signals, and (iii) textual description of the images. To further address data scarcity, we leverage an aligned vision-language latent space pre-trained on massive datasets. Instead of training models from scratch to find a latent space shared by the three modalities, we encode fMRI signals into this pre-aligned latent space. Then, conditioned on embeddings in this space, we reconstruct images with a generative model. The reconstructed images from our pipeline balance both naturalness and fidelity: they are photo-realistic and capture the ground truth image contents well. Jian Wang · Chenhui Gou · Qiman Wu · Haocheng Feng · Junyu Han · Errui Ding · Jingdong Wang Recently, transformer-based networks have shown impressive results in semantic segmentation. Yet for real-time semantic segmentation, pure CNN-based approaches still dominate in this field, due to the time-consuming computation mechanism of transformer. We propose RTFormer, an efficient dual-resolution transformer for real-time semantic segmenation, which achieves better trade-off between performance and efficiency than CNN-based models. To achieve high inference efficiency on GPU-like devices, our RTFormer leverages GPU-Friendly Attention with linear complexity and discards the multi-head mechanism. Besides, we find that cross-resolution attention is more efficient to gather global context information for high-resolution branch by spreading the high level knowledge learned from low-resolution branch. Extensive experiments on mainstream benchmarks demonstrate the effectiveness of our proposed RTFormer, it achieves state-of-the-art on Cityscapes, CamVid and COCOStuff, and shows promising results on ADE20K. Qian Huang · Hongyu Ren · Jure Leskovec Few-shot knowledge graph (KG) completion task aims to perform inductive reasoning over the KG: given only a few support triplets of a new relation $\bowtie$ (e.g., (chop,$\bowtie$,kitchen), (read,$\ bowtie$,library), the goal is to predict the query triplets of the same unseen relation $\bowtie$, e.g., (sleep,$\bowtie$,?). Current approaches cast the problem in a meta-learning framework, where the model needs to be first jointly trained over many training few-shot tasks, each being defined by its own relation, so that learning/prediction on the target few-shot task can be effective. However, in real-world KGs, curating many training tasks is a challenging ad hoc process. Here we propose Connection Subgraph Reasoner (CSR), which can make predictions for the target few-shot task directly without the need for pre-training on the human curated set of training tasks. The key to CSR is that we explicitly model a shared connection subgraph between support and query triplets, as inspired by the principle of eliminative induction. To adapt to specific KG, we design a corresponding self-supervised pretraining scheme with the objective of reconstructing automatically sampled connection subgraphs. Our pretrained model can then be directly applied to target few-shot tasks on without the need for training few-shot tasks. Extensive experiments on real KGs, including NELL, FB15K-237, and ConceptNet, demonstrate the effectiveness of our framework: we show that even a learning-free implementation of CSR can already perform competitively to existing methods on target few-shot tasks; with pretraining, CSR can achieve significant gains of up to 52% on the more challenging inductive few-shot tasks where the entities are also unseen during (pre)training. Mengda Xu · Manuela Veloso · Shuran Song We introduce ASPiRe (Adaptive Skill Prior for RL), a new approach that leverages prior experience to accelerate reinforcement learning. Unlike existing methods that learn a single skill prior from a large and diverse dataset, our framework learns a library of different distinction skill priors (i.e., behavior priors) from a collection of specialized datasets, and learns how to combine them to solve a new task. This formulation allows the algorithm to acquire a set of specialized skill priors that are more reusable for downstream tasks; however, it also brings up additional challenges of how to effectively combine these unstructured sets of skill priors to form a new prior for new tasks. Specifically, it requires the agent not only to identify which skill prior(s) to use but also how to combine them (either sequentially or concurrently) to form a new prior. To achieve this goal, ASPiRe includes Adaptive Weight Module (AWM) that learns to infer an adaptive weight assignment between different skill priors and uses them to guide policy learning for downstream tasks via weighted Kullback-Leibler divergences. Our experiments demonstrate that ASPiRe can significantly accelerate the learning of new downstream tasks in the presence of multiple priors and show improvement on competitive baselines. Xiaoyang Wu · Yixing Lao · Li Jiang · Xihui Liu · Hengshuang Zhao As a pioneering work exploring transformer architecture for 3D point cloud understanding, Point Transformer achieves impressive results on multiple highly competitive benchmarks. In this work, we analyze the limitations of the Point Transformer and propose our powerful and efficient Point Transformer V2 model with novel designs that overcome the limitations of previous work. In particular, we first propose group vector attention, which is more effective than the previous version of vector attention. Inheriting the advantages of both learnable weight encoding and multi-head attention, we present a highly effective implementation of grouped vector attention with a novel grouped weight encoding layer. We also strengthen the position information for attention by an additional position encoding multiplier. Furthermore, we design novel and lightweight partition-based pooling methods which enable better spatial alignment and more efficient sampling. Extensive experiments show that our model achieves better performance than its predecessor and achieves state-of-the-art on several challenging 3D point cloud understanding benchmarks, including 3D point cloud segmentation on ScanNet v2 and S3DIS and 3D point cloud classification on ModelNet40. Our code will be available at https://github.com/Gofinge/PointTransformerV2. hanxue liang · Zhiwen Fan · Rishov Sarkar · Ziyu Jiang · Tianlong Chen · Kai Zou · Yu Cheng · Cong Hao · Zhangyang Wang Multi-task learning (MTL) encapsulates multiple learned tasks in a single model and often lets those tasks learn better jointly. Multi-tasking models have become successful and often essential for many sophisticated systems such as autonomous driving and indoor robots. However, when deploying MTL onto those real-world systems that are often resource-constrained or latency-sensitive, two prominent challenges arise: (i) during training, simultaneously optimizing all tasks is often difficult due to gradient conflicts across tasks, and the challenge is amplified when a growing number of tasks have to be squeezed into one compact model; (ii) at inference, current MTL regimes have to activate nearly the entire model even to just execute a single task. Yet most real systems demand only one or two tasks at each moment, while flexibly switching between tasks per need: therefore such “all tasks activated” inference is also highly inefficient and non-scalable in practice. In this paper, we present a model-accelerator co-design framework to enable efficient on-device MTL, that tackles both training and inference bottlenecks. Our framework, dubbed M³ViT, customizes mixture-of-experts (MoE) layers into a vision transformer (ViT) backbone for MTL, and sparsely activates task-specific experts during training, which effectively disentangles the parameter spaces to avoid different tasks’ training conflicts. Then at inference with any task of interest, the same design allows for activating only the task-corresponding sparse “expert” pathway, instead of the full model. Our new model design is further enhanced by hardware-level innovations, in particular, a novel computation reordering scheme tailored for memory-constrained MTL that achieves zero-overhead switching between tasks and can scale to any number of experts. Extensive experiments on PASCAL-Context and NYUD-v2 datasets at both software and hardware levels are conducted to demonstrate the effectiveness of the proposed design. When executing the practical scenario of single-task inference, M³ViT achieves higher accuracies than encoder-focused MTL methods, while significantly reducing 88% inference FLOPs. When implemented on a hardware platform of one Xilinx ZCU104 FPGA, our co-design framework reduces the memory requirement by 2.40×, while achieving energy efficiency (as the product of latency and power) up to 9.23× times higher than a comparable FPGA baseline. Naigang Wang · Chi-Chun (Charlie) Liu · Swagath Venkataramani · Sanchari Sen · Chia-Yu Chen · Kaoutar El Maghraoui · Vijayalakshmi (Viji) Srinivasan · Leland Chang Pre-trained transformer models have achieved remarkable success in natural language processing (NLP) and have recently become competitive alternatives to Convolution Neural Networks (CNN) and Recurrent Neural Networks (RNN) in vision and speech tasks, respectively. Due to excellent computational efficiency and scalability, transformer models can be trained on exceedingly large amounts of data; however, model sizes can grow tremendously. As high performance, large-scale, and pre-trained transformer models become available for users to download and fine-tune for customized downstream tasks, the deployment of these models becomes challenging due to the vast amount of operations and large memory footprint. To address this challenge, we introduce methods to deeply compress pre-trained transformer models across three major application domains: NLP, speech, and vision. Specifically, we quantize transformer backbones down to 4-bit and further achieve 50% fine-grained structural sparsity on pre-trained BERT, Wav2vec2.0 and Vision Transformer (ViT) models to achieve 16x compression while maintaining model accuracy. This is achieved by identifying the critical initialization for quantization/sparsity aware fine-tuning, as well as novel techniques including quantizers with zero-preserving format and scheduled dropout. These hardware-friendly techniques need only to be applied in the fine-tuning phase for downstream tasks; hence, are especially suitable for acceleration and deployment of pre-trained transformer models. Yilin He · Chaojie Wang · Hao Zhang · Bo Chen · Mingyuan Zhou Graph neural networks (GNNs), which propagate the node features through the edges and learn how to transform the aggregated features under label supervision, have achieved great success in supervised feature extraction for both node-level and graph-level classification tasks. However, GNNs typically treat the graph structure as given and ignore how the edges are formed. This paper introduces a graph generative process to model how the observed edges are generated by aggregating the node interactions over a set of overlapping node communities, each of which contributes to the edges via a logical OR mechanism. Based on this generative model, we partition each edge into the summation of multiple community-specific weighted edges and use them to define community-specific GNNs. A variational inference framework is proposed to jointly learn a GNN-based inference network that partitions the edges into different communities, these community-specific GNNs, and a GNN-based predictor that combines community-specific GNNs for the end classification task. Extensive evaluations on real-world graph datasets have verified the effectiveness of the proposed method in learning discriminative representations for both node-level and graph-level classification tasks. Peng Gao · Teli Ma · Hongsheng Li · Ziyi Lin · Jifeng Dai · Yu Qiao Vision Transformers (ViT) become widely-adopted architectures for various vision tasks. Masked auto-encoding for feature pretraining and multi-scale hybrid convolution-transformer architectures can further unleash the potentials of ViT, leading to state-of-the-art performances on image classification, detection and semantic segmentation. In this paper, our MCMAE framework demonstrates that multi-scale hybrid convolution-transformer can learn more discriminative representations via the mask auto-encoding scheme. However, directly using the original masking strategy leads to the heavy computational cost and pretraining-finetuning discrepancy. To tackle the issue, we adopt the masked convolution to prevent information leakage in the convolution blocks. A simple block-wise masking strategy is proposed to ensure computational efficiency. We also propose to more directly supervise the multi-scale features of the encoder to boost multi-scale features. Based on our pretrained MCMAE models, MCMAE-Base improves ImageNet-1K finetuning accuracy by 1.4% compared with MAE-Base. On object detection, MCMAE-Base finetuned for only 25 epochs surpasses MAE-Base fined-tuned for 100 epochs by 2.9% box AP and 2.2% mask AP respectively. Code and pretrained models are available at \url{https://github.com/Alpha-VL/ConvMAE}. Zezhong Xu · Wen Zhang · Peng Ye · Hui Chen · Huajun Chen Answering complex queries over knowledge graphs (KG) is an important yet challenging task because of the KG incompleteness issue and cascading errors during reasoning. Recent query embedding (QE) approaches embed the entities and relations in a KG and the first-order logic (FOL) queries into a low dimensional space, making the query can be answered by dense similarity searching. However, previous works mainly concentrate on the target answers, ignoring intermediate entities' usefulness, which is essential for relieving the cascading error problem in logical query answering. In addition, these methods are usually designed with their own geometric or distributional embeddings to handle logical operators like union, intersection, and negation, with the sacrifice of the accuracy of the basic operator -- projection, and they could not absorb other embedding methods to their models. In this work, we propose a Neural and Symbolic Entangled framework (ENeSy) for complex query answering, which enables the neural and symbolic reasoning to enhance each other to alleviate the cascading error and KG incompleteness. The projection operator in ENeSy could be any embedding method with the capability of link prediction, and the other FOL operators are handled without parameters. With both neural and symbolic reasoning results contained, ENeSy answers queries in ensembles. We evaluate ENeSy on complex query answering benchmarks, and ENeSy achieves the state-of-the-art, especially in the setting of training model only with the link prediction task. Liting Lin · Heng Fan · Zhipeng Zhang · Yong Xu · Haibin Ling Recently Transformer has been largely explored in tracking and shown state-of-the-art (SOTA) performance. However, existing efforts mainly focus on fusing and enhancing features generated by convolutional neural networks (CNNs). The potential of Transformer in representation learning remains under-explored. In this paper, we aim to further unleash the power of Transformer by proposing a simple yet efficient fully-attentional tracker, dubbed SwinTrack, within classic Siamese framework. In particular, both representation learning and feature fusion in SwinTrack leverage the Transformer architecture, enabling better feature interactions for tracking than pure CNN or hybrid CNN-Transformer frameworks. Besides, to further enhance robustness, we present a novel motion token that embeds historical target trajectory to improve tracking by providing temporal context. Our motion token is lightweight with negligible computation but brings clear gains. In our thorough experiments, SwinTrack exceeds existing approaches on multiple benchmarks. Particularly, on the challenging LaSOT, SwinTrack sets a new record with 0.713 SUC score. It also achieves SOTA results on other benchmarks. We expect SwinTrack to serve as a solid baseline for Transformer tracking and facilitate future research. Our codes and results are released at https://github.com/LitingLin/ Xingyi He · Jiaming Sun · Yuang Wang · Di Huang · Hujun Bao · Xiaowei Zhou We propose a new method for object pose estimation without CAD models. The previous feature-matching-based method OnePose has shown promising results under a one-shot setting which eliminates the need for CAD models or object-specific training. However, OnePose relies on detecting repeatable image keypoints and is thus prone to failure on low-textured objects. We propose a keypoint-free pose estimation pipeline to remove the need for repeatable keypoint detection. Built upon the detector-free feature matching method LoFTR, we devise a new keypoint-free SfM method to reconstruct a semi-dense point-cloud model for the object. Given a query image for object pose estimation, a 2D-3D matching network directly establishes 2D-3D correspondences between the query image and the reconstructed point-cloud model without first detecting keypoints in the image. Experiments show that the proposed pipeline outperforms existing one-shot CAD-model-free methods by a large margin and is comparable to CAD-model-based methods on LINEMOD even for low-textured objects. We also collect a new dataset composed of 80 sequences of 40 low-textured objects to facilitate future research on one-shot object pose estimation. The supplementary material, code and dataset are available on the project page: https://zju3dv.github.io/oneposeplusplus/. Andrew Luo · Yilun Du · Michael Tarr · Josh Tenenbaum · Antonio Torralba · Chuang Gan Our environment is filled with rich and dynamic acoustic information. When we walk into a cathedral, the reverberations as much as appearance inform us of the sanctuary's wide open space. Similarly, as an object moves around us, we expect the sound emitted to also exhibit this movement. While recent advances in learned implicit functions have led to increasingly higher quality representations of the visual world, there have not been commensurate advances in learning spatial auditory representations. To address this gap, we introduce Neural Acoustic Fields (NAFs), an implicit representation that captures how sounds propagate in a physical scene. By modeling acoustic propagation in a scene as a linear time-invariant system, NAFs learn to continuously map all emitter and listener location pairs to a neural impulse response function that can then be applied to arbitrary sounds. We demonstrate NAFs on both synthetic and real data, and show that the continuous nature of NAFs enables us to render spatial acoustics for a listener at arbitrary locations. We further show that the representation learned by NAFs can help improve visual learning with sparse views. Finally we show that a representation informative of scene structure emerges during the learning of NAFs. Josh Gardner · Zoran Popovic · Ludwig Schmidt Researchers have proposed many methods for fair and robust machine learning, but comprehensive empirical evaluation of their subgroup robustness is lacking. In this work, we address this gap in the context of tabular data, where sensitive subgroups are clearly-defined, real-world fairness problems abound, and prior works often do not compare to state-of-the-art tree-based models as baselines. We conduct an empirical comparison of several previously-proposed methods for fair and robust learning alongside state-of-the-art tree-based methods and other baselines. Via experiments with more than $340{,}000$ model configurations on eight datasets, we show that tree-based methods have strong subgroup robustness, even when compared to robustness- and fairness-enhancing methods. Moreover, the best tree-based models tend to show good performance over a range of metrics, while robust or group-fair models can show brittleness, with significant performance differences across different metrics for a fixed model. We also demonstrate that tree-based models show less sensitivity to hyperparameter configurations, and are less costly to train. Our work suggests that tree-based ensemble models make an effective baseline for tabular data, and are a sensible default when subgroup robustness is desired. See https://github.com/jpgard/subgroup-robustness-grows-on-trees for code to reproduce our experiments and detailed experimental results. Leena Chennuru Vankadara · Luca Rendsburg · Ulrike Luxburg · Debarghya Ghoshdastidar Recent work shows that in complex model classes, interpolators can achieve statistical generalization and even be optimal for statistical learning. However, despite increasing interest in learning models with good causal properties, there is no understanding of whether such interpolators can also achieve causal generalization. To address this gap, we study causal learning from observational data through the lens of interpolation and its counterpart---regularization. Under a simple linear causal model, we derive precise asymptotics for the causal risk of the min-norm interpolator and ridge regressors in the high-dimensional regime. We find a large range of behavior that can be precisely characterized by a new measure of confounding strength. When confounding strength is positive, which holds under independent causal mechanisms---a standard assumption in causal learning---we find that interpolators cannot be optimal. Indeed, causal learning requires stronger regularization than statistical learning. Beyond this assumption, when confounding is negative, we observe a phenomenon of self-induced regularization due to positive alignment between statistical and causal signals. Here, causal learning requires weaker regularization than statistical learning, interpolators can be optimal, and optimal regularization can even be negative. Zikui Cai · Chengyu Song · Srikanth Krishnamurthy · Amit Roy-Chowdhury · Salman Asif Blackbox adversarial attacks can be categorized into transfer- and query-based attacks. Transfer methods do not require any feedback from the victim model, but provide lower success rates compared to query-based methods. Query attacks often require a large number of queries for success. To achieve the best of both approaches, recent efforts have tried to combine them, but still require hundreds of queries to achieve high success rates (especially for targeted attacks). In this paper, we propose a novel method for Blackbox Attacks via Surrogate Ensemble Search (BASES) that can generate highly successful blackbox attacks using an extremely small number of queries. We first define a perturbation machine that generates a perturbed image by minimizing a weighted loss function over a fixed set of surrogate models. To generate an attack for a given victim model, we search over the weights in the loss function using queries generated by the perturbation machine. Since the dimension of the search space is small (same as the number of surrogate models), the search requires a small number of queries. We demonstrate that our proposed method achieves better success rate with at least $30\times$ fewer queries compared to state-of-the-art methods on different image classifiers trained with ImageNet (including VGG-19, DenseNet-121, and ResNext-50). In particular, our method requires as few as 3 queries per image (on average) to achieve more than a $90\%$ success rate for targeted attacks and 1--2 queries per image for over a $99\%$ success rate for untargeted attacks. Our method is also effective on Google Cloud Vision API and achieved a $91\%$ untargeted attack success rate with 2.9 queries per image. We also show that the perturbations generated by our proposed method are highly transferable and can be adopted for hard-label blackbox attacks. Furthermore, we argue that BASES can be used to create attacks for a variety of tasks and show its effectiveness for attacks on object detection models. Our code is available at https://github.com/CSIPlab/BASES. Lang Huang · Shan You · Mingkai Zheng · Fei Wang · Chen Qian · Toshihiko Yamasaki We present an efficient approach for Masked Image Modeling (MIM) with hierarchical Vision Transformers (ViTs), allowing the hierarchical ViTs to discard masked patches and operate only on the visible ones. Our approach consists of three key designs. First, for window attention, we propose a Group Window Attention scheme following the Divide-and-Conquer strategy. To mitigate the quadratic complexity of the self-attention w.r.t. the number of patches, group attention encourages a uniform partition that visible patches within each local window of arbitrary size can be grouped with equal size, where masked self-attention is then performed within each group. Second, we further improve the grouping strategy via the Dynamic Programming algorithm to minimize the overall computation cost of the attention on the grouped patches. Third, as for the convolution layers, we convert them to the Sparse Convolution that works seamlessly with the sparse data, i.e., the visible patches in MIM. As a result, MIM can now work on most, if not all, hierarchical ViTs in a green and efficient way. For example, we can train the hierarchical ViTs, e.g., Swin Transformer and Twins Transformer, about 2.7$\times$ faster and reduce the GPU memory usage by 70%, while still enjoying competitive performance on ImageNet classification and the superiority on downstream COCO object detection benchmarks. Lingxiao Zhao · Neil Shah · Leman Akoglu Message passing neural networks (MPNNs) have become a dominant flavor of graph neural networks (GNNs) in recent years. Yet, MPNNs come with notable limitations; namely, they are at most as powerful as the 1-dimensional Weisfeiler-Leman (1-WL) test in distinguishing graphs in a graph isomorphism testing frame-work. To this end, researchers have drawn inspiration from the k-WL hierarchy to develop more expressive GNNs. However, current k-WL-equivalent GNNs are not practical for even small values of k, as k-WL becomes combinatorially more complex as k grows. At the same time, several works have found great empirical success in graph learning tasks without highly expressive models, implying that chasing expressiveness with a “coarse-grained ruler” of expressivity like k-WL is often unneeded in practical tasks. To truly understand the expressiveness-complexity tradeoff, one desires a more “fine-grained ruler,” which can more gradually increase expressiveness. Our work puts forth such a proposal: Namely, we first propose the (k, c)(≤)-SETWL hierarchy with greatly reduced complexity from k-WL, achieved by moving from k-tuples of nodes to sets with ≤k nodes defined over ≤c connected components in the induced original graph. We show favorable theoretical results for this model in relation to k-WL, and concretize it via (k, c)(≤)-SETGNN, which is as expressive as (k, c)(≤)-SETWL. Our model is practical and progressively-expressive, increasing in power with k and c. We demonstrate effectiveness on several benchmark datasets, achieving several state-of-the-art results with runtime and memory usage applicable to practical graphs. We open source our implementation at https://github.com/LingxiaoShawn/KCSetGNN. Ho Huu Nghia Nguyen · Tan Nguyen · Huyen Vo · Stanley Osher · Thieu Vo We propose the Nesterov neural ordinary differential equations (NesterovNODEs), whose layers solve the second-order ordinary differential equations (ODEs) limit of Nesterov's accelerated gradient (NAG) method, and a generalization called GNesterovNODEs. Taking the advantage of the convergence rate $\mathcal{O}(1/k^{2})$ of the NAG scheme, GNesterovNODEs speed up training and inference by reducing the number of function evaluations (NFEs) needed to solve the ODEs. We also prove that the adjoint state of a GNesterovNODEs also satisfies a GNesterovNODEs, thus accelerating both forward and backward ODE solvers and allowing the model to be scaled up for large-scale tasks. We empirically corroborate the advantage of GNesterovNODEs on a wide range of practical applications, including point cloud separation, image classification, and sequence modeling. Compared to NODEs, GNesterovNODEs require a significantly smaller number of NFEs while achieving better accuracy across our Difan Zou · Jingfeng Wu · Vladimir Braverman · Quanquan Gu · Sham Kakade Stochastic gradient descent (SGD) has achieved great success due to its superior performance in both optimization and generalization. Most of existing generalization analyses are made for single-pass SGD, which is a less practical variant compared to the commonly-used multi-pass SGD. Besides, theoretical analyses for multi-pass SGD often concern a worst-case instance in a class of problems, which may be pessimistic to explain the superior generalization ability for some particular problem instance. The goal of this paper is to provide an instance-dependent excess risk bound of multi-pass SGD for least squares in the interpolation regime, which is expressed as a function of the iteration number, stepsize, and data covariance. We show that the excess risk of SGD can be exactly decomposed into the excess risk of GD and a positive fluctuation error, suggesting that SGD always performs worse, instance-wisely, than GD, in generalization. On the other hand, we show that although SGD needs more iterations than GD to achieve the same level of excess risk, it saves the number of stochastic gradient evaluations, and therefore is preferable in terms of computational time. Sarah Sachs · Hedi Hadiji · Tim van Erven · Cristóbal Guzmán Stochastic and adversarial data are two widely studied settings in online learning. But many optimizationtasks are neither i.i.d. nor fully adversarial, which makes it of fundamental interest to get a better theoretical understanding of the world between these extremes. In this work we establish novel regret bounds for online convex optimization in a setting that interpolates between stochastic i.i.d. and fully adversarial losses. By exploiting smoothness of the expected losses, these bounds replace a dependence on the maximum gradient length by the variance of the gradients, which was previously known only for linear losses. In addition, they weaken the i.i.d. assumption by allowing, for example, adversarially poisoned rounds, which were previously considered in the expert and bandit setting. Our results extend this to the online convex optimization framework. In the fully i.i.d. case, our bounds match the rates one would expect from results in stochastic acceleration, and in the fully adversarial case they gracefully deteriorate to match the minimax regret. We further provide lower bounds showing that our regret upper bounds aretight for all intermediate regimes in terms of the stochastic variance and theadversarial variation of the loss gradients. Khaled Nakhleh · I-Hong Hou We consider the problem of learning the optimal threshold policy for control problems. Threshold policies make control decisions by evaluating whether an element of the system state exceeds a certain threshold, whose value is determined by other elements of the system state. By leveraging the monotone property of threshold policies, we prove that their policy gradients have a surprisingly simple expression. We use this simple expression to build an off-policy actor-critic algorithm for learning the optimal threshold policy. Simulation results show that our policy significantly outperforms other reinforcement learning algorithms due to its ability to exploit the monotone property.In addition, we show that the Whittle index, a powerful tool for restless multi-armed bandit problems, is equivalent to the optimal threshold policy for an alternative problem. This observation leads to a simple algorithm that finds the Whittle index by learning the optimal threshold policy in the alternative problem. Simulation results show that our algorithm learns the Whittle index much faster than several recent studies that learn the Whittle index through indirect means. Sangho Lim · Eun-Gyeol Oh · Hongseok Yang SATNet is a differentiable constraint solver with a custom backpropagation algorithm, which can be used as a layer in a deep-learning system. It is a promising proposal for bridging deep learning and logical reasoning. In fact, SATNet has been successfully applied to learn, among others, the rules of a complex logical puzzle, such as Sudoku, just from input and output pairs where inputs are given as images. In this paper, we show how to improve the learning of SATNet by exploiting symmetries in the target rules of a given but unknown logical puzzle or more generally a logical formula. We present SymSATNet, a variant of SATNet that translates the given symmetries of the target rules to a condition on the parameters of SATNet and requires that the parameters should have a particular parametric form that guarantees the condition. The requirement dramatically reduces the number of parameters to learn for the rules with enough symmetries, and makes the parameter learning of SymSATNet much easier than that of SATNet. We also describe a technique for automatically discovering symmetries of the target rules from examples. Our experiments with Sudoku and Rubik's cube show the substantial improvement of SymSATNet over the baseline SATNet. Charles Lovering · Jessica Forde · George Konidaris · Ellie Pavlick · Michael Littman AlphaZero, an approach to reinforcement learning that couples neural networks and Monte Carlo tree search (MCTS), has produced state-of-the-art strategies for traditional board games like chess, Go, shogi, and Hex. While researchers and game commentators have suggested that AlphaZero uses concepts that humans consider important, it is unclear how these concepts are captured in the network. We investigate AlphaZero's internal representations in the game of Hex using two evaluation techniques from natural language processing (NLP): model probing and behavioral tests. In doing so, we introduce several new evaluation tools to the RL community, and illustrate how evaluations other than task performance can be used to provide a more complete picture of a model's strengths and weaknesses. Our analyses in the game of Hex reveal interesting patterns and generate some testable hypotheses about how such models learn in general. For example, we find that the MCTS discovers concepts before the neural network learns to encode them. We also find that concepts related to short-term end-game planning are best encoded in the final layers of the model, whereas concepts related to long-term planning are encoded in the middle layers of the model. Anurag Ajay · Abhishek Gupta · Dibya Ghosh · Sergey Levine · Pulkit Agrawal Meta-reinforcement learning algorithms provide a data-driven way to acquire policies that quickly adapt to many tasks with varying rewards or dynamics functions. However, learned meta-policies are often effective only on the exact task distribution on which they were trained and struggle in the presence of distribution shift of test-time rewards or transition dynamics. In this work, we develop a framework for meta-RL algorithms that are able to behave appropriately under test-time distribution shifts in the space of tasks. Our framework centers on an adaptive approach to distributional robustness that trains a population of meta-policies to be robust to varying levels of distribution shift. When evaluated on a potentially shifted test-time distribution of tasks, this allows us to choose the meta-policy with the most appropriate level of robustness, and use it to perform fast adaptation. We formally show how our framework allows for improved regret under distribution shift, and empirically show its efficacy on simulated robotics problems under a wide range of distribution shifts. Guillaume Lample · Timothee Lacroix · Marie-Anne Lachaux · Aurelien Rodriguez · Amaury Hayat · Thibaut Lavril · Gabriel Ebner · Xavier Martinet We propose an online training procedure for a transformer-based automated theorem prover. Our approach leverages a new search algorithm, HyperTree Proof Search (HTPS), that learns from previous proof searches through online training, allowing it to generalize to domains far from the training distribution. We report detailed ablations of our pipeline’s main components by studying performance on three environments of increasing complexity. In particular, we show that with HTPS alone, a model trained on annotated proofs manages to prove 65.4% of a held-out set of Metamath theorems, significantly outperforming the previous state of the art of 56.5% by GPT-f. Online training on these unproved theorems increases accuracy to 82.6%. With a similar computational budget, we improve the state of the art on the Lean-based miniF2F-curriculum dataset from 31% to 42% proving accuracy. Annie Chen · Archit Sharma · Sergey Levine · Chelsea Finn Reinforcement learning algorithms are typically designed to learn a performant policy that can repeatedly and autonomously complete a task, usually starting from scratch. However, in many real-world situations, the goal might not be to learn a policy that can do the task repeatedly, but simply to perform a new task successfully once in a single trial. For example, imagine a disaster relief robot tasked with retrieving an item from a fallen building, where it cannot get direct supervision from humans. It must retrieve this object within one test-time trial, and must do so while tackling unknown obstacles, though it may leverage knowledge it has of the building before the disaster. We formalize this problem setting, which we call single-life reinforcement learning (SLRL), where an agent must complete a task within a single episode without interventions, utilizing its prior experience while contending with some form of novelty. SLRL provides a natural setting to study the challenge of autonomously adapting to unfamiliar situations, and we find that algorithms designed for standard episodic reinforcement learning often struggle to recover from out-of-distribution states in this setting. Motivated by this observation, we propose an algorithm, Q-weighted adversarial learning (QWALE), which employs a distribution matching strategy that leverages the agent's prior experience as guidance in novel situations. Our experiments on several single-life continuous control problems indicate that methods based on our distribution matching formulation are 20-60% more successful because they can more quickly recover from novel states. Philippe Weinzaepfel · Vincent Leroy · Thomas Lucas · Romain BRÉGIER · Yohann Cabon · Vaibhav ARORA · Leonid Antsfeld · Boris Chidlovskii · Gabriela Csurka · Jerome Revaud Masked Image Modeling (MIM) has recently been established as a potent pre-training paradigm. A pretext task is constructed by masking patches in an input image, and this masked content is then predicted by a neural network using visible patches as sole input. This pre-training leads to state-of-the-art performance when finetuned for high-level semantic tasks, e.g. image classification and object detection. In this paper we instead seek to learn representations that transfer well to a wide variety of 3D vision and lower-level geometric downstream tasks, such as depth prediction or optical flow estimation. Inspired by MIM, we propose an unsupervised representation learning task trained from pairs of images showing the same scene from different viewpoints. More precisely, we propose the pretext task of cross-view completion where the first input image is partially masked, and this masked content has to be reconstructed from the visible content and the second image. In single-view MIM, the masked content often cannot be inferred precisely from the visible portion only, so the model learns to act as a prior influenced by high-level semantics. In contrast, this ambiguity can be resolved with cross-view completion from the second unmasked image, on the condition that the model is able to understand the spatial relationship between the two images. Our experiments show that our pretext task leads to significantly improved performance for monocular 3D vision downstream tasks such as depth estimation. In addition, our model can be directly applied to binocular downstream tasks like optical flow or relative camera pose estimation, for which we obtain competitive results without bells and whistles, i.e., using a generic architecture without any task-specific design. Aayush Rana · Yogesh Rawat Video action detection requires annotations at every frame, which drastically increases the labeling cost. In this work, we focus on efficient labeling of videos for action detection to minimize this cost. We propose active sparse labeling (ASL), a novel active learning strategy for video action detection. Sparse labeling will reduce the annotation cost but poses two main challenges; 1) how to estimate the utility of annotating a single frame for action detection as detection is performed at video level?, and 2) how these sparse labels can be used for action detection which require annotations on all the frames? This work attempts to address these challenges within a simple active learning framework. For the first challenge, we propose a novel frame-level scoring mechanism aimed at selecting most informative frames in a video. Next, we introduce a novel loss formulation which enables training of action detection model with these sparsely selected frames. We evaluate the proposed approach on two different action detection benchmark datasets, UCF-101-24 and J-HMDB-21, and observed that active sparse labeling can be very effective in saving annotation costs. We demonstrate that the proposed approach performs better than random selection, outperforming all other baselines, with performance comparable to supervised approach using merely 10% annotations. Vindula Jayawardana · Catherine Tang · Sirui Li · Dajiang Suo · Cathy Wu Evaluations of Deep Reinforcement Learning (DRL) methods are an integral part of scientific progress of the field. Beyond designing DRL methods for general intelligence, designing task-specific methods is becoming increasingly prominent for real-world applications. In these settings, the standard evaluation practice involves using a few instances of Markov Decision Processes (MDPs) to represent the task. However, many tasks induce a large family of MDPs owing to variations in the underlying environment, particularly in real-world contexts. For example, in traffic signal control, variations may stem from intersection geometries and traffic flow levels. The select MDP instances may thus inadvertently cause overfitting, lacking the statistical power to draw conclusions about the method's true performance across the family. In this article, we augment DRL evaluations to consider parameterized families of MDPs. We show that in comparison to evaluating DRL methods on select MDP instances, evaluating the MDP family often yields a substantially different relative ranking of methods, casting doubt on what methods should be considered state-of-the-art. We validate this phenomenon in standard control benchmarks and the real-world application of traffic signal control. At the same time, we show that accurately evaluating on an MDP family is nontrivial. Overall, this work identifies new challenges for empirical rigor in reinforcement learning, especially as the outcomes of DRL trickle into downstream decision-making. Xiang Li · John Thickstun · Ishaan Gulrajani · Percy Liang · Tatsunori Hashimoto Controlling the behavior of language models (LMs) without re-training is a major open problem in natural language generation. While recent works have demonstrated successes on controlling simple sentence attributes (e.g., sentiment), there has been little progress on complex, fine-grained controls (e.g., syntactic structure). To address this challenge, we develop a new non-autoregressive language model based on continuous diffusions that we call Diffusion-LM. Building upon the recent successes of diffusion models in continuous domains, Diffusion-LM iteratively denoises a sequence of Gaussian vectors into word vectors, yielding a sequence of intermediate latent variables. The continuous, hierarchical nature of these intermediate variables enables a simple gradient-based algorithm to perform complex, controllable generation tasks. We demonstrate successful control of Diffusion-LM for six challenging fine-grained control tasks, significantly outperforming prior work. Santiago Cuervo · Adrian Lancucki · Ricard Marxer · Paweł Rychlikowski · Jan Chorowski The success of deep learning comes from its ability to capture the hierarchical structure of data by learning high-level representations defined in terms of low-level ones. In this paper we explore self-supervised learning of hierarchical representations of speech by applying multiple levels of Contrastive Predictive Coding (CPC). We observe that simply stacking two CPC models does not yield significant improvements over single-level architectures. Inspired by the fact that speech is often described as a sequence of discrete units unevenly distributed in time, we propose a model in which the output of a low-level CPC module is non-uniformly downsampled to directly minimize the loss of a high-level CPC module. The latter is designed to also enforce a prior of separability and discreteness in its representations by enforcing dissimilarity of successive high-level representations through focused negative sampling, and by quantization of the prediction targets. Accounting for the structure of the speech signal improves upon single-level CPC features and enhances the disentanglement of the learned representations, as measured by downstream speech recognition tasks, while resulting in a meaningful segmentation of the signal that closely resembles phone boundaries. Steffen Schotthöfer · Emanuele Zangrando · Jonas Kusch · Gianluca Ceruti · Francesco Tudisco Neural networks have achieved tremendous success in a large variety of applications. However, their memory footprint and computational demand can render them impractical in application settings with limited hardware or energy resources. In this work, we propose a novel algorithm to find efficient low-rank subnetworks. Remarkably, these subnetworks are determined and adapted already during the training phase and the overall time and memory resources required by both training and evaluating them is significantly reduced. The main idea is to restrict the weight matrices to a low-rank manifold and to update the low-rank factors rather than the full matrix during training. To derive training updates that are restricted to the prescribed manifold, we employ techniques from dynamic model order reduction for matrix differential equations. Moreover, our method automatically and dynamically adapts the ranks during training to achieve a desired approximation accuracy.The efficiency of the proposed method is demonstrated through a variety of numerical experiments on fully-connected and convolutional networks. Kyungsu Lee · Jaeseung Yang · Haeyun Lee · Jae Youn Hwang The simulation of human neurons and neurotransmission mechanisms has been realized in deep neural networks based on the theoretical implementations of activation functions. However, recent studies have reported that the threshold potential of neurons exhibits different values according to the locations and types of individual neurons, and that the activation functions have limitations in terms of representing this variability. Therefore, this study proposes a simple yet effective activation function that facilitates different thresholds and adaptive activations according to the positions of units and the contexts of inputs. Furthermore, the proposed activation function mathematically exhibits a more generalized form of Swish activation function, and thus we denoted it as Adaptive SwisH (ASH). ASH highlights informative features that exhibit large values in the top percentiles in an input, whereas it rectifies low values. Most importantly, ASH exhibits trainable, adaptive, and context-aware properties compared to other activation functions. Furthermore, ASH represents general formula of the previously studied activation function and provides a reasonable mathematical background for the superior performance. To validate the effectiveness and robustness of ASH, we implemented ASH into many deep learning models for various tasks, including classification, detection, segmentation, and image generation. Experimental analysis demonstrates that our activation function can provide the benefits of more accurate prediction and earlier convergence in many deep learning Tian Qin · Fengxiang He · Dingfeng Shi · Wenbing Huang · Dacheng Tao Designing an incentive-compatible auction mechanism that maximizes the auctioneer's revenue while minimizes the bidders’ ex-post regret is an important yet intricate problem in economics. Remarkable progress has been achieved through learning the optimal auction mechanism by neural networks. In this paper, we consider the popular additive valuation and symmetric valuation setting; i.e., the valuation for a set of items is defined as the sum of all items’ valuations in the set, and the valuation distribution is invariant when the bidders and/or the items are permutated. We prove that permutation-equivariant neural networks have significant advantages: the permutation-equivariance decreases the expected ex-post regret, improves the model generalizability, while maintains the expected revenue invariant. This implies that the permutation-equivariance helps approach the theoretically optimal dominant strategy incentive compatible condition, and reduces the required sample complexity for desired generalization. Extensive experiments fully support our theory. To our best knowledge, this is the first work towards understanding the benefits of permutation-equivariance in auction mechanisms. Yong Liu · Siqi Mai · Minhao Cheng · Xiangning Chen · Cho-Jui Hsieh · Yang You Currently, Sharpness-Aware Minimization (SAM) is proposed to seek the parameters that lie in a flat region to improve the generalization when training neural networks. In particular, a minimax optimization objective is defined to find the maximum loss value centered on the weight, out of the purpose of simultaneously minimizing loss value and loss sharpness. For the sake of simplicity, SAM applies one-step gradient ascent to approximate the solution of the inner maximization. However, one-step gradient ascent may not be sufficient and multi-step gradient ascents will cause additional training costs. Based on this observation, we propose a novel random smoothing based SAM (R-SAM) algorithm. To be specific, R-SAM essentially smooths the loss landscape, based on which we are able to apply the one-step gradient ascent on the smoothed weights to improve the approximation of the inner maximization. Further, we evaluate our proposed R-SAM on CIFAR and ImageNet datasets. The experimental results illustrate that R-SAM can consistently improve the performance on ResNet and Vision Transformer (ViT) training. Ji Lin · Ligeng Zhu · Wei-Ming Chen · Wei-Chen Wang · Chuang Gan · Song Han On-device training enables the model to adapt to new data collected from the sensors by fine-tuning a pre-trained model. Users can benefit from customized AI models without having to transfer the data to the cloud, protecting the privacy. However, the training memory consumption is prohibitive for IoT devices that have tiny memory resources. We propose an algorithm-system co-design framework to make on-device training possible with only 256KB of memory. On-device training faces two unique challenges: (1) the quantized graphs of neural networks are hard to optimize due to low bit-precision and the lack of normalization; (2) the limited hardware resource (memory and computation) does not allow full backpropagation. To cope with the optimization difficulty, we propose Quantization- Aware Scaling to calibrate the gradient scales and stabilize 8-bit quantized training. To reduce the memory footprint, we propose Sparse Update to skip the gradient computation of less important layers and sub-tensors. The algorithm innovation is implemented by a lightweight training system, Tiny Training Engine, which prunes the backward computation graph to support sparse updates and offload the runtime auto-differentiation to compile time. Our framework is the first practical solution for on-device transfer learning of visual recognition on tiny IoT devices (e.g., a microcontroller with only 256KB SRAM), using less than 1/1000 of the memory of PyTorch and TensorFlow while matching the accuracy. Our study enables IoT devices not only to perform inference but also to continuously adapt to new data for on-device lifelong learning. A video demo can be found here: https://youtu.be/XaDCO8YtmBw. Wonyong Jeong · Sung Ju Hwang In real-world federated learning scenarios, participants could have their own personalized labels incompatible with those from other clients, due to using different label permutations or tackling completely different tasks or domains. However, most existing FL approaches cannot effectively tackle such extremely heterogeneous scenarios since they often assume that (1) all participants use a synchronized set of labels, and (2) they train on the same tasks from the same domain. In this work, to tackle these challenges, we introduce Factorized-FL, which allows to effectively tackle label- and task-heterogeneous federated learning settings by factorizing the model parameters into a pair of rank-1 vectors, where one captures the common knowledge across different labels and tasks and the other captures knowledge specific to the task for each local model. Moreover, based on the distance in the client-specific vector space, Factorized-FL performs a selective aggregation scheme to utilize only the knowledge from the relevant participants for each client. We extensively validate our method on both label- and domain-heterogeneous settings, on which it outperforms the state-of-the-art personalized federated learning methods. The code is available at https://github.com/wyjeong/Factorized-FL. Mingcheng Hou · Issei Sato The prototypical network is a prototype classifier based on meta-learning and is widely used for few-shot learning because it classifies unseen examples by constructing class-specific prototypes without adjusting hyper-parameters during meta-testing.Interestingly, recent research has attracted a lot of attention, showing that training a new linear classifier, which does not use a meta-learning algorithm, performs comparably with the prototypical network.However, the training of a new linear classifier requires the retraining of the classifier every time a new class appears.In this paper, we analyze how a prototype classifier works equally well without training a new linear classifier or meta-learning.We experimentally find that directly using the feature vectors, which is extracted by using standard pre-trained models to construct a prototype classifier in meta-testing, does not perform as well as the prototypical network and training new linear classifiers on the feature vectors of pre-trained models.Thus, we derive a novel generalization bound for a prototypical classifier and show that the transformation of a feature vector can improve the performance of prototype classifiers.We experimentally investigate several normalization methods for minimizing the derived bound and find that the same performance can be obtained by using the L2 normalization and minimizing the ratio of the within-class variance to the between-class variance without training a new classifier or meta-learning. Weijian Deng · Stephen Gould · Liang Zheng Generalization and invariance are two essential properties of machine learning models. Generalization captures a model's ability to classify unseen data while invariance measures consistency of model predictions on transformations of the data. Existing research suggests a positive relationship: a model generalizing well should be invariant to certain visual factors. Building on this qualitative implication we make two contributions. First, we introduce effective invariance (EI), a simple and reasonable measure of model invariance which does not rely on image labels. Given predictions on a test image and its transformed version, EI measures how well the predictions agree and with what level of confidence. Second, using invariance scores computed by EI, we perform large-scale quantitative correlation studies between generalization and invariance, focusing on rotation and grayscale transformations. From a model-centric view, we observe generalization and invariance of different models exhibit a strong linear relationship, on both in-distribution and out-of-distribution datasets. From a dataset-centric view, we find a certain model's accuracy and invariance linearly correlated on different test sets. Apart from these major findings, other minor but interesting insights are also discussed. Yipei Wang · Xiaoqian Wang Numerous methods have been developed to explain the inner mechanism of deep neural network (DNN) based classifiers. Existing explanation methods are often limited to explaining predictions of a pre-specified class, which answers the question “why is the input classified into this class?” However, such explanations with respect to a single class are inherently insufficient because they do not capture features with class-discriminative power. That is, features that are important for predicting one class may also be important for other classes. To capture features with true class-discriminative power, we should instead ask “why is the input classified into this class, but not others?” To answer this question, we propose a weighted contrastive framework for explaining DNNs. Our framework can easily convert any existing back-propagation explanation methods to build class-contrastive explanations. We theoretically validate our weighted contrast explanation in general back-propagation explanations, and show that our framework enables class-contrastive explanations with significant improvements in both qualitative and quantitative experiments. Based on the results, we point out an important blind spot in the current explainable artificial intelligence (XAI) study, where explanations towards the predicted logits and the probabilities are obfuscated. We suggest that these two aspects should be distinguished explicitly any time explanation methods are applied. Yue Bai · Huan Wang · Xu Ma · Yitian Zhang · Zhiqiang Tao · Yun Fu A deeper network structure generally handles more complicated non-linearity and performs more competitively. Nowadays, advanced network designs often contain a large number of repetitive structures (e.g., Transformer). They empower the network capacity to a new level but also increase the model size inevitably, which is unfriendly to either model restoring or transferring. In this study, we are the first to investigate the representative potential of fixed random weights with limited unique values by learning diverse masks and introduce the Parameter-Efficient Masking Networks (PEMN). It also naturally leads to a new paradigm for model compression to diminish the model size. Concretely, motivated by the repetitive structures in modern neural networks, we utilize one random initialized layer, accompanied with different masks, to convey different feature mappings and represent repetitive network modules. Therefore, the model can be expressed as \textit{one-layer} with a bunch of masks, which significantly reduce the model storage cost. Furthermore, we enhance our strategy by learning masks for a model filled by padding a given random weights vector. In this way, our method can further lower the space complexity, especially for models without many repetitive architectures. We validate the potential of PEMN learning masks on random weights with limited unique values and test its effectiveness for a new compression paradigm based on different network architectures.Code is available at \href{https://github.com/yueb17/PEMN}{\textcolor{magenta}{https:// Pau de Jorge Aranda · Adel Bibi · Riccardo Volpi · Amartya Sanyal · Philip Torr · Gregory Rogez · Puneet Dokania Recently, Wong et al. (2020) showed that adversarial training with single-step FGSM leads to a characteristic failure mode named catastrophic overfitting (CO), in which a model becomes suddenly vulnerable to multi-step attacks. Experimentally they showed that simply adding a random perturbation prior to FGSM (RS-FGSM) could prevent CO. However, Andriushchenko & Flammarion (2020) observed that RS-FGSM still leads to CO for larger perturbations, and proposed a computationally expensive regularizer (GradAlign) to avoid it. In this work, we methodically revisit the role of noise and clipping in single-step adversarial training. Contrary to previous intuitions, we find that using a stronger noise around the clean sample combined with \textit{not clipping} is highly effective in avoiding CO for large perturbation radii. We then propose Noise-FGSM (N-FGSM) that, while providing the benefits of single-step adversarial training, does not suffer from CO. Empirical analyses on a large suite of experiments show that N-FGSM is able to match or surpass the performance of previous state of-the-art GradAlign while achieving 3$\times$ speed-up. Qishi Dong · Awais Muhammad · Fengwei Zhou · Chuanlong Xie · Tianyang Hu · Yongxin Yang · Sung-Ho Bae · Zhenguo Li Recent advances on large-scale pre-training have shown great potentials of leveraging a large set of Pre-Trained Models (PTMs) for improving Out-of-Distribution (OoD) generalization, for which the goal is to perform well on possible unseen domains after fine-tuning on multiple training domains. However, maximally exploiting a zoo of PTMs is challenging since fine-tuning all possible combinations of PTMs is computationally prohibitive while accurate selection of PTMs requires tackling the possible data distribution shift for OoD tasks. In this work, we propose ZooD, a paradigm for PTMs ranking and ensemble with feature selection. Our proposed metric ranks PTMs by quantifying inter-class discriminability and inter-domain stability of the features extracted by the PTMs in a leave-one-domain-out cross-validation manner. The top-K ranked models are then aggregated for the target OoD task. To avoid accumulating noise induced by model ensemble, we propose an efficient variational EM algorithm to select informative features. We evaluate our paradigm on a diverse model zoo consisting of 35 models for various OoD tasks and demonstrate: (i) model ranking is better correlated with fine-tuning ranking than previous methods and up to 9859x faster than brute-force fine-tuning; (ii) OoD generalization after model ensemble with feature selection outperforms the state-of-the-art methods and the accuracy on most challenging task DomainNet is improved from 46.5\% to 50.6\%. Furthermore, we provide the fine-tuning results of 35 PTMs on 7 OoD datasets, hoping to help the research of model zoo and OoD generalization. Code will be available at \href{https://gitee.com/mindspore/models/tree/master/research/cv/zood}{https://gitee.com/mindspore/models/tree/master/ Ning Xu · Congyu Qiao · Jiaqi Lv · Xin Geng · Min-Ling Zhang Multi-label learning (MLL) learns from the examples each associated with multiple labels simultaneously, where the high cost of annotating all relevant labels for each training example is challenging for real-world applications. To cope with the challenge, we investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label and show that one can successfully learn a theoretically grounded multi-label classifier for the problem. In this paper, a novel SPMLL method named SMILE, i.e., Single-positive MultI-label learning with Label Enhancement, is proposed. Specifically, an unbiased risk estimator is derived, which could be guaranteed to approximately converge to the optimal risk minimizer of fully supervised learning and shows that one positive label of each instance is sufficient to train the predictive model. Then, the corresponding empirical risk estimator is established via recovering the latent soft label as a label enhancement process, where the posterior density of the latent soft labels is approximate to the variational Beta density parameterized by an inference model. Experiments on benchmark datasets validate the effectiveness of the proposed method. Kamyar Ghasemipour · Shixiang (Shane) Gu · Ofir Nachum Motivated by the success of ensembles for uncertainty estimation in supervised learning, we take a renewed look at how ensembles of $Q$-functions can be leveraged as the primary source of pessimism for offline reinforcement learning (RL). We begin by identifying a critical flaw in a popular algorithmic choice used by many ensemble-based RL algorithms, namely the use of shared pessimistic target values when computing each ensemble member's Bellman error. Through theoretical analyses and construction of examples in toy MDPs, we demonstrate that shared pessimistic targets can paradoxically lead to value estimates that are effectively optimistic. Given this result, we propose MSG, a practical offline RL algorithm that trains an ensemble of $Q$-functions with independently computed targets based on completely separate networks, and optimizes a policy with respect to the lower confidence bound of predicted action values. Our experiments on the popular D4RL and RL Unplugged offline RL benchmarks demonstrate that on challenging domains such as antmazes, MSG with deep ensembles surpasses highly well-tuned state-of-the-art methods by a wide margin. Additionally, through ablations on benchmarks domains, we verify the critical significance of using independently trained $Q$-functions, and study the role of ensemble size. Finally, as using separate networks per ensemble member can become computationally costly with larger neural network architectures, we investigate whether efficient ensemble approximations developed for supervised learning can be similarly effective, and demonstrate that they do not match the performance and robustness of MSG with separate networks, highlighting the need for new efforts into efficient uncertainty estimation directed at Bohdan Kivva · Goutham Rajendran · Pradeep Ravikumar · Bryon Aragam We prove identifiability of a broad class of deep latent variable models that (a) have universal approximation capabilities and (b) are the decoders of variational autoencoders that are commonly used in practice. Unlike existing work, our analysis does not require weak supervision, auxiliary information, or conditioning in the latent space. Specifically, we show that for a broad class of generative (i.e. unsupervised) models with universal approximation capabilities, the side information $u$ is not necessary: We prove identifiability of the entire generative model where we do not observe $u$ and only observe the data $x$. The models we consider match autoencoder architectures used in practice that leverage mixture priors in the latent space and ReLU/leaky-ReLU activations in the encoder, such as VaDE and MFC-VAE. Our main result is an identifiability hierarchy that significantly generalizes previous work and exposes how different assumptions lead to different ``strengths'' of identifiability, and includes certain ``vanilla'' VAEs with isotropic Gaussian priors as a special case. For example, our weakest result establishes (unsupervised) identifiability up to an affine transformation, and thus partially resolves an open problem regarding model identifiability raised in prior work. These theoretical results are augmented with experiments on both simulated and real data. Thijs Vogels · Hadrien Hendrikx · Martin Jaggi In data-parallel optimization of machine learning models, workers collaborate to improve their estimates of the model: more accurate gradients allow them to use larger learning rates and optimize faster. We consider the setting in which all workers sample from the same dataset, and communicate over a sparse graph (decentralized). In this setting, current theory fails to capture important aspects of real-world behavior. First, the ‘spectral gap’ of the communication graph is not predictive of its empirical performance in (deep) learning. Second, current theory does not explain that collaboration enables larger learning rates than training alone. In fact, it prescribes smaller learning rates, which further decrease as graphs become larger, failing to explain convergence in infinite graphs. This paper aims to paint an accurate picture of sparsely-connected distributed optimization when workers share the same data distribution. We quantify how the graph topology influences convergence in a quadratic toy problem and provide theoretical results for general smooth and (strongly) convex objectives. Our theory matches empirical observations in deep learning, and accurately describes the relative merits of different graph topologies. Carlos Misael Madrid Padilla · Daren Wang · Zifeng Zhao · Yi Yu We study the problem of change-point detection and localisation for functional data sequentially observed on a general $d$-dimensional space, where we allow the functional curves to be either sparsely or densely sampled. Data of this form naturally arise in a wide range of applications such as biology, neuroscience, climatology and finance. To achieve such a task, we propose a kernel-based algorithm named functional seeded binary segmentation (FSBS). FSBS is computationally efficient, can handle discretely observed functional data, and is theoretically sound for heavy-tailed and temporally-dependent observations. Moreover, FSBS works for a general $d$-dimensional domain, which is the first in the literature of change-point estimation for functional data. We show the consistency of FSBS for multiple change-point estimation and further provide a sharp localisation error rate, which reveals an interesting phase transition phenomenon depending on the number of functional curves observed and the sampling frequency for each curve. Extensive numerical experiments illustrate the effectiveness of FSBS and its advantage over existing methods in the literature under various settings. A real data application is further conducted, where FSBS localises change-points of sea surface temperature patterns in the south Pacific attributed to El Ni\~{n}o. Yibo Jiang · Victor Veitch Real-world classification problems must contend with domain shift, the (potential) mismatch between the domain where a model is deployed and the domain(s) where the training data was gathered. Methods to handle such problems must specify what structure is held in common between the domains and what is allowed to vary. A natural assumption is that causal (structural) relationships are invariant in all domains. Then, it is tempting to learn a predictor for label $Y$ that depends only on its causal parents. However, many real-world problems are ``anti-causal'' in the sense that $Y$ is a cause of the covariates $X$---in this case, $Y$ has no causal parents and the naive causal invariance is useless. In this paper, we study representation learning under a particular notion of domain shift that both respects causal invariance and that naturally handles the ``anti-causal'' structure. We show how to leverage the shared causal structure of the domains to learn a representation that both admits an invariant predictor and that also allows fast adaptation in new domains. The key is to translate causal assumptions into learning principles that disentangle ``invariant'' and ``non-stable'' features. Experiments on both synthetic and real-world data demonstrate the effectiveness of the proposed learning algorithm. Stephen Casper · Max Nadeau · Dylan Hadfield-Menell · Gabriel Kreiman The literature on adversarial attacks in computer vision typically focuses on pixel-level perturbations. These tend to be very difficult to interpret. Recent work that manipulates the latent representations of image generators to create "feature-level" adversarial perturbations gives us an opportunity to explore perceptible, interpretable adversarial attacks. We make three contributions. First, we observe that feature-level attacks provide useful classes of inputs for studying representations in models. Second, we show that these adversaries are uniquely versatile and highly robust. We demonstrate that they can be used to produce targeted, universal, disguised, physically-realizable, and black-box attacks at the ImageNet scale. Third, we show how these adversarial images can be used as a practical interpretability tool for identifying bugs in networks. We use these adversaries to make predictions about spurious associations between features and classes which we then test by designing "copy/paste" attacks in which one natural image is pasted into another to cause a targeted misclassification. Our results suggest that feature-level attacks are a promising approach for rigorous interpretability research. They support the design of tools to better understand what a model has learned and diagnose brittle feature associations. Code is available at https://github.com/ Agustinus Kristiadi · Runa Eschenhagen · Philipp Hennig Monte Carlo (MC) integration is the de facto method for approximating the predictive distribution of Bayesian neural networks (BNNs). But, even with many MC samples, Gaussian-based BNNs could still yield bad predictive performance due to the posterior approximation's error. Meanwhile, alternatives to MC integration are expensive. In this work, we experimentally show that the key to good MC-approximated predictive distributions is the quality of the approximate posterior itself. However, previous methods for obtaining accurate posterior approximations are expensive and non-trivial to implement. We, therefore, propose to refine Gaussian approximate posteriors with normalizing flows. When applied to last-layer BNNs, it yields a simple, cost-efficient, post hoc method for improving pre-existing parametric approximations. We show that the resulting posterior approximation is competitive with even the gold-standard full-batch Hamiltonian Monte Carlo. Hossein Esfandiari · Vahab Mirrokni · Jon Schneider In this work, we present and study a new framework for online learning in systems with multiple users that provide user anonymity. Specifically, we extend the notion of bandits to obey the standard $k$-anonymity constraint by requiring each observation to be an aggregation of rewards for at least $k$ users. This provides a simple yet effective framework where one can learn a clustering of users in an online fashion without observing any user's individual decision. We initiate the study of anonymous bandits and provide the first sublinear regret algorithms and lower bounds for this setting. Xinyi Hu · Jasper Lee · Jimmy Lee · Allen Z. Zhong This paper proposes Branch & Learn, a framework for Predict+Optimize to tackle optimization problems containing parameters that are unknown at the time of solving. Given an optimization problem solvable by a recursive algorithm satisfying simple conditions, we show how a corresponding learning algorithm can be constructed directly and methodically from the recursive algorithm. Our framework applies also to iterative algorithms by viewing them as a degenerate form of recursion. Extensive experimentation shows better performance for our proposal over classical and state of the art Alekh Agarwal · Tong Zhang We propose a general framework to design posterior sampling methods for model-based RL. We show that the proposed algorithms can be analyzed by reducing regret to Hellinger distance in conditional probability estimation. We further show that optimistic posterior sampling can control this Hellinger distance, when we measure model error via data likelihood. This technique allows us to design and analyze unified posterior sampling algorithms with state-of-the-art sample complexity guarantees for many model-based RL settings. We illustrate our general result in many special cases, demonstrating the versatility of our framework. Kwangjun Ahn · Prateek Jain · Ziwei Ji · Satyen Kale · Praneeth Netrapalli · Gil I Shamir We initiate a formal study of reproducibility in optimization. We define a quantitative measure of reproducibility of optimization procedures in the face of noisy or error-prone operations such as inexact or stochastic gradient computations or inexact initialization. We then analyze several convex optimization settings of interest such as smooth, non-smooth, and strongly-convex objective functions and establish tight bounds on the limits of reproducibility in each setting. Our analysis reveals a fundamental trade-off between computation and reproducibility: more computation is necessary (and sufficient) for better reproducibility. Yunjuan Wang · Enayat Ullah · Poorya Mianjy · Raman Arora Recent works show that adversarial examples exist for random neural networks [Daniely and Schacham, 2020] and that these examples can be found using a single step of gradient ascent [Bubeck et al., 2021]. In this work, we extend this line of work to ``lazy training'' of neural networks -- a dominant model in deep learning theory in which neural networks are provably efficiently learnable. We show that over-parametrized neural networks that are guaranteed to generalize well and enjoy strong computational guarantees remain vulnerable to attacks generated using a single step of gradient Damien Scieur · Gauthier Gidel · Quentin Bertrand · Fabian Pedregosa Computing the Jacobian of the solution of an optimization problem is a central problem in machine learning, with applications in hyperparameter optimization, meta-learning, optimization as a layer, and dataset distillation, to name a few. Unrolled differentiation is a popular heuristic that approximates the solution using an iterative solver and differentiates it through the computational path. This work provides a non-asymptotic convergence-rate analysis of this approach on quadratic objectives for gradient descent and the Chebyshev method. We show that to ensure convergence of the Jacobian, we can either 1) choose a large learning rate leading to a fast asymptotic convergence but accept that the algorithm may have an arbitrarily long burn-in phase or 2) choose a smaller learning rate leading to an immediate but slower convergence. We refer to this phenomenon as the curse of unrolling.Finally, we discuss open problems relative to this approach, such as deriving a practical update rule for the optimal unrolling strategy and making novel connections with the field of Sobolev orthogonal polynomials. Aurelien Lucchi · Frank Proske · Antonio Orvieto · Francis Bach · Hans Kersting Studying the properties of stochastic noise to optimize complex non-convex functions has been an active area of research in the field of machine learning. Prior work~\citep{zhou2019pgd, wei2019noise} has shown that the noise of stochastic gradient descent improves optimization by overcoming undesirable obstacles in the landscape. Moreover, injecting artificial Gaussian noise has become a popular idea to quickly escape saddle points. Indeed, in the absence of reliable gradient information, the noise is used to explore the landscape, but it is unclear what type of noise is optimal in terms of exploration ability. In order to narrow this gap in our knowledge, we study a general type of continuous-time non-Markovian process, based on fractional Brownian motion, that allows for the increments of the process to be correlated. This generalizes processes based on Brownian motion, such as the Ornstein-Uhlenbeck process. We demonstrate how to discretize such processes which gives rise to the new algorithm ``fPGD''. This method is a generalization of the known algorithms PGD and Anti-PGD~\citep{orvieto2022anti}. We study the properties of fPGD both theoretically and empirically, demonstrating that it possesses exploration abilities that, in some cases, are favorable over PGD and Anti-PGD. These results open the field to novel ways to exploit noise for training machine learning models. Hanbyul Lee · Qifan Song · Jean Honorio We study a practical algorithm for sparse principal component analysis (PCA) of incomplete and noisy data.Our algorithm is based on the semidefinite program (SDP) relaxation of the non-convex $l_1$-regularized PCA problem.We provide theoretical and experimental evidence that SDP enables us to exactly recover the true support of the sparse leading eigenvector of the unknown true matrix, despite only observing an incomplete (missing uniformly at random) and noisy version of it.We derive sufficient conditions for exact recovery, which involve matrix incoherence, the spectral gap between the largest and second-largest eigenvalues, the observation probability and the noise variance.We validate our theoretical results with incomplete synthetic data, and show encouraging and meaningful results on a gene expression dataset. Tianyuan Jin · Pan Xu · Xiaokui Xiao · Anima Anandkumar We study the regret of Thompson sampling (TS) algorithms for exponential family bandits, where the reward distribution is from a one-dimensional exponential family, which covers many common reward distributions including Bernoulli, Gaussian, Gamma, Exponential, etc. We propose a Thompson sampling algorithm, termed ExpTS, which uses a novel sampling distribution to avoid the under-estimation of the optimal arm. We provide a tight regret analysis for ExpTS, which simultaneously yields both the finite-time regret bound as well as the asymptotic regret bound. In particular, for a $K$-armed bandit with exponential family rewards, ExpTS over a horizon $T$ is sub-UCB (a strong criterion for the finite-time regret that is problem-dependent), minimax optimal up to a factor $\sqrt{\log K}$, and asymptotically optimal, for exponential family rewards. Moreover, we propose ExpTS$^+$, by adding a greedy exploitation step in addition to the sampling distribution used in ExpTS, to avoid the over-estimation of sub-optimal arms. ExpTS$^+$ is an anytime bandit algorithm and achieves the minimax optimality and asymptotic optimality simultaneously for exponential family reward distributions. Our proof techniques are general and conceptually simple and can be easily applied to analyze standard Thompson sampling with specific reward distributions. Arya Akhavan · Evgenii Chzhen · Massimiliano Pontil · Alexandre Tsybakov This work studies online zero-order optimization of convex and Lipschitz functions. We present a novel gradient estimator based on two function evaluations and randomization on the $\ell_1$-sphere. Considering different geometries of feasible sets and Lipschitz assumptions we analyse online dual averaging algorithm with our estimator in place of the usual gradient. We consider two types of assumptions on the noise of the zero-order oracle: canceling noise and adversarial noise. We provide an anytime and completely data-driven algorithm, which is adaptive to all parameters of the problem. In the case of canceling noise that was previously studied in the literature, our guarantees are either comparable or better than state-of-the-art bounds obtained by~\citet{duchi2015} and \ citet{Shamir17} for non-adaptive algorithms. Our analysis is based on deriving a new weighted Poincaré type inequality for the uniform measure on the $\ell_1$-sphere with explicit constants, which may be of independent interest. Daniel Alabi · Salil Vadhan In this work, we design differentially private hypothesis tests for the following problems in the general linear model: testing a linear relationship and testing for the presence of mixtures. The majority of our hypothesis tests are based on differentially private versions of the $F$-statistic for the general linear model framework, which are uniformly most powerful unbiased in the non-private setting. We also present another test for testing mixtures, based on the differentially private nonparametric tests of Couch, Kazan, Shi, Bray, and Groce (CCS 2019), which is especially suited for the small dataset regime. We show that the differentially private $F$-statistic converges to the asymptotic distribution of its non-private counterpart. As a corollary, the statistical power of the differentially private $F$-statistic converges to the statistical power of the non-private $F$-statistic. Through a suite of Monte Carlo based experiments, we show that our tests achieve desired \textit{significance levels} and have a high \textit{power} that approaches the power of the non-private tests as we increase sample sizes or the privacy-loss parameter. We also show when our tests outperform existing methods in the literature. Tian-Zuo Wang · Tian Qin · Zhi-Hua Zhou Great efforts have been devoted to causal discovery from observational data, and it is well known that introducing some background knowledge attained from experiments or human expertise can be very helpful. However, it remains unknown that \emph{what causal relations are identifiable given background knowledge in the presence of latent confounders}. In this paper, we solve the problem with sound and complete orientation rules when the background knowledge is given in a \emph{local} form. Furthermore, based on the solution to the problem, this paper proposes a general active learning framework for causal discovery in the presence of latent confounders, with its effectiveness and efficiency validated by experiments. Gregory Canal · Blake Mason · Ramya Korlakai Vinayak · Robert Nowak This paper investigates simultaneous preference and metric learning from a crowd of respondents. A set of items represented by $d$-dimensional feature vectors and paired comparisons of the form ``item $i$ is preferable to item $j$'' made by each user is given. Our model jointly learns a distance metric that characterizes the crowd's general measure of item similarities along with a latent ideal point for each user reflecting their individual preferences. This model has the flexibility to capture individual preferences, while enjoying a metric learning sample cost that is amortized over the crowd. We first study this problem in a noiseless, continuous response setting (i.e., responses equal to differences of item distances) to understand the fundamental limits of learning. Next, we establish prediction error guarantees for noisy, binary measurements such as may be collected from human respondents, and show how the sample complexity improves when the underlying metric is low-rank. Finally, we establish recovery guarantees under assumptions on the response distribution. We demonstrate the performance of our model on both simulated data and on a dataset of color preference judgements across a large number of users. Arindam Ghosh · Thomas Schaaf · Matthew Gormley Much recent work has been devoted to the problem of ensuring that a neural network's confidence scores match the true probability of being correct, i.e. the calibration problem. Of note, it was found that training with focal loss leads to better calibration than cross-entropy while achieving similar level of accuracy \cite{mukhoti2020}. This success stems from focal loss regularizing the entropy of the model's prediction (controlled by the parameter $\gamma$), thereby reining in the model's overconfidence. Further improvement is expected if $\gamma$ is selected independently for each training sample (Sample-Dependent Focal Loss (FLSD-53) \cite{mukhoti2020}). However, FLSD-53 is based on heuristics and does not generalize well. In this paper, we propose a calibration-aware adaptive focal loss called AdaFocal that utilizes the calibration properties of focal (and inverse-focal) loss and adaptively modifies $\gamma_t$ for different groups of samples based on $\gamma_ {t-1}$ from the previous step and the knowledge of model's under/over-confidence on the validation set. We evaluate AdaFocal on various image recognition and one NLP task, covering a wide variety of network architectures, to confirm the improvement in calibration while achieving similar levels of accuracy. Additionally, we show that models trained with AdaFocal achieve a significant boost in out-of-distribution detection. This paper investigates the network load balancing problem in data centers (DCs) where multiple load balancers (LBs) are deployed, using the multi-agent reinforcement learning (MARL) framework. The challenges of this problem consist of the heterogeneous processing architecture and dynamic environments, as well as limited and partial observability of each LB agent in distributed networking systems, which can largely degrade the performance of in-production load balancing algorithms in real-world setups. Centralised training and distributed execution (CTDE) RL scheme has been proposed to improve MARL performance, yet it incurs -- especially in distributed networking systems, which prefer distributed and plug-and-play design schemes -- additional communication and management overhead among agents. We formulate the multi-agent load balancing problem as a Markov potential game, with a carefully and properly designed workload distribution fairness as the potential function. A fully distributed MARL algorithm is proposed to approximate the Nash equilibrium of the game. Experimental evaluations involve both an event-driven simulator and a real-world system, where the proposed MARL load balancing algorithm shows close-to-optimal performance in simulations and superior results over in-production LBs in the real-world system. Anay Mehrotra · Nisheeth Vishnoi The fair-ranking problem, which asks to rank a given set of items to maximize utility subject to group fairness constraints, has received attention in the fairness, information retrieval, and machine learning literature. Recent works, however, observe that errors in socially-salient (including protected) attributes of items can significantly undermine fairness guarantees of existing fair-ranking algorithms and raise the problem of mitigating the effect of such errors. We study the fair-ranking problem under a model where socially-salient attributes of items are randomly and independently perturbed. We present a fair-ranking framework that incorporates group fairness requirements along with probabilistic information about perturbations in socially-salient attributes. We provide provable guarantees on the fairness and utility attainable by our framework and show that it is information-theoretically impossible to significantly beat these guarantees. Our framework works for multiple non-disjoint attributes and a general class of fairness constraints that includes proportional and equal representation. Empirically, we observe that, compared to baselines, our algorithm outputs rankings with higher fairness, and has a similar or better fairness-utility trade-off compared to baselines. Nicolò Felicioni · Maurizio Ferrari Dacrema · Marcello Restelli · Paolo Cremonesi The Off-Policy Evaluation (OPE) problem consists in evaluating the performance of new policies from the data collected by another one. OPE is crucial when evaluating a new policy online is too expensive or risky. Many of the state-of-the-art OPE estimators are based on the Inverse Propensity Scoring (IPS) technique, which provides an unbiased estimator when the full support assumption holds, i.e., when the logging policy assigns a non-zero probability to each action. However, there are several scenarios where this assumption does not hold in practice, i.e., there is deficient support, and the IPS estimator is biased in the general case.In this paper, we consider two alternative estimators for the deficient support OPE problem. We first show how to adapt an estimator that was originally proposed for a different domain to the deficient support setting.Then, we propose another estimator, which is a novel contribution of this paper.These estimators exploit additional information about the actions, which we call side information, in order to make reliable estimates on the unsupported actions. Under alternative assumptions that do not require full support, we show that the considered estimators are unbiased.We also provide a theoretical analysis of the concentration when relaxing all the assumptions. Finally, we provide an experimental evaluation showing how the considered estimators are better suited for the deficient support setting compared to the baselines. Theodore Sumers · Robert Hawkins · Mark Ho · Tom Griffiths · Dylan Hadfield-Menell From the earliest years of our lives, humans use language to express our beliefs and desires. Being able to talk to artificial agents about our preferences would thus fulfill a central goal of value alignment. Yet today, we lack computational models explaining such language use. To address this challenge, we formalize learning from language in a contextual bandit setting and ask how a human might communicate preferences over behaviors. We study two distinct types of language: instructions, which provide information about the desired policy, and descriptions, which provide information about the reward function. We show that the agent's degree of autonomy determines which form of language is optimal: instructions are better in low-autonomy settings, but descriptions are better when the agent will need to act independently. We then define a pragmatic listener agent that robustly infers the speaker's reward function by reasoning about how the speaker expresses themselves. We validate our models with a behavioral experiment, demonstrating that (1) our speaker model predicts human behavior, and (2) our pragmatic listener successfully recovers humans' reward functions. Finally, we show that this form of social learning can integrate with and reduce regret in traditional reinforcement learning. We hope these insights facilitate a shift from developing agents that obey language to agents that learn from it. Erik Jones · Jacob Steinhardt Large language models generate complex, open-ended outputs: instead of outputting a class label they write summaries, generate dialogue, or produce working code. In order to asses the reliability of these open-ended generation systems, we aim to identify qualitative categories of erroneous behavior, beyond identifying individual errors. To hypothesize and test for such qualitative errors, we draw inspiration from human cognitive biases---systematic patterns of deviation from rational judgement. Specifically, we use cognitive biases as motivation to (i) generate hypotheses for problems that models may have, and (ii) develop experiments that elicit these problems. Using code generation as a case study, we find that OpenAI’s Codex errs predictably based on how the input prompt is framed, adjusts outputs towards anchors, and is biased towards outputs that mimic frequent training examples. We then use our framework to elicit high-impact errors such as incorrectly deleting files. Our results indicate that experimental methodology from cognitive science can help characterize how machine learning systems behave. Rainer Engelken · Sven Goedeke Information encoding in neural circuits depends on how well time-varying stimuli are encoded by neural populations.Slow neuronal timescales, noise and network chaos can compromise reliable and rapid population response to external stimuli.A dynamic balance of externally incoming currents by strong recurrent inhibition was previously proposed as a mechanism to accurately and robustly encode a time-varying stimulus in balanced networks of binary neurons, but a theory for recurrent rate networks was missing. Here, we develop a non-stationary dynamic mean-field theory that transparently explains how a tight balance of excitatory currents by recurrent inhibition improves information encoding. We demonstrate that the mutual information rate of a time-varying input increases linearly with the tightness of balance, both in the presence of additive noise and with recurrently generated chaotic network fluctuations. We corroborated our findings in numerical experiments and demonstrated that recurrent networks with positive firing rates trained to transmit a time-varying stimulus generically use recurrent inhibition to increase the information rate. We also found that networks trained to transmit multiple independent time-varying signals spontaneously form multiple local inhibitory clusters, one for each input channel.Our findings suggest that feedforward excitatory input and local recurrent inhibition - as observed in many biological circuits - is a generic circuit motif for encoding and transmitting time-varying information in recurrent neural Zongyi Li · Miguel Liu-Schiaffini · Nikola Kovachki · Kamyar Azizzadenesheli · Burigede Liu · Kaushik Bhattacharya · Andrew Stuart · Anima Anandkumar Chaotic systems are notoriously challenging to predict because of their sensitivity to perturbations and errors due to time stepping. Despite this unpredictable behavior, for many dissipative systems the statistics of the long term trajectories are governed by an invariant measure supported on a set, known as the global attractor; for many problems this set is finite dimensional, even if the state space is infinite dimensional. For Markovian systems, the statistical properties of long-term trajectories are uniquely determined by the solution operator that maps the evolution of the system over arbitrary positive time increments. In this work, we propose a machine learning framework to learn the underlying solution operator for dissipative chaotic systems, showing that the resulting learned operator accurately captures short-time trajectories and long-time statistical behavior. Using this framework, we are able to predict various statistics of the invariant measure for the turbulent Kolmogorov Flow dynamics with Reynolds numbers up to $5000$. Andrea Tirinzoni · Rémy Degenne Elimination algorithms for bandit identification, which prune the plausible correct answers sequentially until only one remains, are computationally convenient since they reduce the problem size over time. However, existing elimination strategies are often not fully adaptive (they update their sampling rule infrequently) and are not easy to extend to combinatorial settings, where the set of answers is exponentially large in the problem dimension. On the other hand, most existing fully-adaptive strategies to tackle general identification problems are computationally demanding since they repeatedly test the correctness of every answer, without ever reducing the problem size. We show that adaptive methods can be modified to use elimination in both their stopping and sampling rules, hence obtaining the best of these two worlds: the algorithms (1) remain fully adaptive, (2) suffer a sample complexity that is never worse of their non-elimination counterpart, and (3) provably eliminate certain wrong answers early. We confirm these benefits experimentally, where elimination improves significantly the computational complexity of adaptive methods on common tasks like best-arm identification in linear bandits. Anders Aamand · Justin Chen · Piotr Indyk We propose a model for online graph problems where algorithms are given access to an oracle that predicts (e.g., based on modeling assumptions or past data) the degrees of nodes in the graph. Within this model, we study the classic problem of online bipartite matching, and a natural greedy matching algorithm called MinPredictedDegree, which uses predictions of the degrees of offline nodes. For the bipartite version of a stochastic graph model due to Chung, Lu, and Vu where the expected values of the offline degrees are known and used as predictions, we show that MinPredictedDegree stochastically dominates any other online algorithm, i.e., it is optimal for graphs drawn from this model. Since the "symmetric" version of the model, where all online nodes are identical, is a special case of the well-studied "known i.i.d. model", it follows that the competitive ratio of MinPredictedDegree on such inputs is at least 0.7299. For the special case of graphs with power law degree distributions, we show that MinPredictedDegree frequently produces matchings almost as large as the true maximum matching on such graphs. We complement these results with an extensive empirical evaluation showing that MinPredictedDegree compares favorably to state-of-the-art online algorithms for online matching. Dariusz Kowalski · Dominik Pajak Group Testing (GT) is about learning a (hidden) subset $K$, of size $k$, of some large domain $N$, of size $n \gg k$, using a sequence of queries. A result of a query provides some information about the intersection of the query with the unknown set $K$. The goal is to design efficient (polynomial time) and scalable (polylogarithmic number of queries per element in $K$) algorithms for constructing queries that allow to decode every hidden set $K$ based on the results of the queries. A vast majority of the previous work focused on randomized algorithms minimizing the number of queries; however, in case of large domains N, randomization may result in asignificant deviation from the expected precision of learning the set $K$. Others assumed unlimited computational power (existential results) or adaptiveness of queries (next query could be constructed taking into account the results of the previous queries) – the former approach is less practical due to non-efficiency, and the latter has several drawbacks including non-parallelization. To avoid all the abovementioned drawbacks, for Quantitative Group Testing (QGT) where query result is the size of its intersection with the hidden set, we present the first efficient and scalable non-adaptive deterministic algorithms for constructing queries and decoding a hidden set K from the results of the queries – these solutions do not use any randomization, adaptiveness or unlimited computational power. Luca Pesce · Bruno Loureiro · Florent Krzakala · Lenka Zdeborová A simple model to study subspace clustering is the high-dimensional $k$-Gaussian mixture model where the cluster means are sparse vectors. Here we provide an exact asymptotic characterization of the statistically optimal reconstruction error in this model in the high-dimensional regime with extensive sparsity, i.e. when the fraction of non-zero components of the cluster means $\rho$, as well as the ratio $\alpha$ between the number of samples and the dimension are fixed, while the dimension diverges. We identify the information-theoretic threshold below which obtaining a positive correlation with the true cluster means is statistically impossible. Additionally, we investigate the performance of the approximate message passing (AMP) algorithm analyzed via its state evolution, which is conjectured to be optimal among polynomial algorithm for this task. We identify in particular the existence of a statistical-to-computational gap between the algorithm that requires a signal-to-noise ratio $\lambda_{\text{alg}} \ge k / \sqrt{\alpha}$ to perform better than random, and the information theoretic threshold at $\lambda_{\text{it}} \approx \sqrt{-k \rho \log{\rho}} / \ sqrt{\alpha}$. Finally, we discuss the case of sub-extensive sparsity $\rho$ by comparing the performance of the AMP with other sparsity-enhancing algorithms, such as sparse-PCA and diagonal Philip Amortila · Nan Jiang · Dhruv Madeka · Dean Foster The current paper studies sample-efficient Reinforcement Learning (RL) in settings where only the optimal value function is assumed to be linearly-realizable. It has recently been understood that, even under this seemingly strong assumption and access to a generative model, worst-case sample complexities can be prohibitively (i.e., exponentially) large. We investigate the setting where the learner additionally has access to interactive demonstrations from an expert policy, and we present a statistically and computationally efficient algorithm (Delphi) for blending exploration with expert queries. In particular, Delphi requires $\tilde O(d)$ expert queries and a $\texttt{poly}(d,H,|A|,1/\varepsilon)$ amount of exploratory samples to provably recover an $\varepsilon$-suboptimal policy. Compared to pure RL approaches, this corresponds to an exponential improvement in sample complexity with surprisingly-little expert input. Compared to prior imitation learning (IL) approaches, our required number of expert demonstrations is independent of $H$ and logarithmic in $1/\varepsilon$, whereas all prior work required at least linear factors of both in addition to the same dependence on $d$. Towards establishing the minimal amount of expert queries needed, we show that, in the same setting, any learner whose exploration budget is \textit{polynomially-bounded} (in terms of $d,H,$ and $|A|$) will require \textit{at least} $\tilde\Omega(\sqrt{d})$ oracle calls to recover a policy competing with the expert's value function. Under the weaker assumption that the expert's policy is linear, we show that the lower bound increases to $\tilde\Omega(d)$. Shinsaku Sakaue · Taihei Oki Greedy best-first search (GBFS) and A* search (A*) are popular algorithms for path-finding on large graphs. Both use so-called heuristic functions, which estimate how close a vertex is to the goal. While heuristic functions have been handcrafted using domain knowledge, recent studies demonstrate that learning heuristic functions from data is effective in many applications. Motivated by this emerging approach, we study the sample complexity of learning heuristic functions for GBFS and A*. We build on a recent framework called \textit{data-driven algorithm design} and evaluate the \textit {pseudo-dimension} of a class of utility functions that measure the performance of parameterized algorithms. Assuming that a vertex set of size $n$ is fixed, we present $\mathrm{O}(n\lg n)$ and $\ mathrm{O}(n^2\lg n)$ upper bounds on the pseudo-dimensions for GBFS and A*, respectively, parameterized by heuristic function values. The upper bound for A* can be improved to $\mathrm{O}(n^2\lg d)$ if every vertex has a degree of at most $d$ and to $\mathrm{O}(n \lg n)$ if edge weights are integers bounded by $\mathrm{poly}(n)$. We also give $\Omega(n)$ lower bounds for GBFS and A*, which imply that our bounds for GBFS and A* under the integer-weight condition are tight up to a $\lg n$ factor. Finally, we discuss a case where the performance of A* is measured by the suboptimality and show that we can sometimes obtain a better guarantee by combining a parameter-dependent worst-case bound with a sample complexity bound. Yair Carmon · Danielle Hausler We develop and analyze algorithms for distributionally robust optimization (DRO) of convex losses. In particular, we consider group-structured and bounded $f$-divergence uncertainty sets. Our approach relies on an accelerated method that queries a ball optimization oracle, i.e., a subroutine that minimizes the objective within a small ball around the query point. Our main contribution is efficient implementations of this oracle for DRO objectives. For DRO with $N$ non-smooth loss functions, the resulting algorithms find an $\epsilon$-accurate solution with $\widetilde{O}\left(N\ epsilon^{-2/3} + \epsilon^{-2}\right)$ first-order oracle queries to individual loss functions. Compared to existing algorithms for this problem, we improve complexity by a factor of up to $\epsilon^ We study the two-stage vertex-weighted online bipartite matching problem of Feng, Niazadeh, and Saberi (SODA ‘21) in a setting where the algorithm has access to a suggested matching that is recommended in the first stage. We evaluate an algorithm by its robustness $R$, which is its performance relative to that of the optimal offline matching, and its consistency $C$, which is its performance when the advice or the prediction given is correct. We characterize for this problem the Pareto-efficient frontier between robustness and consistency, which is rare in the literature on advice-augmented algorithms, yet necessary for quantifying such an algorithm to be optimal. Specifically, we propose an algorithm that is $R$-robust and $C$-consistent for any $(R,C)$ with $0 \leq R \leq \frac{3}{4}$ and $\sqrt{1-R} + \sqrt{1-C} = 1$, and prove that no other algorithm can achieve a better tradeoff. Yucheng Lu · Wentao Guo · Christopher De Sa Random reshuffling, which randomly permutes the dataset each epoch, is widely adopted in model training because it yields faster convergence than with-replacement sampling. Recent studies indicate greedily chosen data orderings can further speed up convergence empirically, at the cost of using more computation and memory. However, greedy ordering lacks theoretical justification and has limited utility due to its non-trivial memory and computation overhead. In this paper, we first formulate an example-ordering framework named \emph{herding} and answer affirmatively that SGD with herding converges at the rate $O(T^{-2/3})$ on smooth, non-convex objectives, faster than the $O(n^{1/3}T^{-2/3})$ obtained by random reshuffling, where $n$ denotes the number of data points and $T$ denotes the total number of iterations. To reduce the memory overhead, we leverage discrepancy minimization theory to propose an online Gradient Balancing algorithm (GraB) that enjoys the same rate as herding, while reducing the memory usage from $O(nd)$ to just $O(d)$ and computation from $O(n^2)$ to $O(n)$, where $d$ denotes the model dimension. We show empirically on applications including MNIST, CIFAR10, WikiText and GLUE that GraB can outperform random reshuffling in terms of both training and validation performance, and even outperform state-of-the-art greedy ordering while reducing memory usage over $100\times$. Xudong Pan · Shengyao Zhang · Mi Zhang · Yifan Yan · Min Yang In this paper, we present a capacity-aware neuron steganography scheme (i.e., Cans) to covertly transmit multiple private machine learning (ML) datasets via a scheduled-to-publish deep neural network (DNN) as the carrier model. Unlike existing steganography schemes which treat the DNN parameters as bit strings, \textit{Cans} for the first time exploits the learning capacity of the carrier model via a novel parameter sharing mechanism. Extensive evaluation shows, Cans is the first working scheme which can covertly transmit over $10000$ real-world data samples within a carrier model which has $220\times$ less parameters than the total size of the stolen data, and simultaneously transmit multiple heterogeneous datasets within a single carrier model, under a trivial distortion rate ($<10^ {-5}$) and with almost no utility loss on the carrier model ($<1\%$). Besides, Cans implements by-design redundancy to be resilient against common post-processing techniques on the carrier model before the publishing. Sander Beckers · Hana Chockler · Joseph Halpern As autonomous systems rapidly become ubiquitous, there is a growing need for a legal and regulatory framework toaddress when and how such a system harms someone. There have been several attempts within the philosophy literature to define harm, but none of them has proven capable of dealing with with the many examples that have been presented, leading some to suggest that the notion of harm should be abandoned and ``replaced by more well-behaved notions''. As harm is generally something that is caused, most of these definitions have involved causality at some level. Yet surprisingly, none of them makes use of causal models and the definitions of actual causality that they can express. In this paper we formally define a qualitative notion of harm that uses causal models and is based on a well-known definition of actual causality (Halpern, 2016). The key novelty of our definition is that it is based on contrastive causation and uses a default utility to which the utility of actual outcomes is compared. We show that our definition is able to handle the examples from the literature, and illustrate its importance for reasoning about situations involving autonomous systems. Gagan Aggarwal · Kshipra Bhawalkar · Aranyak Mehta · Divyarthi Mohan · Alexandros Psomas Internet ad auctions have evolved from a few lines of text to richer informational layouts that include images, sitelinks, videos, etc. Ads in these new formats occupy varying amounts of space, and an advertiser can provide multiple formats, only one of which can be shown.The seller is now faced with a multi-parameter mechanism design problem.Computing an efficient allocation is computationally intractable, and therefore the standard Vickrey-Clarke-Groves (VCG) auction, while truthful and welfare-optimal, is impractical. In this paper, we tackle a fundamental problem in the design of modern ad auctions. We adopt a ``Myersonian'' approach and study allocation rules that are monotone both in the bid and set of rich ads. We show that such rules can be paired with a payment function to give a truthful auction. Our main technical challenge is designing a monotone rule that yields a good approximation to the optimal welfare. Monotonicity doesn't hold for standard algorithms, e.g. the incremental bang-per-buck order, that give good approximations to ``knapsack-like'' problems such as ours. In fact, we show that no deterministic monotone rule can approximate the optimal welfare within a factor better than $2$ (while there is a non-monotone FPTAS). Our main result is a new, simple, greedy and monotone allocation rule that guarantees a $3$ approximation. In ad auctions in practice, monotone allocation rules are often paired with the so-called \emph{Generalized Second Price (GSP)} payment rule, which charges the minimum threshold price below which the allocation changes. We prove that, even though our monotone allocation rule paired with GSP is not truthful, its Price of Anarchy (PoA) is bounded. Under standard no-overbidding assumptions, we prove bounds on the a pure and Bayes-Nash PoA. Finally, we experimentally test our algorithms on real-world data. Marcus Nordstrom · Henrik Hult · Fredrik Löfman · Jonas Söderberg We study two of the most popular performance metrics in medical image segmentation, Accuracy and Dice, when the target labels are noisy. For both metrics, several statements related to characterization and volume properties of the set of optimal segmentations are proved, and associated experiments are provided. Our main insights are: (i) the volume of the solutions to both metrics may deviate significantly from the expected volume of the target, (ii) the volume of a solution to Accuracy is always less than or equal to the volume of a solution to Dice and (iii) the optimal solutions to both of these metrics coincide when the set of feasible segmentations is constrained to the set of segmentations with the volume equal to the expected volume of the target. Max Springer · MohammadTaghi Hajiaghayi · Debmalya Panigrahi · Mohammad Khani The Santa Claus problem is a fundamental problem in {\em fair division}: the goal is to partition a set of {\em heterogeneous} items among {\em heterogeneous} agents so as to maximize the minimum value of items received by any agent. In this paper, we study the online version of this problem where the items are not known in advance and have to be assigned to agents as they arrive over time. If the arrival order of items is arbitrary, then no good assignment rule exists in the worst case. However, we show that, if the arrival order is random, then for $n$ agents and any $\varepsilon > 0$, we can obtain a competitive ratio of $1-\varepsilon$ when the optimal assignment gives value at least $\Omega(\log n / \varepsilon^2)$ to every agent (assuming each item has at most unit value). We also show that this result is almost tight: namely, if the optimal solution has value at most $C \ln n / \varepsilon$ for some constant $C$, then there is no $(1-\varepsilon)$-competitive algorithm even for random arrival order. Emmanuel Esposito · Federico Fusco · Dirk van der Hoeven · Nicolò Cesa-Bianchi The framework of feedback graphs is a generalization of sequential decision-making with bandit or full information feedback. In this work, we study an extension where the directed feedback graph is stochastic, following a distribution similar to the classical Erdős-Rényi model. Specifically, in each round every edge in the graph is either realized or not with a distinct probability for each edge. We prove nearly optimal regret bounds of order $\min\bigl\{\min_{\varepsilon} \sqrt{(\alpha_\varepsilon/\varepsilon) T},\, \min_{\varepsilon} (\delta_\varepsilon/\varepsilon)^{1/3} T^{2/3}\bigr \}$ (ignoring logarithmic factors), where $\alpha_{\varepsilon}$ and $\delta_{\varepsilon}$ are graph-theoretic quantities measured on the support of the stochastic feedback graph $\mathcal{G}$ with edge probabilities thresholded at $\varepsilon$. Our result, which holds without any preliminary knowledge about $\mathcal{G}$, requires the learner to observe only the realized out-neighborhood of the chosen action. When the learner is allowed to observe the realization of the entire graph (but only the losses in the out-neighborhood of the chosen action), we derive a more efficient algorithm featuring a dependence on weighted versions of the independence and weak domination numbers that exhibits improved bounds for some special cases. Dheeraj Baby · Yu-Xiang Wang We consider the problem of nonstochastic control with a sequence of quadratic losses, i.e., LQR control. We provide an efficient online algorithm that achieves an optimal dynamic (policy) regret of $ \tilde{O}(n^{1/3} \mathcal{TV}(M_{1:n}^{2/3} \vee 1)$, where $\mathcal{TV}(M_{1:n})$ is the total variation of any oracle sequence of \emph{Disturbance Action} policies parameterized by $M_1,...,M_n$ --- chosen in hindsight to cater to unknown nonstationarity. The rate improves the best known rate of $\tilde{O}(\sqrt{n (\mathcal{TV}(M_{1:n})+1)} )$ for general convex losses and is information-theoretically optimal for LQR. Main technical components include the reduction of LQR to online linear regression with delayed feedback due to Foster & Simchowitz 2020, as well as a new \ emph{proper} learning algorithm with an optimal $\tilde{O}(n^{1/3})$ dynamic regret on a family of "minibatched'' quadratic losses, which could be of independent interest. Yeoneung Kim · Insoon Yang · Kwang-Sung Jun In online learning problems, exploiting low variance plays an important role in obtaining tight performance guarantees yet is challenging because variances are often not known a priori. Recently, considerable progress has been made by Zhang et al. (2021) where they obtain a variance-adaptive regret bound for linear bandits without knowledge of the variances and a horizon-free regret bound for linear mixture Markov decision processes (MDPs). In this paper, we present novel analyses that improve their regret bounds significantly. For linear bandits, we achieve $\tilde O(\min\{d\sqrt{K}, d^ {1.5}\sqrt{\sum_{k=1}^K \sigma_k^2}\} + d^2)$ where $d$ is the dimension of the features, $K$ is the time horizon, and $\sigma_k^2$ is the noise variance at time step $k$, and $\tilde O$ ignores polylogarithmic dependence, which is a factor of $d^3$ improvement. For linear mixture MDPs with the assumption of maximum cumulative reward in an episode being in $[0,1]$, we achieve a horizon-free regret bound of $\tilde O(d \sqrt{K} + d^2)$ where $d$ is the number of base models and $K$ is the number of episodes. This is a factor of $d^{3.5}$ improvement in the leading term and $d^7$ in the lower order term. Our analysis critically relies on a novel peeling-based regret analysis that leverages the elliptical potential `count' lemma. Ioannis Anagnostides · Gabriele Farina · Ioannis Panageas · Tuomas Sandholm We show that, for any sufficiently small fixed $\epsilon > 0$, when both players in a general-sum two-player (bimatrix) game employ optimistic mirror descent (OMD) with smooth regularization, learning rate $\eta = O(\epsilon^2)$ and $T = \Omega(poly(1/\epsilon))$ repetitions, either the dynamics reach an $\epsilon$-approximate Nash equilibrium (NE), or the average correlated distribution of play is an $\Omega(poly(\epsilon))$-strong coarse correlated equilibrium (CCE): any possible unilateral deviation does not only leave the player worse, but will decrease its utility by $\Omega (poly(\epsilon))$. As an immediate consequence, when the iterates of OMD are bounded away from being Nash equilibria in a bimatrix game, we guarantee convergence to an \emph{exact} CCE after only $O (1)$ iterations. Our results reveal that uncoupled no-regret learning algorithms can converge to CCE in general-sum games remarkably faster than to NE in, for example, zero-sum games. To establish this, we show that when OMD does not reach arbitrarily close to a NE, the (cumulative) regret of both players is not only negative, but decays linearly with time. Given that regret is the canonical measure of performance in online learning, our results suggest that cycling behavior of no-regret learning algorithms in games can be justified in terms of efficiency. Hengquan Guo · Xin Liu · Honghao Wei · Lei Ying This paper considers online convex optimization with hard constraints and analyzes achievable regret and cumulative hard constraint violation (violation for short). The problem distinguishes itself from online convex optimization with soft constraints, where a violation at one round can be compensated/cancelled by a conservative decision at a different round. We propose a RECtified Online Optimization algorithm (RECOO) and consider two settings: fixed constraints and adversarial constraints. Both settings have been considered in the literature. Compared with existing results, {\em RECOO achieves the best of two worlds and beyond.} For the fixed-constraints setting, RECOO achieves $O\left(\sqrt{T}\right)$ regret and $O(1)$ violation, where $T$ is the learning horizon. The best known results in this case are $O(\sqrt{T})$ regret and $O\left(T^{1/4}\right)$ violation. For the adversarial-constraints setting, it guarantees $O(\sqrt{T})$ regret and $O(T^{3/4})$ violation, which match the best existing results. When the loss functions are strongly convex, RECOO can guarantee $O(\log T)$ regret and $O(1)$ violation for fixed constraints, and $O(\log T)$ regret and $O(\ sqrt{T\log T})$ violation for adversarial constraints. Both these results are order-wise better than the existing bounds. The regret and violation bounds mentioned above use the best fixed decision in hindsight as the baseline. This paper further considers a dynamic baseline where the comparator sequence is time-varying. This paper shows that RECOO not only improves the existing results in the fixed-constraints setting but also {\em for the first time,} guarantees dynamic regret and violation bounds in the adversarial-constraints setting. Our experiment results confirm that RECOO outperforms several existing algorithms for both fixed and adversarial constraints. Gene Li · Cong Ma · Nati Srebro We present a family $\{\widehat{\pi}_p\}_{p\ge 1}$ of pessimistic learning rules for offline learning of linear contextual bandits, relying on confidence sets with respect to different $\ell_p$ norms, where $\widehat{\pi}_2$ corresponds to Bellman-consistent pessimism (BCP), while $\widehat{\pi}_\infty$ is a novel generalization of lower confidence bound (LCB) to the linear setting. We show that the novel $\widehat{\pi}_\infty$ learning rule is, in a sense, adaptively optimal, as it achieves the minimax performance (up to log factors) against all $\ell_q$-constrained problems, and as such it strictly dominates all other predictors in the family, including $\widehat{\pi}_2$. Yihan Zhang · Nir Weinberger We consider a high-dimensional mean estimation problem over a binary hidden Markov model, which illuminates the interplay between memory in data, sample size, dimension, and signal strength in statistical inference. In this model, an estimator observes $n$ samples of a $d$-dimensional parameter vector $\theta_{*}\in\mathbb{R}^{d}$, multiplied by a random sign $ S_i $ ($1\le i\le n$), and corrupted by isotropic standard Gaussian noise. The sequence of signs $\{S_{i}\}_{i\in[n]}\in\{-1,1\}^{n}$ is drawn from a stationary homogeneous Markov chain with flip probability $\delta\in[0,1/2] $. As $\delta$ varies, this model smoothly interpolates two well-studied models: the Gaussian Location Model for which $\delta=0$ and the Gaussian Mixture Model for which $\delta=1/2$. Assuming that the estimator knows $\delta$, we establish a nearly minimax optimal (up to logarithmic factors) estimation error rate, as a function of $\|\theta_{*}\|,\delta,d,n$. We then provide an upper bound to the case of estimating $\delta$, assuming a (possibly inaccurate) knowledge of $\theta_{*}$. The bound is proved to be tight when $\theta_{*}$ is an accurately known constant. These results are then combined to an algorithm which estimates $\theta_{*}$ with $\delta$ unknown a priori, and theoretical guarantees on its error are stated. Ilai Bistritz · Nicholas Bambos Abstract Consider $N$ cooperative agents such that for $T$ turns, each agent n takes an action $a_{n}$ and receives a stochastic reward $r_{n}\left(a_{1},\ldots,a_{N}\right)$. Agents cannot observe the actions of other agents and do not know even their own reward function. The agents can communicate with their neighbors on a connected graph $G$ with diameter $d\left(G\right)$. We want each agent $n$ to achieve an expected average reward of at least $\lambda_{n}$ over time, for a given quality of service (QoS) vector $\boldsymbol{\lambda}$. A QoS vector $\boldsymbol{\lambda}$ is not necessarily achievable. By giving up on immediate reward, knowing that the other agents will compensate later, agents can improve their achievable capacity region. Our main observation is that the gap between $\lambda_{n}t$ and the accumulated reward of agent $n$, which we call the QoS regret, behaves like a queue. Inspired by this observation, we propose a distributed algorithm that aims to learn a max-weight matching of agents to actions. In each epoch, the algorithm employs a consensus phase where the agents agree on a certain weighted sum of rewards by communicating only $O\left(d\ left(G\right)\right)$ numbers every turn. Then, the algorithm uses distributed successive elimination on a random subset of action profiles to approximately maximize this weighted sum of rewards. We prove a bound on the accumulated sum of expected QoS regrets of all agents, that holds if $\boldsymbol{\lambda}$ is a safety margin $\varepsilon_{T}$ away from the boundary of the capacity region, where $\varepsilon_{T}\rightarrow0$ as $T\rightarrow\infty$. This bound implies that, for large $T$, our algorithm can achieve any $\boldsymbol{\lambda}$ in the interior of the dynamic capacity region, while all agents are guaranteed an empirical average expected QoS regret of $\tilde{O}\left(1\right)$ over $t=1,\ldots,T$ which never exceeds $\tilde{O}\left(\sqrt{t}\right)$ for any $t$. We then extend our result to time-varying i.i.d. communication graphs. Dmitry Kovalev · Alexander Gasnikov In this paper, we revisit the smooth and strongly-convex-strongly-concave minimax optimization problem. Zhang et al. (2021) and Ibrahim et al. (2020) established the lower bound $\Omega\left(\sqrt{\ kappa_x\kappa_y} \log \frac{1}{\epsilon}\right)$ on the number of gradient evaluations required to find an ϵ-accurate solution, where κx and κy are condition numbers for the strong convexity and strong concavity assumptions. However, the existing state-of-the-art methods do not match this lower bound: algorithms of Lin et al. (2020) and Wang and Li (2020) have gradient evaluation complexity $\mathcal{O}\left(\sqrt{\kappa_x\kappa_y} \log^3 \frac{1}{\epsilon}\right)$ and $\mathcal{O}\left( \sqrt{\kappa_x\kappa_y}\log^3 (\kappa_x\kappa_y)\log\frac{1}{\epsilon}\right)$, respectively. We fix this fundamental issue by providing the first algorithm with $\mathcal{O}\left(\sqrt{\kappa_x\kappa_y} \log \frac{1}{\epsilon}\right)$ gradient evaluation complexity. We design our algorithm in three steps: (i) we reformulate the original problem as a minimization problem via the pointwise conjugate function; (ii) we apply a specific variant of the proximal point algorithm to the reformulated problem; (iii) we compute the proximal operator inexactly using the optimal algorithm for operator norm reduction in monotone inclusions. Dat Do · Nhat Ho · XuanLong Nguyen As we collect additional samples from a data population for which a known density function estimate may have been previously obtained by a black box method, the increased complexity of the data set may result in the true density being deviated from the known estimate by a mixture distribution. To model this phenomenon, we consider the \emph{deviating mixture model} $(1-\lambda^{*})h_0 + \lambda ^{*} (\sum_{i = 1}^{k} p_{i}^{*} f(x|\theta_{i}^{*}))$, where $h_0$ is a known density function, while the deviated proportion $\lambda^{*}$ and latent mixing measure $G_{*} = \sum_{i = 1}^{k} p_{i}^ {*} \delta_{\theta_i^{*}}$ associated with the mixture distribution are unknown. Via a novel notion of distinguishability between the known density $h_{0}$ and the deviated mixture distribution, we establish rates of convergence for the maximum likelihood estimates of $\lambda^{*}$ and $G^{*}$ under Wasserstein metric. Simulation studies are carried out to illustrate the theory. Saeed Masoudian · Julian Zimmert · Yevgeny Seldin We present a modified tuning of the algorithm of Zimmert and Seldin [2020] for adversarial multiarmed bandits with delayed feedback, which in addition to the minimax optimal adversarial regret guarantee shown by Zimmert and Seldin [2020] simultaneously achieves a near-optimal regret guarantee in the stochastic setting with fixed delays. Specifically, the adversarial regret guarantee is $\ mathcal{O}(\sqrt{TK} + \sqrt{dT\log K})$, where $T$ is the time horizon, $K$ is the number of arms, and $d$ is the fixed delay, whereas the stochastic regret guarantee is $\mathcal{O}\left(\sum_{i \ neq i^*}(\frac{1}{\Delta_i} \log(T) + \frac{d}{\Delta_{i}}) + d K^{1/3}\log K\right)$, where $\Delta_i$ are the suboptimality gaps. We also present an extension of the algorithm to the case of arbitrary delays, which is based on an oracle knowledge of the maximal delay $d_{max}$ and achieves $\mathcal{O}(\sqrt{TK} + \sqrt{D\log K} + d_{max}K^{1/3} \log K)$ regret in the adversarial regime, where $D$ is the total delay, and $\mathcal{O}\left(\sum_{i \neq i^*}(\frac{1}{\Delta_i} \log(T) + \frac{\sigma_{max}}{\Delta_{i}}) + d_{max}K^{1/3}\log K\right)$ regret in the stochastic regime, where $\sigma_{max}$ is the maximal number of outstanding observations. Finally, we present a lower bound that matches regret upper bound achieved by the skipping technique of Zimmert and Seldin [2020] in the adversarial setting. Gautam Kamath · Argyris Mouzakis · Vikrant Singhal We prove new lower bounds for statistical estimation tasks under the constraint of $(\varepsilon,\delta)$-differential privacy. First, we provide tight lower bounds for private covariance estimation of Gaussian distributions. We show that estimating the covariance matrix in Frobenius norm requires $\Omega(d^2)$ samples, and in spectral norm requires $\Omega(d^{3/2})$ samples, both matching upper bounds up to logarithmic factors. We prove these bounds via our main technical contribution, a broad generalization of the fingerprinting method to exponential families. Additionally, using the private Assouad method of Acharya, Sun, and Zhang, we show a tight $\Omega(d/(\alpha^2 \varepsilon))$ lower bound for estimating the mean of a distribution with bounded covariance to $\alpha$-error in $\ell_2$-distance. Prior known lower bounds for all these problems were either polynomially weaker or held under the stricter condition of $(\varepsilon,0)$-differential privacy. Runyu Zhang · Qinghua Liu · Huan Wang · Caiming Xiong · Na Li · Yu Bai This paper studies policy optimization algorithms for multi-agent reinforcement learning. We begin by proposing an algorithm framework for two-player zero-sum Markov Games in the full-information setting, where each iteration consists of a policy update step at each state using a certain matrix game algorithm, and a value update step with a certain learning rate. This framework unifies many existing and new policy optimization algorithms. We show that the \emph{state-wise average policy} of this algorithm converges to an approximate Nash equilibrium (NE) of the game, as long as the matrix game algorithms achieve low weighted regret at each state, with respect to weights determined by the speed of the value updates. Next, we show that this framework instantiated with the Optimistic Follow-The-Regularized-Leader (OFTRL) algorithm at each state (and smooth value updates) can find an $\mathcal{\widetilde{O}}(T^{-5/6})$ approximate NE in $T$ iterations, and a similar algorithm with slightly modified value update rule achieves a faster $\mathcal{\widetilde{O}}(T^{-1})$ convergence rate. These improve over the current best $\mathcal{\widetilde{O}}(T^{-1/2})$ rate of symmetric policy optimization type algorithms. We also extend this algorithm to multi-player general-sum Markov Games and show an $\mathcal{\widetilde{O}}(T^{-3/4})$ convergence rate to Coarse Correlated Equilibria (CCE). Finally, we provide a numerical example to verify our theory and investigate the importance of smooth value updates, and find that using ''eager'' value updates instead (equivalent to the independent natural policy gradient algorithm) may significantly slow down the convergence, even on a simple game with $H=2$ layers. Slavomír Hanzely · Dmitry Kamzolov · Dmitry Pasechnyuk · Alexander Gasnikov · Peter Richtarik · Martin Takac In this paper, we present the first stepsize schedule for Newton method resulting in fast global and local convergence guarantees. In particular, we a) prove an $\mathcal O \left( 1/{k^2} \right)$ global rate, which matches the state-of-the-art global rate of cubically regularized Newton method of Polyak and Nesterov (2006) and of regularized Newton method of Mishchenko (2021), and the later variant of Doikov and Nesterov (2021), b) prove a local quadratic rate, which matches the best-known local rate of second-order methods, and c) our stepsize formula is simple, explicit, and does not require solving any subproblem. Our convergence proofs hold under affine-invariant assumptions closely related to the notion of self-concordance. Finally, our method has competitive performance when compared to existing baselines which share the same fast global convergence guarantees. Joachim Bona-Pellissier · François Malgouyres · Francois Bachoc Is a sample rich enough to determine, at least locally, the parameters of a neural network? To answer this question, we introduce a new local parameterization of a given deep ReLU neural network by fixing the values of some of its weights. This allows us to define local lifting operators whose inverses are charts of a smooth manifold of a high dimensional space. The function implemented by the deep ReLU neural network composes the local lifting with a linear operator which depends on the sample. We derive from this convenient representation a geometrical necessary and sufficient condition of local identifiability. Looking at tangent spaces, the geometrical condition provides: 1/ a sharp and testable necessary condition of identifiability and 2/ a sharp and testable sufficient condition of local identifiability. The validity of the conditions can be tested numerically using backpropagation and matrix rank computations. Wiebke Günther · Urmi Ninad · Jonas Wahl · Jakob Runge Conditional independence (CI) testing is frequently used in data analysis and machine learning for various scientific fields and it forms the basis of constraint-based causal discovery. Oftentimes, CI testing relies on strong, rather unrealistic assumptions. One of these assumptions is homoskedasticity, in other words, a constant conditional variance is assumed. We frame heteroskedasticity in a structural causal model framework and present an adaptation of the partial correlation CI test that works well in the presence of heteroskedastic noise, given that expert knowledge about the heteroskedastic relationships is available. Further, we provide theoretical consistency results for the proposed CI test which carry over to causal discovery under certain assumptions. Numerical causal discovery experiments demonstrate that the adapted partial correlation CI test outperforms the standard test in the presence of heteroskedasticity and is on par for the homoskedastic case. Finally, we discuss the general challenges and limits as to how expert knowledge about heteroskedasticity can be accounted for in causal discovery. Steven Yin · Shipra Agrawal · Assaf Zeevi We study the problem of allocating $T$ sequentially arriving items among $n$ homogenous agents under the constraint that each agent must receive a prespecified fraction of all items, with the objective of maximizing the agents' total valuation of items allocated to them. The agents' valuations for the item in each round are assumed to be i.i.d. but their distribution is apriori unknown to the central planner.vTherefore, the central planner needs to implicitly learn these distributions from the observed values in order to pick a good allocation policy. However, an added challenge here is that the agents are strategic with incentives to misreport their valuations in order to receive better allocations. This sets our work apart both from the online auction mechanism design settings which typically assume known valuation distributions and/or involve payments, and from the online learning settings that do not consider strategic agents. To that end, our main contribution is an online learning based allocation mechanism that is approximately Bayesian incentive compatible, and when all agents are truthful, guarantees a sublinear regret for individual agents' utility compared to that under the optimal offline allocation policy. We study compressive sensing with a deep generative network prior. Initial theoretical guarantees for efficient recovery from compressed linear measurements have been developed for signals in the range of a ReLU network with Gaussian weights and logarithmic expansivity: that is when each layer is larger than the previous one by a logarithmic factor. It was later shown that constant expansivity is sufficient for recovery. It has remained open whether the expansivity can be relaxed, allowing for networks with contractive layers (as often the case of real generators). In this work we answer this question, proving that a signal in the range of a Gaussian generative network can be recovered from few linear measurements provided that the width of the layers is proportional to the input layer size (up to log factors). This condition allows the generative network to have contractive layers. Our result is based on showing that Gaussian matrices satisfy a matrix concentration inequality which we term Range Restricted Weight Distribution Condition (R2WDC) and which weakens the Weight Distribution Condition (WDC) upon which previous theoretical guarantees were based. The WDC has also been used to analyze other signal recovery problems with generative network priors. By replacing the WDC with the R2WDC, we are able to extend previous results for signal recovery with expansive generative network priors to non-expansive ones. We discuss these extensions for phase retrieval, denoising, and spiked matrix recovery. Osbert Bastani · Varun Gupta · Christopher Jung · Georgy Noarov · Ramya Ramalingam · Aaron Roth We give a simple, generic conformal prediction method for sequential prediction that achieves target empirical coverage guarantees on adversarial data. It is computationally lightweight --- comparable to split conformal prediction --- but does not require having a held-out validation set, and so all data can be used for training models from which to derive a conformal score. Furthermore, it gives stronger than marginal coverage guarantees in two ways. First, it gives threshold-calibrated prediction sets that have correct empirical coverage even conditional on the threshold used to form the prediction set from the conformal score. Second, the user can specify an arbitrary collection of subsets of the feature space --- possibly intersecting --- and the coverage guarantees will also hold conditional on membership in each of these subsets. We call our algorithm MVP, short for MultiValid Prediction. We give both theory and an extensive set of empirical Arpit Agarwal · Sanjeev Khanna · Huan Li · Prathamesh Patil Hierarchical clustering over graphs is a fundamental task in data mining and machine learning with applications in many domains including phylogenetics, social network analysis, and information retrieval. Specifically, we consider the recently popularized objective function for hierarchical clustering due to Dasgupta~\cite{Dasgupta16}, namely, minimum cost hierarchical partitioning. Previous algorithms for (approximately) minimizing this objective function require linear time/space complexity. In many applications the underlying graph can be massive in size making it computationally challenging to process the graph even using a linear time/space algorithm. As a result, there is a strong interest in designing algorithms that can perform global computation using only sublinear resources (space, time, and communication). The focus of this work is to study hierarchical clustering for massive graphs under three well-studied models of sublinear computation which focus on space, time, and communication, respectively, as the primary resources to optimize: (1) (dynamic) streaming model where edges are presented as a stream, (2) query model where the graph is queried using neighbor and degree queries, (3) massively parallel computation (MPC) model where the edges of the graph are partitioned over several machines connected via a communication channel.We design sublinear algorithms for hierarchical clustering in all three models above. At the heart of our algorithmic results is a view of the objective in terms of cuts in the graph, which allows us to use a relaxed notion of cut sparsifiers to do hierarchical clustering while introducing only a small distortion in the objective function. Our main algorithmic contributions are then to show how cut sparsifiers of the desired form can be efficiently constructed in the query model and the MPC model. We complement our algorithmic results by establishing nearly matching lower bounds that rule out the possibility of designing algorithms with better performance guarantees in each of these models. Yao Yao · Qihang Lin · Tianbao Yang The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning. However, it summarizes the true positive rates (TPRs) over all false positive rates (FPRs) in the ROC space, which may include the FPRs with no practical relevance in some applications. The partial AUC, as a generalization of the AUC, summarizes only the TPRs over a specific range of the FPRs and is thus a more suitable performance measure in many real-world situations. Although partial AUC optimization in a range of FPRs had been studied, existing algorithms are not scalable to big data and not applicable to deep learning. To address this challenge, we cast the problem into a non-smooth difference-of-convex (DC) program for any smooth predictive functions (e.g., deep neural networks), which allowed us to develop an efficient approximated gradient descent method based on the Moreau envelope smoothing technique, inspired by recent advances in non-smooth DC optimization. To increase the efficiency of large data processing, we used an efficient stochastic block coordinate update in our algorithm. Our proposed algorithm can also be used to minimize the sum of ranked range loss, which also lacks efficient solvers. We established a complexity of $\tilde O(1/\epsilon^6)$ for finding a nearly $\epsilon$-critical solution. Finally, we numerically demonstrated the effectiveness of our proposed algorithms in training both linear models and deep neural networks for partial AUC maximization and sum of ranked range loss Jiancong Xiao · Yanbo Fan · Ruoyu Sun · Jue Wang · Zhi-Quan Luo In adversarial machine learning, deep neural networks can fit the adversarial examples on the training dataset but have poor generalization ability on the test set. This phenomenon is called robust overfitting, and it can be observed when adversarially training neural nets on common datasets, including SVHN, CIFAR-10, CIFAR-100, and ImageNet. In this paper, we study the robust overfitting issue of adversarial training by using tools from uniform stability. One major challenge is that the outer function (as a maximization of the inner function) is nonsmooth, so the standard technique (e.g., Hardt et al., 2016) cannot be applied. Our approach is to consider $\eta$-approximate smoothness: we show that the outer function satisfies this modified smoothness assumption with $\eta$ being a constant related to the adversarial perturbation $\epsilon$. Based on this, we derive stability-based generalization bounds for stochastic gradient descent (SGD) on the general class of $\ eta$-approximate smooth functions, which covers the adversarial loss. Our results suggest that robust test accuracy decreases in $\epsilon$ when $T$ is large, with a speed between $\Omega(\epsilon\ sqrt{T})$ and $\mathcal{O}(\epsilon T)$. This phenomenon is also observed in practice. Additionally, we show that a few popular techniques for adversarial training (\emph{e.g.,} early stopping, cyclic learning rate, and stochastic weight averaging) are stability-promoting in theory. Qianyi Li · Haim Sompolinsky Recently proposed Gated Linear Networks (GLNs) present a tractable nonlinear network architecture, and exhibit interesting capabilities such as learning with local error signals and reduced forgetting in sequential learning. In this work, we introduce a novel gating architecture, named Globally Gated Deep Linear Networks (GGDLNs) where gating units are shared among all processing units in each layer, thereby decoupling the architectures of the nonlinear but unlearned gating and the learned linear processing motifs. We derive exact equations for the generalization properties of Bayesian Learning in these networks in the finite-width thermodynamic limit, defined by $N, P\rightarrow\infty$ while $P/N=O(1)$ where $N$ and $P$ are the hidden layers' width and size of training data sets respectfully. We find that the statistics of the network predictor can be expressed in terms of kernels that undergo shape renormalization through a data-dependent order-parameter matrix compared to the infinite-width Gaussian Process (GP) kernels. Our theory accurately captures the behavior of finite width GGDLNs trained with gradient descent (GD) dynamics. We show that kernel shape renormalization gives rise to rich generalization properties w.r.t. network width, depth, and $L_2$ regularization amplitude. Interestingly, networks with a large number of gating units behave similarly to standard ReLU architectures. Although gating units in the model do not participate in supervised learning, we show the utility of unsupervised learning of the gating parameters. Additionally, our theory allows the evaluation of the network capacity for learning multiple tasks by incorporating task-relevant information into the gating units. In summary, our work is the first exact theoretical solution of learning in a family of nonlinear networks with finite width. The rich and diverse behavior of the GGDLNs suggests that they are helpful analytically tractable models of learning single and multiple tasks, in finite-width nonlinear deep networks. Masaaki Nishino · Kengo Nakamura · Norihito Yasuda Machine learning technologies have been used in a wide range of practical systems.In practical situations, it is natural to expect the input-output pairs of a machine learning model to satisfy some requirements.However, it is difficult to obtain a model that satisfies requirements by just learning from examples.A simple solution is to add a module that checks whether the input-output pairs meet the requirements and then modifies the model's outputs. Such a module, which we call a {\em concurrent verifier} (CV), can give a certification, although how the generalizability of the machine learning model changes using a CV is unclear. This paper gives a generalization analysis of learning with a CV. We analyze how the learnability of a machine learning model changes with a CV and show a condition where we can obtain a guaranteed hypothesis using a verifier only in the inference time.We also show that typical error bounds based on Rademacher complexity will be no larger than that of the original model when using a CV in multi-class classification and structured prediction settings. Zhouxing Shi · Yihan Wang · Huan Zhang · J. Zico Kolter · Cho-Jui Hsieh Lipschitz constants are connected to many properties of neural networks, such as robustness, fairness, and generalization. Existing methods for computing Lipschitz constants either produce relatively loose upper bounds or are limited to small networks. In this paper, we develop an efficient framework for computing the $\ell_\infty$ local Lipschitz constant of a neural network by tightly upper bounding the norm of Clarke Jacobian via linear bound propagation. We formulate the computation of local Lipschitz constants with a linear bound propagation process on a high-order backward graph induced by the chain rule of Clarke Jacobian. To enable linear bound propagation, we derive tight linear relaxations for specific nonlinearities in Clarke Jacobian. This formulate unifies existing ad-hoc approaches such as RecurJac, which can be seen as a special case of ours with weaker relaxations. The bound propagation framework also allows us to easily borrow the popular Branch-and-Bound (BaB) approach from neural network verification to further tighten Lipschitz constants. Experiments show that on tiny models, our method produces comparable bounds compared to exact methods that cannot scale to slightly larger models; on larger models, our method efficiently produces tighter results than existing relaxed or naive methods, and our method scales to much larger practical models that previous works could not handle. We also demonstrate an application on provable monotonicity analysis. Code is available at https://github.com/shizhouxing/Local-Lipschitz-Constants. Sachin Goyal · Mingjie Sun · Aditi Raghunathan · J. Zico Kolter Test-time adaptation (TTA) refers to adapting neural networks to distribution shifts, specifically with just access to unlabeled test samples from the new domain at test-time. Prior TTA methods optimize over unsupervised objectives such as the entropy of model predictions in TENT (Wang et al., 2021), but it is unclear what exactly makes a good TTA loss. In this paper, we start by presenting a surprising phenomenon: if we attempt to $\textit{meta-learn}$ the ``best'' possible TTA loss over a wide class of functions, then we recover a function that is $\textit{remarkably}$ similar to (a temperature-scaled version of) the softmax-entropy employed by TENT. This only holds, however, if the classifier we are adapting is trained via cross-entropy loss; if the classifier is trained via squared loss, a different ``best'' TTA loss emerges.To explain this phenomenon, we analyze test-time adaptation through the lens of the training losses's $\textit{convex conjugate}$. We show that under natural conditions, this (unsupervised) conjugate function can be viewed as a good local approximation to the original supervised loss and indeed, it recovers the ``best'' losses found by meta-learning. This leads to a generic recipe than be used to find a good TTA loss for $\textit{any}$ given supervised training loss function of a general class. Empirically, our approach dominates other TTA alternatives over a wide range of domain adaptation benchmarks. Our approach is particularly of interest when applied to classifiers trained with $\textit{novel}$ loss functions, e.g., the recently-proposed PolyLoss (Leng et al., 2022) function, where it differs substantially from (and outperforms) an entropy-based loss. Further, we show that our conjugate based approach can also be interpreted as a kind of self-training using a very specific soft label, which we refer to as the $\textit{conjugate pseudo-label}$. Overall, therefore, our method provides a broad framework for better understanding and improving test-time adaptation. Code is available at https://github.com/locuslab/tta_conjugate. Aldo Pacchiano · Christoph Dann · Claudio Gentile We study the problem of model selection in bandit scenarios in the presence of nested policy classes, with the goal of obtaining simultaneous adversarial and stochastic (``best of both worlds") high-probability regret guarantees. Our approach requires that each base learner comes with a candidate regret bound that may or may not hold, while our meta algorithm plays each base learner according to a schedule that keeps the base learner's candidate regret bounds balanced until they are detected to violate their guarantees. We develop careful mis-specification tests specifically designed to blend the above model selection criterion with the ability to leverage the (potentially benign) nature of the environment. We recover the model selection guarantees of the CORRAL algorithm for adversarial environments, but with the additional benefit of achieving high probability regret bounds. More importantly, our model selection results also hold simultaneously in stochastic environments under gap assumptions. These are the first theoretical results that achieve best-of-both world (stochastic and adversarial) guarantees while performing model selection in contextual bandit scenarios. Radu Marinescu · Haifeng Qian · Alexander Gray · Debarun Bhattacharjya · Francisco Barahona · Tian Gao · Ryan Riegel · Pravinda Sahu We introduce Logical Credal Networks (or LCNs for short) -- an expressive probabilistic logic that generalizes prior formalisms that combine logic and probability. Given imprecise information represented by probability bounds and conditional probability bounds on logic formulas, an LCN specifies a set of probability distributions over all its interpretations. Our approach allows propositional and first-order logic formulas with few restrictions, e.g., without requiring acyclicity. We also define a generalized Markov condition that allows us to identify implicit independence relations between atomic formulas. We evaluate our method on benchmark problems such as random networks, Mastermind games with uncertainty and credit card fraud detection. Our results show that the LCN outperforms existing approaches; its advantage lies in aggregating multiple sources of imprecise information. Heng Dong · Tonghan Wang · Jiayuan Liu · Chongjie Zhang Modular Reinforcement Learning (RL) decentralizes the control of multi-joint robots by learning policies for each actuator. Previous work on modular RL has proven its ability to control morphologically different agents with a shared actuator policy. However, with the increase in the Degree of Freedom (DoF) of robots, training a morphology-generalizable modular controller becomes exponentially difficult. Motivated by the way the human central nervous system controls numerous muscles, we propose a Synergy-Oriented LeARning (SOLAR) framework that exploits the redundant nature of DoF in robot control. Actuators are grouped into synergies by an unsupervised learning method, and a synergy action is learned to control multiple actuators in synchrony. In this way, we achieve a low-rank control at the synergy level. We extensively evaluate our method on a variety of robot morphologies, and the results show its superior efficiency and generalizability, especially on robots with a large DoF like Humanoids++ and UNIMALs. Xinyu Pi · Wanjun Zhong · Yan Gao · Nan Duan · Jian-Guang Lou We present LogiGAN, an unsupervised adversarial pre-training framework for improving logical reasoning abilities of language models. Upon automatic identification of logical reasoning phenomena in massive text corpus via detection heuristics, we train language models to predict the masked-out logical statements. Inspired by the facilitation effect of reflective thinking in human learning, we analogically simulate the learning-thinking process with an adversarial Generator-Verifier architecture to assist logic learning. LogiGAN implements a novel sequential GAN approach that (a) circumvents the non-differentiable challenge of the sequential GAN by leveraging the Generator as a sentence-level generative likelihood scorer with a learning objective of reaching scoring consensus with the Verifier; (b) is computationally feasible for large-scale pre-training with arbitrary target length. Both base and large size language models pre-trained with LogiGAN demonstrate obvious performance improvement on 12 datasets requiring general reasoning abilities, revealing the fundamental role of logic in broad reasoning, as well as the effectiveness of LogiGAN. Ablation studies on LogiGAN components reveal the relative orthogonality between linguistic and logic abilities and suggest that reflective thinking's facilitation effect might also generalize to machine learning. Jung-Hee Kim · Junhwa Hur · Tien Phuoc Nguyen · Seong-Gyun Jeong We present a self-supervised depth estimation approach using a unified volumetric feature fusion for surround-view images. Given a set of surround-view images, our method constructs a volumetric feature map by extracting image feature maps from surround-view images and fuse the feature maps into a shared, unified 3D voxel space. The volumetric feature map then can be used for estimating a depth map at each surround view by projecting it into an image coordinate. A volumetric feature contains 3D information at its local voxel coordinate; thus our method can also synthesize a depth map at arbitrary rotated viewpoints by projecting the volumetric feature map into the target viewpoints. Furthermore, assuming static camera extrinsics in the multi-camera system, we propose to estimate a canonical camera motion from the volumetric feature map. Our method leverages 3D spatio- temporal context to learn metric-scale depth and the canonical camera motion in a self-supervised manner. Our method outperforms the prior arts on DDAD and nuScenes datasets, especially estimating more accurate metric-scale depth and consistent depth between neighboring views. In this paper, we study the problem of unsupervised object segmentation from single images. We do not introduce a new algorithm, but systematically investigate the effectiveness of existing unsupervised models on challenging real-world images. We firstly introduce four complexity factors to quantitatively measure the distributions of object- and scene-level biases in appearance and geometry for datasets with human annotations. With the aid of these factors, we empirically find that, not surprisingly, existing unsupervised models catastrophically fail to segment generic objects in real-world images, although they can easily achieve excellent performance on numerous simple synthetic datasets, due to the vast gap in objectness biases between synthetic and real images. By conducting extensive experiments on multiple groups of ablated real-world datasets, we ultimately find that the key factors underlying the colossal failure of existing unsupervised models on real-world images are the challenging distributions of object- and scene-level biases in appearance and geometry. Because of this, the inductive biases introduced in existing unsupervised models can hardly capture the diverse object distributions. Our research results suggest that future work should exploit more explicit objectness biases in the network design. Henger Li · Xiaolin Sun · Zizhan Zheng We propose a model-based reinforcement learning framework to derive untargeted poisoning attacks against federated learning (FL) systems. Our framework first approximates the distribution of the clients' aggregated data using model updates from the server. The learned distribution is then used to build a simulator of the FL environment, which is utilized to learn an adaptive attack policy through reinforcement learning. Our framework is capable of learning strong attacks automatically even when the server adopts a robust aggregation rule. We further derive an upper bound on the attacker's performance loss due to inaccurate distribution estimation. Experimental results on real-world datasets demonstrate that the proposed attack framework significantly outperforms state-of-the-art poisoning attacks. This indicates the importance of developing adaptive defenses for FL systems. Meihua Dang · Anji Liu · Guy Van den Broeck Probabilistic circuits (PCs) are a tractable representation of probability distributions allowing for exact and efficient computation of likelihoods and marginals. There has been significant recent progress on improving the scale and expressiveness of PCs. However, PC training performance plateaus as model size increases. We discover that most capacity in existing large PC structures is wasted: fully-connected parameter layers are only sparsely used. We propose two operations: pruning and growing, that exploit the sparsity of PC structures. Specifically, the pruning operation removes unimportant sub-networks of the PC for model compression and comes with theoretical guarantees. The growing operation increases model capacity by increasing the dimensions of latent states. By alternatingly applying pruning and growing, we increase the capacity that is meaningfully used, allowing us to significantly scale up PC learning. Empirically, our learner achieves state-of-the-art likelihoods on MNIST-family image datasets and an Penn Tree Bank language data compared to other PC learners and less tractable deep generative models such as flow-based models and variational autoencoders (VAEs). Robert Hu · Siu Lun Chau · Jaime Ferrando Huertas · Dino Sejdinovic While preference modelling is becoming one of the pillars of machine learning, the problem of preference explanation remains challenging and underexplored. In this paper, we propose \textsc {Pref-SHAP}, a Shapley value-based model explanation framework for pairwise comparison data. We derive the appropriate value functions for preference models and further extend the framework to model and explain \emph{context specific} information, such as the surface type in a tennis game. To demonstrate the utility of \textsc{Pref-SHAP}, we apply our method to a variety of synthetic and real-world datasets and show that richer and more insightful explanations can be obtained over the baseline. Marc-Etienne Brunet · Ashton Anderson · Richard Zemel There has been a significant research effort focused on explaining predictive models, for example through post-hoc explainability and recourse methods. Most of the proposed techniques operate upon a single, fixed, predictive model. However, it is well-known that given a dataset and a predictive task, there may be a multiplicity of models that solve the problem (nearly) equally well. In this work, we investigate the implications of this kind of model indeterminacy on the post-hoc explanations of predictive models. We show how it can lead to explanatory multiplicity, and we explore the underlying drivers. We show how predictive multiplicity, and the related concept of epistemic uncertainty, are not reliable indicators of explanatory multiplicity. We further illustrate how a set of models showing very similar aggregate performance on a test dataset may show large variations in their local explanations, i.e., for a specific input. We explore these effects for Shapley value based explanations on three risk assessment datasets. Our results indicate that model indeterminacy may have a substantial impact on explanations in practice, leading to inconsistent and even contradicting Hengyuan Hu · Samuel Sokota · David Wu · Anton Bakhtin · Andrei Lupu · Brandon Cui · Jakob Foerster Fully cooperative, partially observable multi-agent problems are ubiquitous in the real world. In this paper, we focus on a specific subclass of coordination problems in which humans are able to discover self-explaining deviations (SEDs). SEDs are actions that deviate from the common understanding of what reasonable behavior would be in normal circumstances. They are taken with the intention of causing another agent or other agents to realize, using theory of mind, that the circumstance must be abnormal. We motivate this idea with a real world example and formalize its definition. Next, we introduce an algorithm for improvement maximizing SEDs (IMPROVISED). Lastly, we evaluate IMPROVISED both in an illustrative toy setting and the popular benchmark setting Hanabi, where we show that it can produce so called finesse plays. Samuel Yang-Zhao · Tianyu Wang · Kee Siong Ng We propose a practical integration of logical state abstraction with AIXI, a Bayesian optimality notion for reinforcement learning agents, to significantly expand the model class that AIXI agents can be approximated over to complex history-dependent and structured environments. The state representation and reasoning framework is based on higher-order logic, which can be used to define and enumerate complex features on non-Markovian and structured environments. We address the problem of selecting the right subset of features to form state abstractions by adapting the $\Phi$-MDP optimisation criterion from state abstraction theory. Exact Bayesian model learning is then achieved using a suitable generalisation of Context Tree Weighting over abstract state sequences. The resultant architecture can be integrated with different planning algorithms. Experimental results on controlling epidemics on large-scale contact networks validates the agent's performance. Paterne GAHUNGU · Christopher Lanyon · Mauricio A Álvarez · Engineer Bainomugisha · Michael T Smith · Richard Wilkinson Linear systems occur throughout engineering and the sciences, most notably as differential equations. In many cases the forcing function for the system is unknown, and interest lies in using noisy observations of the system to infer the forcing, as well as other unknown parameters. In differential equations, the forcing function is an unknown function of the independent variables (typically time and space), and can be modelled as a Gaussian process (GP). In this paper we show how the adjoint of a linear system can be used to efficiently infer forcing functions modelled as GPs, after using a truncated basis expansion of the GP kernel. We show how exact conjugate Bayesian inference for the truncated GP can be achieved, in many cases with substantially lower computation than would be required using MCMC methods. We demonstrate the approach on systems of both ordinary and partial differential equations, and show that the basis expansion approach approximates well the true forcing with a modest number of basis vectors. Finally, we show how to infer point estimates for the non-linear model parameters, such as the kernel length-scales, using Bayesian optimisation. Quentin Bertrand · Quentin Klopfenstein · Pierre-Antoine Bannier · Gauthier Gidel · Mathurin Massias We propose a new fast algorithm to estimate any sparse generalized linear model with convex or non-convex separable penalties. Our algorithm is able to solve problems with millions of samples and features in seconds, by relying on coordinate descent, working sets and Anderson acceleration. It handles previously unaddressed models, and is extensively shown to improve state-of-art algorithms. We provide a flexible, scikit-learn compatible package, which easily handles customized datafits and penalties. Jung Yeon Park · Lawson Wong Behavior cloning of expert demonstrations can speed up learning optimal policies in a more sample-efficient way over reinforcement learning. However, the policy cannot extrapolate well to unseen states outside of the demonstration data, creating covariate shift (agent drifting away from demonstrations) and compounding errors. In this work, we tackle this issue by extending the region of attraction around the demonstrations so that the agent can learn how to get back onto the demonstrated trajectories if it veers off-course. We train a generative backwards dynamics model and generate short imagined trajectories from states in the demonstrations. By imitating both demonstrations and these model rollouts, the agent learns the demonstrated paths and how to get back onto these paths. With optimal or near-optimal demonstrations, the learned policy will be both optimal and robust to deviations, with a wider region of attraction. On continuous control domains, we evaluate the robustness when starting from different initial states unseen in the demonstration data. While both our method and other imitation learning baselines can successfully solve the tasks for initial states in the training distribution, our method exhibits considerably more robustness to different initial states. Victor Zhong · Jesse Mu · Luke Zettlemoyer · Edward Grefenstette · Tim Rocktäschel Recent work has shown that augmenting environments with language descriptions improves policy learning. However, for environments with complex language abstractions, learning how to ground language to observations is difficult due to sparse, delayed rewards. We propose Language Dynamics Distillation (LDD), which pretrains a model to predict environment dynamics given demonstrations with language descriptions, and then fine-tunes these language-aware pretrained representations via reinforcement learning (RL). In this way, the model is trained to both maximize expected reward and retain knowledge about how language relates to environment dynamics. On SILG, a benchmark of five tasks with language descriptions that evaluate distinct generalization challenges on unseen environments (NetHack, ALFWorld, RTFM, Messenger, and Touchdown), LDD outperforms tabula-rasa RL, VAE pretraining, and methods that learn from unlabeled demonstrations in inverse RL and reward shaping with pretrained experts. In our analyses, we show that language descriptions in demonstrations improve sample-efficiency and generalization across environments, and that dynamics modeling with expert demonstrations is more effective than with non-experts. Hang Gao · Ruilong Li · Shubham Tulsiani · Bryan Russell · Angjoo Kanazawa We study the recent progress on dynamic view synthesis (DVS) from monocular video. Though existing approaches have demonstrated impressive results, we show a discrepancy between the practical capture process and the existing experimental protocols, which effectively leaks in multi-view signals during training. We define effective multi-view factors (EMFs) to quantify the amount of multi-view signal present in the input capture sequence based on the relative camera-scene motion. We introduce two new metrics: co-visibility masked image metrics and correspondence accuracy, which overcome the issue in existing protocols. We also propose a new iPhone dataset that includes more diverse real-life deformation sequences. Using our proposed experimental protocol, we show that the state-of-the-art approaches observe a 1-2 dB drop in masked PSNR in the absence of multi-view cues and 4-5 dB drop when modeling complex motion. Code and data can be found at http://hangg7.com/ Eleonora Misino · Giuseppe Marra · Emanuele Sansone We present VAEL, a neuro-symbolic generative model integrating variational autoencoders (VAE) with the reasoning capabilities of probabilistic logic (L) programming. Besides standard latent subsymbolic variables, our model exploits a probabilistic logic program to define a further structured representation, which is used for logical reasoning. The entire process is end-to-end differentiable. Once trained, VAEL can solve new unseen generation tasks by (i) leveraging the previously acquired knowledge encoded in the neural component and (ii) exploiting new logical programs on the structured latent space. Our experiments provide support on the benefits of this neuro-symbolic integration both in terms of task generalization and data efficiency. To the best of our knowledge, this work is the first to propose a general-purpose end-to-end framework integrating probabilistic logic programming into a deep generative model. Yossi Gandelsman · Yu Sun · Xinlei Chen · Alexei Efros Test-time training adapts to a new test distribution on the fly by optimizing a model for each test input using self-supervision.In this paper, we use masked autoencoders for this one-sample learning problem.Empirically, our simple method improves generalization on many visual benchmarks for distribution shifts.Theoretically, we characterize this improvement in terms of the bias-variance Jayesh Gupta · Sai Vemprala · Ashish Kapoor Complex systems are often decomposed into modular subsystems for engineering tractability. Although various equation based white-box modeling techniques make use of such structure, learning based methods have yet to incorporate these ideas broadly. We present a modular simulation framework for modeling homogeneous multibody dynamical systems, which combines ideas from graph neural networks and neural differential equations. We learn to model the individual dynamical subsystem as a neural ODE module. Full simulation of the composite system is orchestrated via spatio-temporal message passing between these modules. An arbitrary number of modules can be combined to simulate systems of a wide variety of coupling topologies. We evaluate our framework on a variety of systems and show that message passing allows coordination between multiple modules over time for accurate predictions and in certain cases, enables zero-shot generalization to new system configurations. Furthermore, we show that our models can be transferred to new system configurations with lower data requirement and training effort, compared to those trained from scratch. Bálint Máté · Samuel Klein · Tobias Golling · François Fleuret The two key characteristics of a normalizing flow is that it is invertible (in particular, dimension preserving) and that it monitors the amount by which it changes the likelihood of data points as samples are propagated along the network. Recently, multiple generalizations of normalizing flows have been introduced that relax these two conditions \citep{nielsen2020survae,huang2020augmented}. On the other hand, neural networks only perform a forward pass on the input, there is neither a notion of an inverse of a neural network nor is there one of its likelihood contribution. In this paper we argue that certain neural network architectures can be enriched with a stochastic inverse pass and that their likelihood contribution can be monitored in a way that they fall under the generalized notion of a normalizing flow mentioned above. We term this enrichment \emph{flowification}. We prove that neural networks only containing linear and convolutional layers and invertible activations such as LeakyReLU can be flowified and evaluate them in the generative setting on image datasets. Jianwei Yang · Chunyuan Li · Xiyang Dai · Jianfeng Gao We propose focal modulation networks (FocalNets in short), where self-attention (SA) is completely replaced by a focal modulation module for modeling token interactions in vision. Focal modulation comprises three components: $(i)$ hierarchical contextualization, implemented using a stack of depth-wise convolutional layers, to encode visual contexts from short to long ranges, $(ii)$ gated aggregation to selectively gather contexts for each query token based on its content, and $(iii)$ element-wise modulation or affine transformation to fuse the aggregated context into the query. Extensive experiments show FocalNets outperform the state-of-the-art SA counterparts (e.g., Swin and Focal Transformers) with similar computational cost on the tasks of image classification, object detection, and semantic segmentation. Specifically, FocalNets with tiny and base size achieve 82.3% and 83.9% top-1 accuracy on ImageNet-1K. After pretrained on ImageNet-22K, it attains 86.5% and 87.3% top-1 accuracy when finetuned with resolution 224$^2$ and 384$^2$, respectively. When transferred to downstream tasks, FocalNets exhibit clear superiority. For object detection with Mask R-CNN, FocalNet base trained with 1$\times$ outperforms the Swin counterpart by 2.1 points and already surpasses Swin trained with 3$\times$ schedule (49.0 v.s. 48.5). For semantic segmentation with UPerNet, FocalNet base at single-scale outperforms Swin by 2.4, and beats Swin at multi-scale (50.5 v.s. 49.7). Using large FocalNet and mask2former, we achieve 58.5 mIoU for ADE20K semantic segmentation, and 57.9 PQ for COCO Panoptic Segmentation. These results render focal modulation a favorable alternative to SA for effective and efficient visual modeling. Code is available at: https: Zitai Wang · Qianqian Xu · Zhiyong Yang · Yuan He · Xiaochun Cao · Qingming Huang Traditional machine learning follows a close-set assumption that the training and test set share the same label space. While in many practical scenarios, it is inevitable that some test samples belong to unknown classes (open-set). To fix this issue, Open-Set Recognition (OSR), whose goal is to make correct predictions on both close-set samples and open-set samples, has attracted rising attention. In this direction, the vast majority of literature focuses on the pattern of open-set samples. However, how to evaluate model performance in this challenging task is still unsolved. In this paper, a systematic analysis reveals that most existing metrics are essentially inconsistent with the aforementioned goal of OSR: (1) For metrics extended from close-set classification, such as Open-set F-score, Youden's index, and Normalized Accuracy, a poor open-set prediction can escape from a low performance score with a superior close-set prediction. (2) Novelty detection AUC, which measures the ranking performance between close-set and open-set samples, ignores the close-set performance. To fix these issues, we propose a novel metric named OpenAUC. Compared with existing metrics, OpenAUC enjoys a concise pairwise formulation that evaluates open-set performance and close-set performance in a coupling manner. Further analysis shows that OpenAUC is free from the aforementioned inconsistency properties. Finally, an end-to-end learning method is proposed to minimize the OpenAUC risk, and the experimental results on popular benchmark datasets speak to its Jeff Z. HaoChen · Colin Wei · Ananya Kumar · Tengyu Ma Contrastive learning is a highly effective method for learning representations from unlabeled data. Recent works show that contrastive representations can transfer across domains, leading to simple state-of-the-art algorithms for unsupervised domain adaptation. In particular, a linear classifier trained to separate the representations on the source domain can also predict classes on the target domain accurately, even though the representations of the two domains are far from each other. We refer to this phenomenon as linear transferability. This paper analyzes when and why contrastive representations exhibit linear transferability in a general unsupervised domain adaptation setting. We prove that linear transferability can occur when data from the same class in different domains (e.g., photo dogs and cartoon dogs) are more related with each other than data from different classes in different domains (e.g., photo dogs and cartoon cats) are. Our analyses are in a realistic regime where the source and target domains can have unbounded density ratios and be weakly related, and they have distant representations across domains. Michael Zhang · Christopher Ré While large pretrained foundation models (FMs) have shown remarkable zero-shot classification robustness to dataset-level distribution shifts, their robustness to subpopulation or group shifts is relatively underexplored. We study this problem, and find that foundation models such as CLIP may not be robust to various group shifts. Across 9 robustness benchmarks, zero-shot classification with their embeddings results in gaps of up to 80.7 percentage points (pp) between average and worst-group accuracy. Unfortunately, existing methods to improve robustness require retraining, which can be prohibitively expensive on large foundation models. We also find that efficient ways to improve model inference (e.g. via adapters, lightweight networks that transform FM embeddings) do not consistently improve and can sometimes hurt group robustness compared to zero-shot. We therefore develop an adapter training strategy to effectively and efficiently improve FM group robustness. Our motivating observation is that while poor robustness results from groups in the same class being embedded far apart in the foundation model "embedding space," standard adapter training may not actually bring these points closer together. We thus propose contrastive adapting, which contrastively trains adapters to bring sample embeddings close to both their ground-truth class embeddings and same-class sample embeddings. Across the 9 robustness benchmarks, contrastive adapting consistently improves group robustness, raising worst-group accuracy by 8.5 to 56.0 pp over zero-shot. Our approach is also efficient, doing so without any FM finetuning and only a fixed set of FM embeddings. On popular benchmarks such as Waterbirds and CelebA, this leads to worst-group accuracy comparable to state-of-the-art methods, while only training <1% of the model parameters. Naoki Nishikawa · Taiji Suzuki · Atsushi Nitanda · Denny Wu Analysis of neural network optimization in the mean-field regime is important as the setting allows for feature learning. Existing theory has been developed mainly for neural networks in finite dimensions, i.e., each neuron has a finite-dimensional parameter. However, the setting of infinite-dimensional input naturally arises in machine learning problems such as nonparametric functional data analysis and graph classification. In this paper, we develop a new mean-field analysis of two-layer neural network in an infinite-dimensional parameter space. We first give a generalization error bound, which shows that the regularized empirical risk minimizer properly generalizes when the data size is sufficiently large, despite the neurons being infinite-dimensional. Next, we present two gradient-based optimization algorithms for infinite-dimensional mean-field networks, by extending the recently developed particle optimization framework to the infinite-dimensional setting. We show that the proposed algorithms converge to the (regularized) global optimal solution, and moreover, their rates of convergence are of polynomial order in the online setting and exponential order in the finite sample setting, respectively. To our knowledge this is the first quantitative global optimization guarantee of neural network on infinite-dimensional input and in the presence of feature learning. Bohang Zhang · Du Jiang · Di He · Liwei Wang Designing neural networks with bounded Lipschitz constant is a promising way to obtain certifiably robust classifiers against adversarial examples. However, the relevant progress for the important $\ ell_\infty$ perturbation setting is rather limited, and a principled understanding of how to design expressive $\ell_\infty$ Lipschitz networks is still lacking. In this paper, we bridge the gap by studying certified $\ell_\infty$ robustness from a novel perspective of representing Boolean functions. We derive two fundamental impossibility results that hold for any standard Lipschitz network: one for robust classification on finite datasets, and the other for Lipschitz function approximation. These results identify that networks built upon norm-bounded affine layers and Lipschitz activations intrinsically lose expressive power even in the two-dimensional case, and shed light on how recently proposed Lipschitz networks (e.g., GroupSort and $\ell_\infty$-distance nets) bypass these impossibilities by leveraging order statistic functions. Finally, based on these insights, we develop a unified Lipschitz network that generalizes prior works, and design a practical version that can be efficiently trained (making certified robust training free). Extensive experiments show that our approach is scalable, efficient, and consistently yields better certified robustness across multiple datasets and perturbation radii than prior Lipschitz networks. Sung Woo Park · Hyomin Kim · Kyungjae Lee · Junseok Kwon In recent years, the neural stochastic differential equation (NSDE) has gained attention for modeling stochastic representations with great success in various types of applications. However, it typically loses expressivity when the data representation is manifold-valued. To address this issue, we suggest a principled method for expressing the stochastic representation with the Riemannian neural SDE (RNSDE), which extends the conventional Euclidean NSDE. Empirical results for various tasks demonstrate that the proposed method significantly outperforms baseline methods. Sketch design concepts are recurring patterns found in parametric CAD sketches. Though rarely explicitly formalized by the CAD designers, these concepts are implicitly used in design for modularity and regularity. In this paper, we propose a learning based approach that discovers the modular concepts by induction over raw sketches. We propose the dual implicit-explicit representation of concept structures that allows implicit detection and explicit generation, and the separation of structure generation and parameter instantiation for parameterized concept generation, to learn modular concepts by end-to-end training. We demonstrate the design concept learning on a large scale CAD sketch dataset and show its applications for design intent interpretation and auto-completion. Outstanding Paper Matt Deitke · Eli VanderBilt · Alvaro Herrasti · Luca Weihs · Kiana Ehsani · Jordi Salvador · Winson Han · Eric Kolve · Aniruddha Kembhavi · Roozbeh Mottaghi Massive datasets and high-capacity models have driven many recent advancements in computer vision and natural language understanding. This work presents a platform to enable similar success stories in Embodied AI. We propose ProcTHOR, a framework for procedural generation of Embodied AI environments. ProcTHOR enables us to sample arbitrarily large datasets of diverse, interactive, customizable, and performant virtual environments to train and evaluate embodied agents across navigation, interaction, and manipulation tasks. We demonstrate the power and potential of ProcTHOR via a sample of 10,000 generated houses and a simple neural model. Models trained using only RGB images on ProcTHOR, with no explicit mapping and no human task supervision produce state-of-the-art results across 6 embodied AI benchmarks for navigation, rearrangement, and arm manipulation, including the presently running Habitat 2022, AI2-THOR Rearrangement 2022, and RoboTHOR challenges. We also demonstrate strong 0-shot results on these benchmarks, via pre-training on ProcTHOR with no fine-tuning on the downstream benchmark, often beating previous state-of-the-art systems that access the downstream training data. Rishi Bommasani · Kathleen A. Creel · Ananya Kumar · Dan Jurafsky · Percy Liang As the scope of machine learning broadens, we observe a recurring theme of algorithmic monoculture: the same systems, or systems that share components (e.g. datasets, models), are deployed by multiple decision-makers. While sharing offers advantages like amortizing effort, it also has risks. We introduce and formalize one such risk, outcome homogenization: the extent to which particular individuals or groups experience the same outcomes across different deployments. If the same individuals or groups exclusively experience undesirable outcomes, this may institutionalize systemic exclusion and reinscribe social hierarchy. We relate algorithmic monoculture and outcome homogenization by proposing the component sharing hypothesis: if algorithmic systems are increasingly built on the same data or models, then they will increasingly homogenize outcomes. We test this hypothesis on algorithmic fairness benchmarks, demonstrating that increased data-sharing reliably exacerbates homogenization and individual-level effects generally exceed group-level effects. Further, given the current regime in AI of foundation models, i.e. pretrained models that can be adapted to myriad downstream tasks, we test whether model-sharing homogenizes outcomes across tasks. We observe mixed results: we find that for both vision and language settings, the specific methods for adapting a foundation model significantly influence the degree of outcome homogenization. We also identify societal challenges that inhibit the measurement, diagnosis, and rectification of outcome homogenization in deployed machine learning systems. Anthony Corso · Sydney Katz · Craig Innes · Xin Du · Subramanian Ramamoorthy · Mykel J Kochenderfer Modern autonomous systems rely on perception modules to process complex sensor measurements into state estimates. These estimates are then passed to a controller, which uses them to make safety-critical decisions. It is therefore important that we design perception systems to minimize errors that reduce the overall safety of the system. We develop a risk-driven approach to designing perception systems that accounts for the effect of perceptual errors on the performance of the fully-integrated, closed-loop system. We formulate a risk function to quantify the effect of a given perceptual error on overall safety, and show how we can use it to design safer perception systems by including a risk-dependent term in the loss function and generating training data in risk-sensitive regions. We evaluate our techniques on a realistic vision-based aircraft detect and avoid application and show that risk-driven design reduces collision risk by 37% over a baseline Nataniel Ruiz · Sarah Bargal · Cihang Xie · Kate Saenko · Stan Sclaroff Modern deep neural networks tend to be evaluated on static test sets. One shortcoming of this is the fact that these deep neural networks cannot be easily evaluated for robustness issues with respect to specific scene variations. For example, it is hard to study the robustness of these networks to variations of object scale, object pose, scene lighting and 3D occlusions. The main reason is that collecting real datasets with fine-grained naturalistic variations of sufficient scale can be extremely time-consuming and expensive. In this work, we present Counterfactual Simulation Testing, a counterfactual framework that allows us to study the robustness of neural networks with respect to some of these naturalistic variations by building realistic synthetic scenes that allow us to ask counterfactual questions to the models, ultimately providing answers to questions such as "Would your classification still be correct if the object were viewed from the top?" or "Would your classification still be correct if the object were partially occluded by another object?". Our method allows for a fair comparison of the robustness of recently released, state-of-the-art Convolutional Neural Networks and Vision Transformers, with respect to these naturalistic variations. We find evidence that ConvNext is more robust to pose and scale variations than Swin, that ConvNext generalizes better to our simulated domain and that Swin handles partial occlusion better than ConvNext. We also find that robustness for all networks improves with network scale and with data scale and variety. We release the Naturalistic Variation Object Dataset (NVD), a large simulated dataset of 272k images of everyday objects with naturalistic variations such as object pose, scale, viewpoint, lighting and occlusions. Project page: https://counterfactualsimulation.github.io Peizhong Ju · Xiaojun Lin · Ness Shroff In this paper, we study the generalization performance of overparameterized 3-layer NTK models. We show that, for a specific set of ground-truth functions (which we refer to as the "learnable set"), the test error of the overfitted 3-layer NTK is upper bounded by an expression that decreases with the number of neurons of the two hidden layers. Different from 2-layer NTK where there exists only one hidden-layer, the 3-layer NTK involves interactions between two hidden-layers. Our upper bound reveals that, between the two hidden-layers, the test error descends faster with respect to the number of neurons in the second hidden-layer (the one closer to the output) than with respect to that in the first hidden-layer (the one closer to the input). We also show that the learnable set of 3-layer NTK without bias is no smaller than that of 2-layer NTK models with various choices of bias in the neurons. However, in terms of the actual generalization performance, our results suggest that 3-layer NTK is much less sensitive to the choices of bias than 2-layer NTK, especially when the input dimension is large. Kaining Zhang · Liu Liu · Min-Hsiu Hsieh · Dacheng Tao Variational quantum circuits have been widely employed in quantum simulation and quantum machine learning in recent years. However, quantum circuits with random structures have poor trainability due to the exponentially vanishing gradient with respect to the circuit depth and the qubit number. This result leads to a general standpoint that deep quantum circuits would not be feasible for practical tasks. In this work, we propose an initialization strategy with theoretical guarantees for the vanishing gradient problem in general deep quantum circuits. Specifically, we prove that under proper Gaussian initialized parameters, the norm of the gradient decays at most polynomially when the qubit number and the circuit depth increase. Our theoretical results hold for both the local and the global observable cases, where the latter was believed to have vanishing gradients even for very shallow circuits. Experimental results verify our theoretical findings in quantum simulation and quantum chemistry. Biraj Dahal · Alexander Havrilla · Minshuo Chen · Tuo Zhao · Wenjing Liao Deep generative models have experienced great empirical successes in distribution learning. Many existing experiments have demonstrated that deep generative networks can efficiently generate high-dimensional complex data from a low-dimensional easy-to-sample distribution. However, this phenomenon can not be justified by existing theories. The widely held manifold hypothesis speculates that real-world data sets, such as natural images and signals, exhibit low-dimensional geometric structures. In this paper, we take such low-dimensional data structures into consideration by assuming that data distributions are supported on a low-dimensional manifold. We prove approximation and estimation theories of deep generative networks for estimating distributions on a low-dimensional manifold under the Wasserstein-1 loss. We show that the Wasserstein-1 loss converges to zero at a fast rate depending on the intrinsic dimension instead of the ambient data dimension. Our theory leverages the low-dimensional geometric structures in data sets and justifies the practical power of deep generative models. We require no smoothness assumptions on the data distribution which is desirable in practice. Muhammad Firmansyah Kasim · Yi Heng Lim The beauty of physics is that there is usually a conserved quantity in an always-changing system, known as the constant of motion. Finding the constant of motion is important in understanding the dynamics of the system, but typically requires mathematical proficiency and manual analytical work. In this paper, we present a neural network that can simultaneously learn the dynamics of the system and the constants of motion from data. By exploiting the discovered constants of motion, it can produce better predictions on dynamics and can work on a wider range of systems than Hamiltonian-based neural networks. In addition, the training progresses of our method can be used as an indication of the number of constants of motion in a system which could be useful in studying a novel physical Christian Koke · Gitta Kutyniok This work develops a flexible and mathematically sound framework for the design and analysis of graph scattering networks with variable branching ratios and generic functional calculus filters.Spectrally-agnostic stability guarantees for node- and graph-level perturbations are derived; the vertex-set non-preserving case is treated by utilizing recently developed mathematical-physics based tools. Energy propagation through the network layers is investigated and related to truncation stability. New methods of graph-level feature aggregation are introduced and stability of the resulting composite scattering architectures is established. Finally, scattering transforms are extended to edge- and higher order tensorial input. Theoretical results are complemented by numerical investigations: Suitably chosen scattering networks conforming to the developed theory perform better than traditional graph-wavelet based scattering approaches in social network graph classification tasks andsignificantly outperform other graph-based learning approaches to regression of quantum-chemical energies on QM$7$. Felix Biggs · Valentina Zantedeschi · Benjamin Guedj We study the generalisation properties of majority voting on finite ensembles of classifiers, proving margin-based generalisation bounds via the PAC-Bayes theory. These provide state-of-the-art guarantees on a number of classification tasks. Our central results leverage the Dirichlet posteriors studied recently by Zantedeschi et al. (2021) for training voting classifiers; in contrast to that work our bounds apply to non-randomised votes via the use of margins. Our contributions add perspective to the debate on the ``margins theory'' proposed by Schapire et al. (1998) for the generalisation of ensemble classifiers. Teodor Vanislavov Marinov · Mehryar Mohri · Julian Zimmert We revisit the problem of stochastic online learning with feedbackgraphs, with the goal of devising algorithms that are optimal, up toconstants, both asymptotically and in finite time. We show that,surprisingly, the notion of optimal finite-time regret is not auniquely defined property in this context and that, in general, itis decoupled from the asymptotic rate. We discuss alternativechoices and propose a notion of finite-time optimality that we argueis \emph{meaningful}. For that notion, we give an algorithm thatadmits quasi-optimal regret both in finite-time and GUOJUN XIONG · Shufan Wang · Jian Li We consider the online restless bandits with average-reward and multiple actions, where the state of each arm evolves according to a Markov decision process (MDP), and the reward of pulling an arm depends on both the current state of the corresponding MDP and the action taken. Since finding the optimal control is typically intractable for restless bandits, existing learning algorithms are often computationally expensive or with a regret bound that is exponential in the number of arms and states. In this paper, we advocate \textit{index-aware reinforcement learning} (RL) solutions to design RL algorithms operating on a much smaller dimensional subspace by exploiting the inherent structure in restless bandits. Specifically, we first propose novel index policies to address dimensionality concerns, which are provably optimal. We then leverage the indices to develop two low-complexity index-aware RL algorithms, namely, (i) GM-R2MAB, which has access to a generative model; and (ii) UC-R2MAB, which learns the model using an upper confidence style online exploitation method. We prove that both algorithms achieve a sub-linear regret that is only polynomial in the number of arms and states. A key differentiator between our algorithms and existing ones stems from the fact that our RL algorithms contain a novel exploitation that leverages our proposed provably optimal index policies for decision-makings. Provably efficient Model-Based Reinforcement Learning (MBRL) based on optimism or posterior sampling (PSRL) is ensured to attain the global optimality asymptotically by introducing the complexity measure of the model. However, the complexity might grow exponentially for the simplest nonlinear models, where global convergence is impossible within finite iterations. When the model suffers a large generalization error, which is quantitatively measured by the model complexity, the uncertainty can be large. The sampled model that current policy is greedily optimized upon will thus be unsettled, resulting in aggressive policy updates and over-exploration. In this work, we propose Conservative Dual Policy Optimization (CDPO) that involves a Referential Update and a Conservative Update. The policy is first optimized under a reference model, which imitates the mechanism of PSRL while offering more stability. A conservative range of randomness is guaranteed by maximizing the expectation of model value. Without harmful sampling procedures, CDPO can still achieve the same regret as PSRL. More importantly, CDPO enjoys monotonic policy improvement and global optimality simultaneously. Empirical results also validate the exploration efficiency of CDPO. Shubham Bharti · Xuezhou Zhang · Adish Singla · Jerry Zhu We propose a provable defense mechanism against backdoor policies in reinforcement learning under subspace trigger assumption. A backdoor policy is a security threat where an adversary publishes a seemingly well-behaved policy which in fact allows hidden triggers. During deployment, the adversary can modify observed states in a particular way to trigger unexpected actions and harm the agent. We assume the agent does not have the resources to re-train a good policy. Instead, our defense mechanism sanitizes the backdoor policy by projecting observed states to a `safe subspace', estimated from a small number of interactions with a clean (non-triggered) environment. Our sanitized policy achieves $\epsilon$ approximate optimality in the presence of triggers, provided the number of clean interactions is $O\left(\frac{D}{(1-\gamma)^4 \epsilon^2}\right)$ where $\gamma$ is the discounting factor and $D$ is the dimension of state space. Empirically, we show that our sanitization defense performs well on two Atari game environments. Linjian Ma · Edgar Solomonik This work discusses tensor network embeddings, which are random matrices ($S$) with tensor network structure. These embeddings have been used to perform dimensionality reduction of tensor network structured inputs $x$ and accelerate applications such as tensor decomposition and kernel regression. Existing works have designed embeddings for inputs $x$ with specific structures, such as the Kronecker product or Khatri-Rao product, such that the computational cost for calculating $Sx$ is efficient. We provide a systematic way to design tensor network embeddings consisting of Gaussian random tensors, such that for inputs with more general tensor network structures, both the sketch size (row size of $S$) and the sketching computational cost are low.We analyze general tensor network embeddings that can be reduced to a sequence of sketching matrices. We provide a sufficient condition to quantify the accuracy of such embeddings and derive sketching asymptotic cost lower bounds using embeddings that satisfy this condition and have a sketch size lower than any input dimension. We then provide an algorithm to efficiently sketch input data using such embeddings. The sketch size of the embedding used in the algorithm has a linear dependence on the number of sketching dimensions of the input. Assuming tensor contractions are performed with classical dense matrix multiplication algorithms, this algorithm achieves asymptotic cost within a factor of $O(\sqrt{m})$ of our cost lower bound, where $m$ is the sketch size. Further, when each tensor in the input has a dimension that needs to be sketched, this algorithm yields the optimal sketching asymptotic cost. We apply our sketching analysis to inexact tensor decomposition optimization algorithms. We provide a sketching algorithm for CP decomposition that is asymptotically faster than existing work in multiple regimes, and show the optimality of an existing algorithm for tensor train rounding. Huiwen Jia · Cong Shi · Siqian Shen We consider a price-based network revenue management problem with multiple products and multiple reusable resources. Each randomly arriving customer requests a product (service) that needs to occupy a sequence of reusable resources (servers). We adopt an incomplete information setting where the firm does not know the price-demand function for each product and the goal is to dynamically set prices of all products to maximize the total expected revenue of serving customers. We propose novel batched bandit learning algorithms for finding near-optimal pricing policies, and show that they admit a near-optimal cumulative regret bound of $\tilde{O}(J\sqrt{XT})$, where $J$, $X$, and $T$ are the numbers of products, candidate prices, and service periods, respectively. As part of our regret analysis, we develop the first finite-time mixing time analysis of an open network queueing system (i.e., the celebrated Jackson Network), which could be of independent interest. Our numerical studies show that the proposed approaches perform consistently well. Yongsen Mao · Yiming Zhang · Hanxiao Jiang · Angel Chang · Manolis Savva We introduce MultiScan, a scalable RGBD dataset construction pipeline leveraging commodity mobile devices to scan indoor scenes with articulated objects and web-based semantic annotation interfaces to efficiently annotate object and part semantics and part mobility parameters. We use this pipeline to collect 273 scans of 117 indoor scenes containing 10957 objects and 5129 parts. The resulting MultiScan dataset provides RGBD streams with per-frame camera poses, textured 3D surface meshes, richly annotated part-level and object-level semantic labels, and part mobility parameters. We validate our dataset on instance segmentation and part mobility estimation tasks and benchmark methods for these tasks from prior work. Our experiments show that part segmentation and mobility estimation in real 3D scenes remain challenging despite recent progress in 3D object segmentation. Tao Qi · Fangzhao Wu · Chuhan Wu · Lingjuan Lyu · Tong Xu · Hao Liao · Zhongliang Yang · Yongfeng Huang · Xing Xie Vertical federated learning (VFL) is a privacy-preserving machine learning paradigm that can learn models from features distributed on different platforms in a privacy-preserving way. Since in real-world applications the data may contain bias on fairness-sensitive features (e.g., gender), VFL models may inherit bias from training data and become unfair for some user groups. However, existing fair machine learning methods usually rely on the centralized storage of fairness-sensitive features to achieve model fairness, which are usually inapplicable in federated scenarios. In this paper, we propose a fair vertical federated learning framework (FairVFL), which can improve the fairness of VFL models. The core idea of FairVFL is to learn unified and fair representations of samples based on the decentralized feature fields in a privacy-preserving way. Specifically, each platform with fairness-insensitive features first learns local data representations from local features. Then, these local representations are uploaded to a server and aggregated into a unified representation for the target task. In order to learn a fair unified representation, we send it to each platform storing fairness-sensitive features and apply adversarial learning to remove bias from the unified representation inherited from the biased data. Moreover, for protecting user privacy, we further propose a contrastive adversarial learning method to remove private information from the unified representation in server before sending it to the platforms keeping fairness-sensitive features. Experiments on three real-world datasets validate that our method can effectively improve model fairness with user privacy well-protected. Yuko Ishiwaka · Xiao Zeng · Shun Ogawa · Donovan Westwater · Tadayuki Tone · Masaki Nakada Our goal is to synthesize realistic underwater scenes with various fish species in different fish cages, which can be utilized to train computer vision models to automate fish counting and sizing tasks. It is a challenging problem to prepare a sufficiently diverse labeled dataset of images from aquatic environments. We solve this challenge by introducing an adaptive bio-inspired fish simulation. The behavior of caged fish changes based on the species, size and number of fish, and the size and shape of the cage, among other variables. However, a method to autonomously achieve schooling behavior for caged fish did not exist. In this paper, we propose a method for achieving schooling behavior for any given combination of variables, using multi-agent deep reinforcement learning (DRL) in various fish cages in arbitrary environments. Furthermore, to visually reproduce the underwater scene in different locations and seasons, we incorporate a physically-based underwater simulation. Luning Sun · Daniel Huang · Hao Sun · Jian-Xun Wang Nonlinear dynamics are ubiquitous in science and engineering applications, but the physics of most complex systems is far from being fully understood. Discovering interpretable governing equations from measurement data can help us understand and predict the behavior of complex dynamic systems. Although extensive work has recently been done in this field, robustly distilling explicit model forms from very sparse data with considerable noise remains intractable. Moreover, quantifying and propagating the uncertainty of the identified system from noisy data is challenging, and relevant literature is still limited. To bridge this gap, we develop a novel Bayesian spline learning framework to identify parsimonious governing equations of nonlinear (spatio)temporal dynamics from sparse, noisy data with quantified uncertainty. The proposed method utilizes spline basis to handle the data scarcity and measurement noise, upon which a group of derivatives can be accurately computed to form a library of candidate model terms. The equation residuals are used to inform the spline learning in a Bayesian manner, where approximate Bayesian uncertainty calibration techniques are employed to approximate posterior distributions of the trainable parameters. To promote the sparsity, an iterative sequential-threshold Bayesian learning approach is developed, using the alternative direction optimization strategy to systematically approximate L0 sparsity constraints. The proposed algorithm is evaluated on multiple nonlinear dynamical systems governed by canonical ordinary and partial differential equations, and the merit/superiority of the proposed method is demonstrated by comparison with state-of-the-art methods. Rongzhe Wei · Haoteng YIN · Junteng Jia · Austin Benson · Pan Li Graph neural networks (GNNs) have shown superiority in many prediction tasks over graphs due to their impressive capability of capturing nonlinear relations in graph-structured data. However, for node classification tasks, often, only marginal improvement of GNNs has been observed in practice over their linear counterparts. Previous works provide very few understandings of this phenomenon. In this work, we resort to Bayesian learning to give an in-depth investigation of the functions of non-linearity in GNNs for node classification tasks. Given a graph generated from the statistical model CSBM, we observe that the max-a-posterior estimation of a node label given its own and neighbors' attributes consists of two types of non-linearity, the transformation of node attributes and a ReLU-activated feature aggregation from neighbors. The latter surprisingly matches the type of non-linearity used in many GNN models. By further imposing Gaussian assumption on node attributes, we prove that the superiority of those ReLU activations is only significant when the node attributes are far more informative than the graph structure, which nicely explains previous empirical observations. A similar argument is derived when there is a distribution shift of node attributes between the training and testing datasets. Finally, we verify our theory on both synthetic and real-world networks. Our code is available at https://github.com/Graph-COM/Bayesian_inference_based_GNN.git. Benjamin Coleman · Santiago Segarra · Alexander Smola · Anshumali Shrivastava Graph search is one of the most successful algorithmic trends in near neighbor search. Several of the most popular and empirically successful algorithms are, at their core, a greedy walk along a pruned near neighbor graph. However, graph traversal applications often suffer from poor memory access patterns, and near neighbor search is no exception to this rule. Our measurements show that popular search indices such as the hierarchical navigable small-world graph (HNSW) can have poor cache miss performance. To address this issue, we formulate the graph traversal problem as a cache hit maximization task and propose multiple graph reordering as a solution. Graph reordering is a memory layout optimization that groups commonly-accessed nodes together in memory. We mathematically formalize the connection between the graph layout and the cache complexity of search. We present exhaustive experiments applying several reordering algorithms to a leading graph-based near neighbor method based on the HNSW index. We find that reordering improves the query time by up to 40%, we present analysis and improvements for existing graph layout methods, and we demonstrate that the time needed to reorder the graph is negligible compared to the time required to construct the index. Avrim Blum · Omar Montasser · Greg Shakhnarovich · Hongyang Zhang We present an oracle-efficient algorithm for boosting the adversarial robustness of barely robust learners. Barely robust learning algorithms learn predictors that are adversarially robust only on a small fraction $\beta \ll 1$ of the data distribution. Our proposed notion of barely robust learning requires robustness with respect to a ``larger'' perturbation set; which we show is necessary for strongly robust learning, and that weaker relaxations are not sufficient for strongly robust learning. Our results reveal a qualitative and quantitative equivalence between two seemingly unrelated problems: strongly robust learning and barely robust learning. In this paper, we study the gyrovector space structure (gyro-structure) of matrix manifolds. Our work is motivated by the success of hyperbolic neural networks (HNNs) that have demonstrated impressive performance in a variety of applications. At the heart of HNNs is the theory of gyrovector spaces that provides a powerful tool for studying hyperbolic geometry. Here we focus on two matrix manifolds, i.e., Symmetric Positive Definite (SPD) and Grassmann manifolds, and consider connecting the Riemannian geometry of these manifolds with the basic operations, i.e., the binary operation and scalar multiplication on gyrovector spaces. Our work reveals some interesting facts about SPD and Grassmann manifolds. First, SPD matrices with an Affine-Invariant (AI) or a Log-Euclidean (LE) geometry have rich structure with strong connection to hyperbolic geometry. Second, linear subspaces, when equipped with our proposed basic operations, form what we call gyrocommutative and gyrononreductive gyrogroups. Furthermore, they share remarkable analogies with gyrovector spaces. We demonstrate the applicability of our approach for human activity understanding and question answering. ruinan Jin · Xingkang He · Lang Chen · Difei Cheng · Vijay Gupta Understanding convergence of SGD-based optimization algorithms can help deal with enormous machine learning problems. To ensure last-iterate convergence of SGD and momentum-based SGD (mSGD), the existing studies usually constrain the step size $\epsilon_{n}$ to decay as $\sum_{n=1}^{+\infty}\epsilon_{n}^{2}<+\infty$, which however is rather conservative and may lead to slow convergence in the early stage of the iteration. In this paper, we relax this requirement by studying an alternate step size for the mSGD. First, we relax the requirement of the decay on step size to $\sum_{n=1}^{+ \infty}\epsilon_{n}^{2+\eta_{0}}<+\infty\ (0\le\eta_{0}<1/2)$. This implies that a larger step size, such as $\epsilon_{n}=\frac{1}{\sqrt{n}}$ can be utilized for accelerating the mSGD in the early stage. Under this new step size and some common conditions, we prove that the gradient norm of mSGD for non-convex loss functions asymptotically decays to zero. In addition, we show that this step size can indeed help make the convergence into a neighborhood of the stationary points quicker in the early stage. In addition, we establish the convergence of mSGD under a constant step size $\ epsilon_n\equiv\epsilon>0$ by removing the common requirement in the literature on the strong convexity of the loss function. Some experiments are given to illustrate the developed results. Madeline Chantry · Shruti Vyas · Hamid Palangi · Yogesh Rawat · Vibhav Vineet Joint visual and language modeling on large-scale datasets has recently shown good progress in multi-modal tasks when compared to single modal learning. However, robustness of these approaches against real-world perturbations has not been studied. In this work, we perform the first extensive robustness study of video-language models against various real-world perturbations. We focus on text-to-video retrieval and propose two large-scale benchmark datasets, MSRVTT-P and YouCook2-P, which utilize 90 different visual and 35 different text perturbations. The study reveals some interesting initial findings from the studied models: 1) models are more robust when text is perturbed versus when video is perturbed, 2) models that are pre-trained are more robust than those trained from scratch, 3) models attend more to scene and objects rather than motion and action. We hope this study will serve as a benchmark and guide future research in robust video-language learning. The benchmark introduced in this study along with the code and datasets is available at https://bit.ly/3CNOly4. Seiji Maekawa · Koki Noda · Yuya Sasaki · makoto onizuka Graph Neural Networks (GNNs) have achieved great success on a node classification task. Despite the broad interest in developing and evaluating GNNs, they have been assessed with limited benchmark datasets. As a result, the existing evaluation of GNNs lacks fine-grained analysis from various characteristics of graphs. Motivated by this, we conduct extensive experiments with a synthetic graph generator that can generate graphs having controlled characteristics for fine-grained analysis. Our empirical studies clarify the strengths and weaknesses of GNNs from four major characteristics of real-world graphs with class labels of nodes, i.e., 1) class size distributions (balanced vs. imbalanced), 2) edge connection proportions between classes (homophilic vs. heterophilic), 3) attribute values (biased vs. random), and 4) graph sizes (small vs. large). In addition, to foster future research on GNNs, we publicly release our codebase that allows users to evaluate various GNNs with various graphs. We hope this work offers interesting insights for future research. Yuanfeng Ji · Haotian Bai · Chongjian GE · Jie Yang · Ye Zhu · Ruimao Zhang · Zhen Li · Lingyan Zhanng · Wanling Ma · Xiang Wan · Ping Luo Despite the considerable progress in automatic abdominal multi-organ segmentation from CT/MRI scans in recent years, a comprehensive evaluation of the models' capabilities is hampered by the lack of a large-scale benchmark from diverse clinical scenarios. Constraint by the high cost of collecting and labeling 3D medical data, most of the deep learning models to date are driven by datasets with a limited number of organs of interest or samples, which still limits the power of modern deep models and makes it difficult to provide a fully comprehensive and fair estimate of various methods. To mitigate the limitations, we present AMOS, a large-scale, diverse, clinical dataset for abdominal organ segmentation. AMOS provides 500 CT and 100 MRI scans collected from multi-center, multi-vendor, multi-modality, multi-phase, multi-disease patients, each with voxel-level annotations of 15 abdominal organs, providing challenging examples and test-bed for studying robust segmentation algorithms under diverse targets and scenarios. We further benchmark several state-of-the-art medical segmentation models to evaluate the status of the existing methods on this new challenging dataset. We have made our datasets, benchmark servers, and baselines publicly available, and hope to inspire future research. Information can be found at https://amos22.grand-challenge.org. Archit Bansal · Danny Stoll · Maciej Janowski · Arber Zela · Frank Hutter The past few years have seen the development of many benchmarks for Neural Architecture Search (NAS), fueling rapid progress in NAS research. However, recent work, which shows that good hyperparameter settings can be more important than using the best architecture, calls for a shift in focus towards Joint Architecture and Hyperparameter Search (JAHS). Therefore, we present JAHS-Bench-201, the first collection of surrogate benchmarks for JAHS, built to also facilitate research on multi-objective, cost-aware and (multi) multi-fidelity optimization algorithms. To the best of our knowledge, JAHS-Bench-201 is based on the most extensive dataset of neural network performance data in the public domain. It is composed of approximately 161 million data points and 20 performance metrics for three deep learning tasks, while featuring a 14-dimensional search and fidelity space that extends the popular NAS-Bench-201 space. With JAHS-Bench-201, we hope to democratize research on JAHS and lower the barrier to entry of an extremely compute intensive field, e.g., by reducing the compute time to run a JAHS algorithm from 5 days to only a few seconds. Huaxiu Yao · Caroline Choi · Bochuan Cao · Yoonho Lee · Pang Wei Koh · Chelsea Finn Distribution shifts occur when the test distribution differs from the training distribution, and can considerably degrade performance of machine learning models deployed in the real world. While recent works have studied robustness to distribution shifts, distribution shifts arising from the passage of time have the additional structure of timestamp metadata. Real-world examples of such shifts are underexplored, and it is unclear whether existing models can leverage trends in past distribution shifts to reliably extrapolate into the future. To address this gap, we curate Wild-Time, a benchmark of 5 datasets that reflect temporal distribution shifts arising in a variety of real-world applications, including drug discovery, patient prognosis, and news classification. On these datasets, we systematically benchmark 13 approaches with various inductive biases. We evaluate methods in domain-generalization, continual learning, self-supervised learning, and ensemble learning, which leverage timestamps to extract the common structure of the distribution shifts. We extend several domain-generalization methods to the temporal distribution shift setting by treating windows of time as different domains. Finally, we propose two evaluation strategies to evaluate model performance under temporal distribution shifts---evaluation with a fixed time split (Eval-Fix) and evaluation with a data stream (Eval-Stream). Eval-Fix, our primary evaluation strategy, aims to provide a simple evaluation protocol for the broader machine learning community, while Eval-Stream serves as a complementary benchmark for continual learning approaches. Our experiments demonstrate that existing methods are limited in tackling temporal distribution shift: across all settings, we observe an average performance drop of 20% from in-distribution to out-of-distribution data. Farimah Poursafaei · Shenyang Huang · Kellin Pelrine · Reihaneh Rabbany Despite the prevalence of recent success in learning from static graphs, learning from time-evolving graphs remains an open challenge. In this work, we design new, more stringent evaluation procedures for link prediction specific to dynamic graphs, which reflect real-world considerations, to better compare the strengths and weaknesses of methods. First, we create two visualization techniques to understand the reoccurring patterns of edges over time and show that many edges reoccur at later time steps. Based on this observation, we propose a pure memorization-based baseline called EdgeBank. EdgeBank achieves surprisingly strong performance across multiple settings which highlights that the negative edges used in the current evaluation are easy. To sample more challenging negative edges, we introduce two novel negative sampling strategies that improve robustness and better match real-world applications. Lastly, we introduce six new dynamic graph datasets from a diverse set of domains missing from current benchmarks, providing new challenges and opportunities for future research. Our code repository is accessible at https://github.com/fpour/DGB.git. Joshua Albrecht · Abraham Fetterman · Bryden Fogelman · Ellie Kitanidis · Bartosz Wróblewski · Nicole Seo · Michael Rosenthal · Maksis Knutins · Zack Polizzi · James Simon · Kanjun Qiu Despite impressive successes, deep reinforcement learning (RL) systems still fall short of human performance on generalization to new tasks and environments that differ from their training. As a benchmark tailored for studying RL generalization, we introduce Avalon, a set of tasks in which embodied agents in highly diverse procedural 3D worlds must survive by navigating terrain, hunting or gathering food, and avoiding hazards. Avalon is unique among existing RL benchmarks in that the reward function, world dynamics, and action space are the same for every task, with tasks differentiated solely by altering the environment; its 20 tasks, ranging in complexity from eat and throw to hunt and navigate, each create worlds in which the agent must perform specific skills in order to survive. This setup enables investigations of generalization within tasks, between tasks, and to compositional tasks that require combining skills learned from previous tasks. Avalon includes a highly efficient simulator, a library of baselines, and a benchmark with scoring metrics evaluated against hundreds of hours of human performance, all of which are open-source and publicly available. We find that standard RL baselines make progress on most tasks but are still far from human performance, suggesting Avalon is challenging enough to advance the quest for generalizable RL. Christopher Bamford · Minqi Jiang · Mikayel Samvelyan · Tim Rocktäschel Progress in reinforcement learning (RL) research is often driven by the design of new, challenging environments---a costly undertaking requiring skills orthogonal to that of a typical machine learning researcher. The complexity of environment development has only increased with the rise of procedural-content generation (PCG) as the prevailing paradigm for producing varied environments capable of testing the robustness and generalization of RL agents. Moreover, existing environments often require complex build processes, making reproducing results difficult. To address these issues, we introduce GriddlyJS, a web-based Integrated Development Environment (IDE) based on the Griddly engine. GriddlyJS allows researchers to easily design and debug arbitrary, complex PCG grid-world environments, as well as visualize, evaluate, and record the performance of trained agent models. By connecting the RL workflow to the advanced functionality enabled by modern web standards, GriddlyJS allows publishing interactive agent-environment demos that reproduce experimental results directly to the web. To demonstrate the versatility of GriddlyJS, we use it to quickly develop a complex compositional puzzle-solving environment alongside arbitrary human-designed environment configurations and their solutions for use in a automatic curriculum learning and offline RL context. The GriddlyJS IDE is open source and freely available at https://griddly.ai. Li Siyao · Yuhang Li · Bo Li · Chao Dong · Ziwei Liu · Chen Change Loy Visual correspondence of 2D animation is the core of many applications and deserves careful study. Existing correspondence datasets for 2D cartoon suffer from simple frame composition and monotonic movements, making them insufficient to simulate real animations. In this work, we present a new 2D animation visual correspondence dataset, AnimeRun, by converting open source 3D movies to full scenes in 2D style, including simultaneous moving background and interactions of multiple subjects. Statistics show that our proposed dataset not only resembles real anime more in image composition, but also possesses richer and more complex motion patterns compared to existing datasets. With this dataset, we establish a comprehensive benchmark by evaluating several existing optical flow and segment matching methods, and analyze shortcomings of these methods on animation data. Data are available at https://lisiyao21.github.io/projects/AnimeRun. Kaizhi Zheng · Xiaotong Chen · Odest Chadwicke Jenkins · Xin Wang Benefiting from language flexibility and compositionality, humans naturally intend to use language to command an embodied agent for complex tasks such as navigation and object manipulation. In this work, we aim to fill the blank of the last mile of embodied agents---object manipulation by following human guidance, e.g., “move the red mug next to the box while keeping it upright.” To this end, we introduce an Automatic Manipulation Solver (AMSolver) system and build a Vision-and-Language Manipulation benchmark (VLMbench) based on it, containing various language instructions on categorized robotic manipulation tasks. Specifically, modular rule-based task templates are created to automatically generate robot demonstrations with language instructions, consisting of diverse object shapes and appearances, action types, and motion constraints. We also develop a keypoint-based model 6D-CLIPort to deal with multi-view observations and language input and output a sequence of 6 degrees of freedom (DoF) actions. We hope the new simulator and benchmark will facilitate future research on language-guided robotic manipulation. Piera Riccio · Bill Psomas · Francesco Galati · Francisco Escolano · Thomas Hofmann · Nuria Oliver Augmented Reality or AR filters on selfies have become very popular on social media platforms for a variety of applications, including marketing, entertainment and aesthetics. Given the wide adoption of AR face filters and the importance of faces in our social structures and relations, there is increased interest by the scientific community to analyze the impact of such filters from a psychological, artistic and sociological perspective. However, there are few quantitative analyses in this area mainly due to a lack of publicly available datasets of facial images with applied AR filters. The proprietary, close nature of most social media platforms does not allow users, scientists and practitioners to access the code and the details of the available AR face filters. Scraping faces from these platforms to collect data is ethically unacceptable and should, therefore, be avoided in research. In this paper, we present OpenFilter, a flexible framework to apply AR filters available in social media platforms on existing large collections of human faces. Moreover, we share FairBeauty and B-LFW, two beautified versions of the publicly available FairFace and LFW datasets and we outline insights derived from the analysis of these beautified datasets. Andy Zou · Tristan Xiao · Ryan Jia · Joe Kwon · Mantas Mazeika · Richard Li · Dawn Song · Jacob Steinhardt · Owain Evans · Dan Hendrycks Forecasting future world events is a challenging but valuable task. Forecasts of climate, geopolitical conflict, pandemics and economic indicators help shape policy and decision making. In these domains, the judgment of expert humans contributes to the best forecasts. Given advances in language modeling, can these forecasts be automated? To this end, we introduce Autocast, a dataset containing thousands of forecasting questions and an accompanying news corpus. Questions are taken from forecasting tournaments, ensuring high quality, real-world importance, and diversity. The news corpus is organized by date, allowing us to precisely simulate the conditions under which humans made past forecasts (avoiding leakage from the future). Motivated by the difficulty of forecasting numbers across orders of magnitude (e.g. global cases of COVID-19 in 2022), we also curate IntervalQA, a dataset of numerical questions and metrics for calibration. We test language models on our forecasting task and find that performance is far below a human expert baseline. However, performance improves with increased model size and incorporation of relevant information from the news corpus. In sum, Autocast poses a novel challenge for large language models and improved performance could bring large practical benefits. Laurent Jospin · Allen Antony · Lian Xu · Hamid Laga · Farid Boussaid · Mohammed Bennamoun In stereo vision, self-similar or bland regions can make it difficult to match patches between two images. Active stereo-based methods mitigate this problem by projecting a pseudo-random pattern on the scene so that each patch of an image pair can be identified without ambiguity. However, the projected pattern significantly alters the appearance of the image. If this pattern acts as a form of adversarial noise, it could negatively impact the performance of deep learning-based methods, which are now the de-facto standard for dense stereo vision. In this paper, we propose the Active-Passive SimStereo dataset and a corresponding benchmark to evaluate the performance gap between passive and active stereo images for stereo matching algorithms. Using the proposed benchmark and an additional ablation study, we show that the feature extraction and matching modules of a selection of twenty selected deep learning-based stereo matching methods generalize to active stereo without a problem. However, the disparity refinement modules of three of the twenty architectures (ACVNet, CascadeStereo, and StereoNet) are negatively affected by the active stereo patterns due to their reliance on the appearance of the input images. Carl Doersch · Ankush Gupta · Larisa Markeeva · Adria Recasens · Lucas Smaira · Yusuf Aytar · Joao Carreira · Andrew Zisserman · Yi Yang Generic motion understanding from video involves not only tracking objects, but also perceiving how their surfaces deform and move. This information is useful to make inferences about 3D shape, physical properties and object interactions. While the problem of tracking arbitrary physical points on surfaces over longer video clips has received some attention, no dataset or benchmark for evaluation existed, until now. In this paper, we first formalize the problem, naming it tracking any point (TAP). We introduce a companion benchmark,TAP-Vid, which is composed of both real-world videos with accurate human annotations of point tracks, and synthetic videos with perfect ground-truth point tracks. Central to the construction of our benchmark is a novel semi-automatic crowdsourced pipeline which uses optical flow estimates to compensate for easier, short-term motion like camera shake, allowing annotators to focus on harder sections of the video. We validate our pipeline on synthetic data and propose a simple end-to-end point tracking model, TAP-Net, showing that it outperforms all prior methods on our benchmark when trained on synthetic data. Gyubok Lee · Hyeonji Hwang · Seongsu Bae · Yeonsu Kwon · Woncheol Shin · Seongjun Yang · Minjoon Seo · Jong-Yeup Kim · Edward Choi We present a new text-to-SQL dataset for electronic health records (EHRs). The utterances were collected from 222 hospital staff, including physicians, nurses, insurance review and health records teams, and more. To construct the QA dataset on structured EHR data, we conducted a poll at a university hospital and templatized the responses to create seed questions. Then, we manually linked them to two open-source EHR databases—MIMIC-III and eICU—and included them with various time expressions and held-out unanswerable questions in the dataset, which were all collected from the poll. Our dataset poses a unique set of challenges: the model needs to 1) generate SQL queries that reflect a wide range of needs in the hospital, including simple retrieval and complex operations such as calculating survival rate, 2) understand various time expressions to answer time-sensitive questions in healthcare, and 3) distinguish whether a given question is answerable or unanswerable based on the prediction confidence. We believe our dataset, EHRSQL, could serve as a practical benchmark to develop and assess QA models on structured EHR data and take one step further towards bridging the gap between text-to-SQL research and its real-life deployment in healthcare. EHRSQL is available at https://github.com/glee4810/EHRSQL. Ruocheng Wang · Yunzhi Zhang · Jiayuan Mao · Ran Zhang · Chin-Yi Cheng · Jiajun Wu Human-designed visual manuals are crucial components in shape assembly activities. They provide step-by-step guidance on how we should move and connect different parts in a convenient and physically-realizable way. While there has been an ongoing effort in building agents that perform assembly tasks, the information in human-design manuals has been largely overlooked. We identify that this is due to 1) a lack of realistic 3D assembly objects that have paired manuals and 2) the difficulty of extracting structured information from purely image-based manuals. Motivated by this observation, we present IKEA-Manual, a dataset consisting of 102 IKEA objects paired with assembly manuals. We provide fine-grained annotations on the IKEA objects and assembly manuals, including decomposed assembly parts, assembly plans, manual segmentation, and 2D-3D correspondence between 3D parts and visual manuals. We illustrate the broad application of our dataset on four tasks related to shape assembly: assembly plan generation, part segmentation, pose estimationand 3D part assembly. Konstantin Schürholt · Diyar Taskiran · Boris Knyazev · Xavier Giró-i-Nieto · Damian Borth In the last years, neural networks (NN) have evolved from laboratory environments to the state-of-the-art for many real-world problems. It was shown that NN models (i.e., their weights and biases) evolve on unique trajectories in weight space during training. Following, a population of such neural network models (referred to as model zoo) would form structures in weight space. We think that the geometry, curvature and smoothness of these structures contain information about the state of training and can reveal latent properties of individual models. With such model zoos, one could investigate novel approaches for (i) model analysis, (ii) discover unknown learning dynamics, (iii) learn rich representations of such populations, or (iv) exploit the model zoos for generative modelling of NN weights and biases. Unfortunately, the lack of standardized model zoos and available benchmarks significantly increases the friction for further research about populations of NNs. With this work, we publish a novel dataset of model zoos containing systematically generated and diverse populations of NN models for further research. In total the proposed model zoo dataset is based on eight image datasets, consists of 27 model zoos trained with varying hyperparameter combinations and includes 50’360 unique NN models as well as their sparsified twins, resulting in over 3’844’360 collected model states. Additionally, to the model zoo data we provide an in-depth analysis of the zoos and provide benchmarks for multiple downstream tasks. The dataset can be found at Sizhe An · Yin Li · Umit Ogras The ability to estimate 3D human body pose and movement, also known as human pose estimation (HPE), enables many applications for home-based health monitoring, such as remote rehabilitation training. Several possible solutions have emerged using sensors ranging from RGB cameras, depth sensors, millimeter-Wave (mmWave) radars, and wearable inertial sensors. Despite previous efforts on datasets and benchmarks for HPE, few dataset exploits multiple modalities and focuses on home-based health monitoring. To bridge the gap, we present mRI, a multi-modal 3D human pose estimation dataset with mmWave, RGB-D, and Inertial Sensors. Our dataset consists of over 160k synchronized frames from 20 subjects performing rehabilitation exercises and supports the benchmarks of HPE and action detection. We perform extensive experiments using our dataset and delineate the strength of each modality. We hope that the release of mRI can catalyze the research in pose estimation, multi-modal learning, and action understanding, and more importantly facilitate the applications of home-based health monitoring. Wonseok Hwang · Dongjun Lee · Kyoungyeon Cho · Hanuhl Lee · Minjoon Seo The recent advances of deep learning have dramatically changed how machine learning, especially in the domain of natural language processing, can be applied to legal domain. However, this shift to the data-driven approaches calls for larger and more diverse datasets, which are nevertheless still small in number, especially in non-English languages. Here we present the first large-scale benchmark of Korean legal AI datasets, LBOX OPEN, that consists of one legal corpus, two classification tasks, two legal judgement prediction (LJP) tasks, and one summarization task. The legal corpus consists of 147k Korean precedents (259M tokens), of which 63k are sentenced in last 4 years and 96k are from the first and the second level courts in which factual issues are reviewed. The two classification tasks are case names (11.3k) and statutes (2.8k) prediction from the factual description of individual cases. The LJP tasks consist of (1) 10.5k criminal examples where the model is asked to predict fine amount, imprisonment with labor, and imprisonment without labor ranges for the given facts, and (2) 4.7k civil examples where the inputs are facts and claim for relief and outputs are the degrees of claim acceptance. The summarization task consists of the Supreme Court precedents and the corresponding summaries (20k). We also release realistic variants of the datasets by extending the domain (1) to infrequent case categories in case name (31k examples) and statute (17.7k) classification tasks, and (2) to long input sequences in the summarization task (51k). Finally, we release LCUBE, the first Korean legal language model trained on the legal corpus from this study. Given the uniqueness of the Law of South Korea and the diversity of the legal tasks covered in this work, we believe that LBOX OPEN contributes to the multilinguality of global legal research. LBOX OPEN and LCUBE will be publicly available. Maribeth Rauh · John Mellor · Jonathan Uesato · Po-Sen Huang · Johannes Welbl · Laura Weidinger · Sumanth Dathathri · Amelia Glaese · Geoffrey Irving · Iason Gabriel · William Isaac · Lisa Anne Large language models produce human-like text that drive a growing number of applications. However, recent literature and, increasingly, real world observations, have demonstrated that these models can generate language that is toxic, biased, untruthful or otherwise harmful. Though work to evaluate language model harms is under way, translating foresight about which harms may arise into rigorous benchmarks is not straightforward. To facilitate this translation, we outline six ways of characterizing harmful text which merit explicit consideration when designing new benchmarks. We then use these characteristics as a lens to identify trends and gaps in existing benchmarks. Finally, we apply them in a case study of the Perspective API, a toxicity classifier that is widely used in harm benchmarks. Our characteristics provide one piece of the bridge that translates between foresight and effective evaluation. Lixin Zou · Haitao Mao · Xiaokai Chu · Jiliang Tang · Wenwen Ye · Shuaiqiang Wang · Dawei Yin The unbiased learning to rank (ULTR) problem has been greatly advanced by recent deep learning techniques and well-designed debias algorithms. However, promising results on the existing benchmark datasets may not be extended to the practical scenario due to some limitations of existing datasets. First, their semantic feature extractions are outdated while state-of-the-art large-scale pre-trained language models like BERT cannot be utilized due to the lack of original text. Second, display features are incomplete; thus in-depth study on ULTR is impossible such as the displayed abstract for analyzing the click necessary bias. Third, synthetic user feedback has been adopted by most existing datasets and real-world user feedback is greatly missing. To overcome these disadvantages, we introduce the Baidu-ULTR dataset. It involves randomly sampled 1.2 billion searching sessions and 7,008 expert annotated queries(397,572 query document pairs). Baidu-ULTR is the first billion-level dataset for ULTR. Particularly, it offers: (1)the original semantic features and pre-trained language models of different sizes; (2)sufficient display information such as position, displayed height, and displayed abstract, enabling the comprehensive study of multiple displayed biases; and (3)rich user feedback on search result pages (SERPs) like dwelling time, allowing for user engagement optimization and promoting the exploration of multi-task learning in ULTR. Furthermore, we present the design principle of Baidu-ULTR and the performance of representative ULTR algorithms on Baidu-ULTR. The Baidu-ULTR dataset and corresponding baseline implementations are available at https://github.com/ChuXiaokai/baiduultrdataset. The dataset homepage is available at https://searchscience.baidu.com/dataset.html. Julien Cornebise · Ivan Oršolić · Freddie Kalaitzis Analyzing the planet at scale with satellite imagery and machine learning is a dream that has been constantly hindered by the cost of difficult-to-access highly-representative high-resolution imagery. To remediate this, we introduce here the WorldStratified dataset. The largest and most varied such publicly available dataset, at Airbus SPOT 6/7 satellites' high resolution of up to 1.5 m/ pixel, empowered by European Space Agency's Phi-Lab as part of the ESA-funded QueryPlanet project, we curate 10,000 sq km of unique locations to ensure stratified representation of all types of land-use across the world: from agriculture to ice caps, from forests to multiple urbanization densities. We also enrich those with locations typically under-represented in ML datasets: sites of humanitarian interest, illegal mining sites, and settlements of persons at risk. We temporally-match each high-resolution image with multiple low-resolution images from the freely accessible lower-resolution Sentinel-2 satellites at 10 m/pixel. We accompany this dataset with an open-source Python package to: rebuild or extend the WorldStrat dataset, train and infer baseline algorithms, and learn with abundant tutorials, all compatible with the popular EO-learn toolbox. We hereby hope to foster broad-spectrum applications of ML to satellite imagery, and possibly develop from free public low-resolution Sentinel2 imagery the same power of analysis allowed by costly private high-resolution imagery. We illustrate this specific point by training and releasing several highly compute-efficient baselines on the task of Multi-Frame Super-Resolution. License-wise, the high-resolution Airbus imagery is CC-BY-NC, while the labels, Sentinel2 imagery, and trained weights are under CC-BY, and the source code under BSD, to allow for the widest use and dissemination. The dataset is available at \url{https://zenodo.org/record/6810792} and the software package at \url{https:/ Wenhao Gao · Tianfan Fu · Jimeng Sun · Connor Coley Molecular optimization is a fundamental goal in the chemical sciences and is of central interest to drug and material design. In recent years, significant progress has been made in solving challenging problems across various aspects of computational molecular optimizations, emphasizing high validity, diversity, and, most recently, synthesizability. Despite this progress, many papers report results on trivial or self-designed tasks, bringing additional challenges to directly assessing the performance of new methods. Moreover, the sample efficiency of the optimization---the number of molecules evaluated by the oracle---is rarely discussed, despite being an essential consideration for realistic discovery applications.To fill this gap, we have created an open-source benchmark for practical molecular optimization, PMO, to facilitate the transparent and reproducible evaluation of algorithmic advances in molecular optimization. This paper thoroughly investigates the performance of 25 molecular design algorithms on 23 single-objective (scalar) optimization tasks with a particular focus on sample efficiency. Our results show that most ``state-of-the-art'' methods fail to outperform their predecessors under a limited oracle budget allowing 10K queries and that no existing algorithm can efficiently solve certain molecular optimization problems in this setting. We analyze the influence of the optimization algorithm choices, molecular assembly strategies, and oracle landscapes on the optimization performance to inform future algorithm development and benchmarking. PMO provides a standardized experimental setup to comprehensively evaluate and compare new molecule optimization methods with existing ones. All code can be found at https://github.com/ Peirong Zhang · Jiajia Jiang · Yuliang Liu · Lianwen Jin Although online handwriting verification has made great progress recently, the verification performances are still far behind the real usage owing to the small scale of the datasets as well as the limited biometric mediums. Therefore, this paper proposes a new handwriting verification benchmark dataset named Multimodal Signature and Digit String (MSDS), which consists of two subsets: MSDS-ChS (Chinese Signatures) and MSDS-TDS (Token Digit Strings), contributed by 402 users, with 20 genuine samples and 20 skilled forgeries per user per subset. MSDS-ChS consists of handwritten Chinese signatures, which, to the best of our knowledge, is the largest publicly available Chinese signature dataset for handwriting verification, at least eight times larger than existing online datasets. Meanwhile, MSDS-TDS consists of handwritten Token Digit Strings, i.e, the actual phone numbers of users, which have not been explored yet. Extensive experiments with different baselines are respectively conducted for MSDS-ChS and MSDS-TDS. Surprisingly, verification performances of state-of-the-art methods on MSDS-TDS are generally better than those on MSDS-ChS, which indicates that the handwritten Token Digit String could be a more effective biometric than handwritten Chinese signature. This is a promising discovery that could inspire us to explore new biometric traits. The MSDS dataset is available at https://github.com/HCIILAB/MSDS. Shangtong Zhang · Shimon Whiteson Emphatic Temporal Difference (TD) methods are a class of off-policy Reinforcement Learning (RL) methods involving the use of followon traces. Despite the theoretical success of emphatic TD methods in addressing the notorious deadly triad of off-policy RL, there are still two open problems. First, followon traces typically suffer from large variance, making them hard to use in practice. Second, though Yu (2015) confirms the asymptotic convergence of some emphatic TD methods for prediction problems, there is still no finite sample analysis for any emphatic TD method for prediction, much less control. In this paper, we address those two open problems simultaneously via using truncated followon traces in emphatic TD methods. Unlike the original followon traces, which depend on all previous history, truncated followon traces depend on only finite history, reducing variance and enabling the finite sample analysis of our proposed emphatic TD methods for both prediction and control. Christoffer Löffler · Christopher Mutschler Active learning prioritizes the labeling of the most informative data samples. However, the performance of active learning heuristics depends on both the structure of the underlying model architecture and the data. We propose IALE, an imitation learning scheme that imitates the selection of the best-performing expert heuristic at each stage of the learning cycle in a batch-mode pool-based setting. We use Dagger to train a transferable policy on a dataset and later apply it to different datasets and deep classifier architectures. The policy reflects on the best choices from multiple expert heuristics given the current state of the active learning process, and learns to select samples in a complementary way that unifies the expert strategies. Our experiments on well-known image datasets show that we outperform state of the art imitation learners and heuristics. YUANGANG PAN · Ivor W. Tsang · Weijie Chen · Gang Niu · Masashi Sugiyama In rank aggregation (RA), a collection of preferences from different users are summarized into a total order under the assumption of homogeneity of users. Model misspecification in RA arises since the homogeneity assumption fails to be satisfied in the complex real-world situation. Existing robust RAs usually resort to an augmentation of the ranking model to account for additional noises, where the collected preferences can be treated as a noisy perturbation of idealized preferences. Since the majority of robust RAs rely on certain perturbation assumptions, they cannot generalize well to agnostic noise-corrupted preferences in the real world. In this paper, we propose CoarsenRank, which possesses robustness against model misspecification. Specifically, the properties of our CoarsenRank are summarized as follows: (1) CoarsenRank is designed for mild model misspecification, which assumes there exist the ideal preferences (consistent with model assumption) that locate in a neighborhood of the actual preferences. (2) CoarsenRank then performs regular RAs over a neighborhood of the preferences instead of the original data set directly. Therefore, CoarsenRank enjoys robustness against model misspecification within a neighborhood. (3) The neighborhood of the data set is defined via their empirical data distributions. Further, we put an exponential prior on the unknown size of the neighborhood and derive a much-simplified posterior formula for CoarsenRank under particular divergence measures. (4) CoarsenRank is further instantiated to Coarsened Thurstone, Coarsened Bradly-Terry, and Coarsened Plackett-Luce with three popular probability ranking models. Meanwhile, tractable optimization strategies are introduced with regards to each instantiation respectively. In the end, we apply CoarsenRank on four real-world data sets. Experiments show that CoarsenRank is fast and robust, achieving consistent improvements over baseline methods. Ba-Hien Tran · Simone Rossi · Dimitrios Milios · Maurizio Filippone The Bayesian treatment of neural networks dictates that a prior distribution is specified over their weight and bias parameters. This poses a challenge because modern neural networks are characterized by a large number of parameters, and the choice of these priors has an uncontrolled effect on the induced functional prior, which is the distribution of the functions obtained by sampling the parameters from their prior distribution. We argue that this is a hugely limiting aspect of Bayesian deep learning, and this work tackles this limitation in a practical and effective way. Our proposal is to reason in terms of functional priors, which are easier to elicit, and to “tune” the priors of neural network parameters in a way that they reflect such functional priors. Gaussian processes offer a rigorous framework to define prior distributions over functions, and we propose a novel and robust framework to match their prior with the functional prior of neural networks based on the minimization of their Wasserstein distance. We provide vast experimental evidence that coupling these priors with scalable Markov chain Monte Carlo sampling offers systematically large performance improvements over alternative choices of priors and state-of-the-art approximate Bayesian deep learning approaches. We consider this work a considerable step in the direction of making the long-standing challenge of carrying out a fully Bayesian treatment of neural networks, including convolutional neural networks, a concrete possibility. Nick Rucks · Tobias Uelwer · Stefan Harmeling Fourier phase retrieval is a classical problem that deals with the recovery of an image from the amplitude measurements of its Fourier coefficients. Conventional methods solve this problem via iterative (alternating) minimization by leveraging some prior knowledge about the structure of the unknown image. The inherent ambiguities about shift and flip in the Fourier measurements make this problem especially difficult; and most of the existing methods use several random restarts with different permutations. In this paper, we assume that a known (learned) reference is added to the signal before capturing the Fourier amplitude measurements. Our method is inspired by the principle of adding a reference signal in holography. To recover the signal, we implement an iterative phase retrieval method as an unrolled network. Then we use back propagation to learn the reference that provides us the best reconstruction for a fixed number of phase retrieval iterations. We performed a number of simulations on a variety of datasets under different conditions and found that our proposed method for phase retrieval via unrolled network and learned reference provides near-perfect recovery at fixed (small) computational cost. We compared our method with standard Fourier phase retrieval methods and observed significant performance enhancement using the learned reference. Guilly Kolkman · Jan Athmer · Alex Labro · Maksymilian Kulicki In this work, the paper Strategic Classification Made Practical is evaluated through a reproduction study. We successfully reproduced the original results using the same dataset and hyperparameters. In addition, we conducted an additional experiment that tests the framework's performance a dataset containing both strategic and non-strategic users. The results show significant decrease in accuracy of linear models proportional to the number of non-strategic users. The non-linear RNN model achieves good performance regardless of the proportion of strategic users. The results provide insight into the limitations of the claims that the original approach is flexible and practical. Vid Stropnik · Maruša Oražem The studied paper proposes a novel output layer for graph neural networks (the graph edit network - GEN). The objective of this reproduction is to assess the possibility of its re-implementation in the Python programming language and the adherence of the provided code to the methodology, described in the source material. Additionally, we rigorously evaluate the functions used to create the synthetic data sets, on which the models are evaluated. Finally, we also pay attention to the claim that the proposed architecture scales well to larger graphs.
{"url":"https://neurips.cc/virtual/2022/session/64304","timestamp":"2024-11-09T02:31:50Z","content_type":"text/html","content_length":"1009520","record_id":"<urn:uuid:5b610b99-52d1-4a6b-9b2b-07735431c531>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00800.warc.gz"}
Option Payoff Excel Tutorial About This Tutorial In this Option Payoff Excel Tutorial you will learn how to calculate profit or loss at expiration for single option, as well as strategies involving multiple options, such as spreads, straddles, condors or butterflies, draw option payoff diagrams in Excel, and calculate useful statistics for evaluating option trades, such as risk-reward ratios and break-even points. During the nine parts of the tutorial, we will create a spreadsheet from scratch, starting from simple calculations and adding one feature at a time. Required Knowledge of Options The tutorial assumes at least basic understanding of how options work – you should be familiar with basic terms and concepts such as strike price, underlying price, expiration, the difference between calls and puts, and the mechanics of long and short option trades. You don't need a detailed knowledge of particular option strategies – it is enough to know that various option strategies can be built by combining different options together. Required Excel Skills To be able to go through the tutorial and successfully replicate the calculations, you only need basic Excel skills, such as entering formulas, basic arithmetics, copying, inserting and deleting cells, rows and columns, or creating simple line charts. We will also touch a few more advanced concepts, such as combo boxes, but these will be explained as we go (more advanced Excel users will be informed where it's safe to skip such parts). This tutorial will not use or teach any macros or VBA. We will use Excel functions including IF, AND, OR, MAX, MIN, SUM, ABS, SIGN, RANK.EQ, COUNTIF, INDEX, MATCH. Most readers will be already familiar with most of these, but each will be briefly introduced before we use it – at least what inputs it takes, what it returns, and how it relates to the thing we are trying to do at the moment. The more advanced ones will get a bit more detailed We will also pay attention to the issues of design, performance, and making our spreadsheets clean and user-friendly. Sometimes more than one solution to the same problem will be introduced and we will discuss why one formula or structure may be better than another, even when both lead to the same result. Many readers will find they have learned as much about Excel itself as about the option strategy calculations. Questions & Feedback If you have any questions or suggestions, please feel free to contact me. Let's Go to Part 1 Continue to part 1: Calculating Call and Put Option Payoff in Excel
{"url":"https://www.macroption.com/option-payoff-excel/","timestamp":"2024-11-03T05:41:06Z","content_type":"text/html","content_length":"18451","record_id":"<urn:uuid:c9a7e5ea-85a4-402f-87e7-81935ee1b4a5>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00823.warc.gz"}
The Stacks project Lemma 42.46.9. In Situation 42.7.1 let $X$ be locally of finite type over $S$. Let $E \in D(\mathcal{O}_ X)$ be a perfect object whose Chern classes are defined. Then $c_ i(E^\vee ) = (-1)^ i c_ i(E) $, $P_ i(E^\vee ) = (-1)^ iP_ i(E)$, and $ch_ i(E^\vee ) = (-1)^ ich_ i(E)$ in $A^ i(X)$. Comments (0) Post a comment Your email address will not be published. Required fields are marked. In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). All contributions are licensed under the GNU Free Documentation License. In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder, this is tag 0FAE. Beware of the difference between the letter 'O' and the digit '0'. The tag you filled in for the captcha is wrong. You need to write 0FAE, in case you are confused.
{"url":"https://stacks.math.columbia.edu/tag/0FAE","timestamp":"2024-11-14T14:31:10Z","content_type":"text/html","content_length":"14397","record_id":"<urn:uuid:24ffb6b8-66b5-48dc-a748-844b2340b4a7>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00077.warc.gz"}
Problem 1 1. Find the arc length of an arc with a central angle of \(\frac{2\pi}{3}\) radians and a radius of 6 units. 2. Find the arc length of an arc with a central angle of \(\frac{5\pi}{6}\) radians and a circumference of 12 units. 3. Find the measure (in radians) of a central angle of an arc with arc length 4 units and radius 3 units. Problem 2 A circle has radius 10 units. For each angle measure, find the area of a sector of this circle with that central angle. 1. \(\frac{\pi}{8}\) radians 2. \(\frac{3\pi}{4}\) radians 3. 1 radian 4. 2 radians Problem 3 Each circle has a shaded sector with a central angle measured in radians. What fraction of the circle is each sector? Problem 4 Find the radian measure of each angle. 1. 30 degrees 2. 45 degrees 3. 50 degrees Problem 5 Find the degree measure of each angle. 1. \(\frac{\pi}{3}\) radians 2. \(\frac{\pi}{2}\) radians 3. \(\frac{3\pi}{4}\) radians 4. 3 radians Problem 6 Calculate the radian measure of a 225 degree angle. Use any method you like, including sketching in the circle diagram provided. Explain or show your reasoning. Problem 7 Andre and Diego are each saving money. They each hope to save 500 dollars. They are tracking their progress on the circles in the image. Diego thinks that Andre has saved more money than Diego has saved. Andre thinks they have saved the same amount. Do you agree with either of them? Explain or show your reasoning. Problem 8 Clare missed class and Jada is teaching her how to construct the circumscribed circle of a triangle. Here are the instructions Jada wrote. “Construct all 3 perpendicular bisectors of the triangle’s sides. The point where the perpendicular bisectors intersect is called the circumcenter. Construct a circle centered at the circumcenter with radius set to the distance between the circumcenter and a vertex. If the triangle has a circumscribed circle, the circle you construct will go through all 3 vertices.” Do you agree with Jada’s instructions? Explain your reasoning.
{"url":"https://im-beta.kendallhunt.com/HS/teachers/2/7/13/practice.html","timestamp":"2024-11-04T17:43:26Z","content_type":"text/html","content_length":"90789","record_id":"<urn:uuid:4a04a29a-7120-4068-8184-9a1931bdf2ec>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00485.warc.gz"}
Typeclass resolution algorithm? Is there still any hope that unifall will be integrated into the main branch, or did people run out of ideas to unify the algorithms? A message was moved here from #Coq users > Typeclass resolution algorithm? by Karl Palmskog. moving this here since more of a Coq dev or plugin question IIRC, the last response was "we need to sort out primitive projections and drop the compatibility layer for those" (maybe because unifall caused too many changes of those, and giving a sensible spec to primitive projections with the compatibility layer isn't really possible) What needs to be sorted out about primitive projections? I was thinking of https://coq.zulipchat.com/#narrow/stream/237664-math-comp-users/topic/Quotients/near/288811182. I'm not a Coq dev and don't have updates on the unifall plans (I only answered because nobody else had), but I can try to say sth about primitive projections. Complicated details follow, but the TLDR is still "this is too messy for unifall to deal with". 1. Records with primitive projections are fundamentally different from normal records: primitive projections cannot be partially applied. They have to be wrapped in functions ("compatibility constants") which can be partially applied (and Coq performs this wrapping). Moreover, reducing normal projections takes _two_ reduction steps instead of one: delta-reduce the projection to unfold it and expose a match, and iota-reduce the match against the input records. 2. Coq goes to extreme lengths to hide this difference, but doesn't succeed 100%. In particular, a primitive projection proj <parameters> record can be represented in three forms: as a compatibility constant, as a "folded" primitive projection, and as an unfolded one; those are all syntactically different, sometimes will not unify, and Set Printing All does not show whether a primitive projection is folded or unfolded. Reducing a folded primitive projection takes _two_ steps instead of one — the folded primitive projection must first be delta-reduced/"unfolded" to an unfolded primitive projections, and only then can it iota-reduce. 3. Coq devs agree things need to change — at least, by collapsing folded and unfolded primitive projections together. https://github.com/coq/ceps/pull/57 and https://github.com/coq/coq/issues/5698 have some extra information. Aside, I like this paper on unfolding and cubical type theory by my colleagues (https://arxiv.org/abs/2210.05420). It would be great if a more principled understanding of unfolding could help here, but I haven't given it much thought. I like that paper too! It introduces a type-safe alternative to Opaque that _actually_ disables conversion, which is pretty impressive! Does this mean an a posteriori Qed then? It's not quite the same thing as Qed, rather, it's a type-enforced kind of opacity. You'd see in the type of expressions whether you need a body to unfold for the expression to type-check. (I'm personally convinced that the kind of "partial elements" that proved to be critical to the cubical approach to be the next big thing in type theory.) Their system is pretty different, but I want to say it gives you what I'd ask from a posteriori Qed... I find the whole paper very accessible (almost no CT required) :+1: Moreover, while their theory is pretty different, the key idea for preserving subject reduction seems to be unsealing must be _transitive_: for instance, unfolding vector concatenation must also unfold natural addition to preserve subject reduction. I fear switching Coq to their theory might be hard, but I wonder if it'd be easier to lift this idea + term-level with_transparent [definitions] t term constructors. Last updated: Oct 13 2024 at 01:02 UTC
{"url":"https://coq.gitlab.io/zulip-archive/stream/237656-Coq-devs-.26-plugin-devs/topic/Typeclass.20resolution.20algorithm.3F.html","timestamp":"2024-11-13T20:37:05Z","content_type":"text/html","content_length":"13113","record_id":"<urn:uuid:7c58dbc2-d829-4169-a454-93ad01b8bdf4>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00832.warc.gz"}
36854 BWN HWH Chord Local Train Time Table If we closely look at the 36854 train time table, travelling by 36854 BWN HWH Chord Local gives us a chance to explore the following cities in a quick view as they come along the route. 1. Barddhaman It is the 1st station in the train route of 36854 BWN HWH Chord Local. The station code of Barddhaman is BWN. The departure time of train 36854 from Barddhaman is 20:07. The next stopping station is Gangpur at a distance of 7km. 2. Gangpur It is the 2nd station in the train route of 36854 BWN HWH Chord Local at a distance of 7 Km from the source station Barddhaman. The station code of Gangpur is GRP. The arrival time of 36854 at Gangpur is 20:12. The departure time of train 36854 from Gangpur is 20:13. The total halt time of train 36854 at Gangpur is 1 minutes. The previous stopping station, Barddhaman is 7km away. The next stopping station is Saktigarh at a distance of 5km. 3. Saktigarh It is the 3rd station in the train route of 36854 BWN HWH Chord Local at a distance of 12 Km from the source station Barddhaman. The station code of Saktigarh is SKG. The arrival time of 36854 at Saktigarh is 20:17. The departure time of train 36854 from Saktigarh is 20:18. The total halt time of train 36854 at Saktigarh is 1 minutes. The previous stopping station, Gangpur is 5km away. The next stopping station is Palla Road at a distance of 5km. 4. Palla Road It is the 4th station in the train route of 36854 BWN HWH Chord Local at a distance of 17 Km from the source station Barddhaman. The station code of Palla Road is PRAE. The arrival time of 36854 at Palla Road is 20:22. The departure time of train 36854 from Palla Road is 20:23. The total halt time of train 36854 at Palla Road is 1 minutes. The previous stopping station, Saktigarh is 5km away. The next stopping station is Chanchai at a distance of 3km. 5. Chanchai It is the 5th station in the train route of 36854 BWN HWH Chord Local at a distance of 20 Km from the source station Barddhaman. The station code of Chanchai is CHC. The arrival time of 36854 at Chanchai is 20:25. The departure time of train 36854 from Chanchai is 20:26. The total halt time of train 36854 at Chanchai is 1 minutes. The previous stopping station, Palla Road is 3km away. The next stopping station is Masagram at a distance of 3km. 6. Masagram It is the 6th station in the train route of 36854 BWN HWH Chord Local at a distance of 23 Km from the source station Barddhaman. The station code of Masagram is MSAE. The arrival time of 36854 at Masagram is 20:28. The departure time of train 36854 from Masagram is 20:29. The total halt time of train 36854 at Masagram is 1 minutes. The previous stopping station, Chanchai is 3km away. The next stopping station is Nabagram at a distance of 3km. 7. Nabagram It is the 7th station in the train route of 36854 BWN HWH Chord Local at a distance of 26 Km from the source station Barddhaman. The station code of Nabagram is NBAE. The arrival time of 36854 at Nabagram is 20:31. The departure time of train 36854 from Nabagram is 20:32. The total halt time of train 36854 at Nabagram is 1 minutes. The previous stopping station, Masagram is 3km away. The next stopping station is Jaugram at a distance of 4km. 8. Jaugram It is the 8th station in the train route of 36854 BWN HWH Chord Local at a distance of 30 Km from the source station Barddhaman. The station code of Jaugram is JRAE. The arrival time of 36854 at Jaugram is 20:35. The departure time of train 36854 from Jaugram is 20:36. The total halt time of train 36854 at Jaugram is 1 minutes. The previous stopping station, Nabagram is 4km away. The next stopping station is Jhapandanga at a distance of 2km. 9. Jhapandanga It is the 9th station in the train route of 36854 BWN HWH Chord Local at a distance of 32 Km from the source station Barddhaman. The station code of Jhapandanga is JPQ. The arrival time of 36854 at Jhapandanga is 20:38. The departure time of train 36854 from Jhapandanga is 20:39. The total halt time of train 36854 at Jhapandanga is 1 minutes. The previous stopping station, Jaugram is 2km away. The next stopping station is Gurap at a distance of 5km. 10. Gurap It is the 10th station in the train route of 36854 BWN HWH Chord Local at a distance of 37 Km from the source station Barddhaman. The station code of Gurap is GRAE. The arrival time of 36854 at Gurap is 20:43. The departure time of train 36854 from Gurap is 20:44. The total halt time of train 36854 at Gurap is 1 minutes. The previous stopping station, Jhapandanga is 5km away. The next stopping station is Hajigarh at a distance of 2km. 11. Hajigarh It is the 11th station in the train route of 36854 BWN HWH Chord Local at a distance of 39 Km from the source station Barddhaman. The station code of Hajigarh is HIH. The arrival time of 36854 at Hajigarh is 20:46. The departure time of train 36854 from Hajigarh is 20:47. The total halt time of train 36854 at Hajigarh is 1 minutes. The previous stopping station, Gurap is 2km away. The next stopping station is Sibaichandi at a distance of 4km. 12. Sibaichandi It is the 12th station in the train route of 36854 BWN HWH Chord Local at a distance of 43 Km from the source station Barddhaman. The station code of Sibaichandi is SHBC. The arrival time of 36854 at Sibaichandi is 20:50. The departure time of train 36854 from Sibaichandi is 20:51. The total halt time of train 36854 at Sibaichandi is 1 minutes. The previous stopping station, Hajigarh is 4km away. The next stopping station is Dhaniakhali Halt at a distance of 3km. 13. Dhaniakhali Halt It is the 13th station in the train route of 36854 BWN HWH Chord Local at a distance of 46 Km from the source station Barddhaman. The station code of Dhaniakhali Halt is DNHL. The arrival time of 36854 at Dhaniakhali Halt is 20:53. The departure time of train 36854 from Dhaniakhali Halt is 20:54. The total halt time of train 36854 at Dhaniakhali Halt is 1 minutes. The previous stopping station, Sibaichandi is 3km away. The next stopping station is Belmuri at a distance of 2km. 14. Belmuri It is the 14th station in the train route of 36854 BWN HWH Chord Local at a distance of 48 Km from the source station Barddhaman. The station code of Belmuri is BMAE. The arrival time of 36854 at Belmuri is 20:56. The departure time of train 36854 from Belmuri is 20:57. The total halt time of train 36854 at Belmuri is 1 minutes. The previous stopping station, Dhaniakhali Halt is 2km away. The next stopping station is Porabazar at a distance of 2km. 15. Porabazar It is the 15th station in the train route of 36854 BWN HWH Chord Local at a distance of 50 Km from the source station Barddhaman. The station code of Porabazar is PBZ. The arrival time of 36854 at Porabazar is 20:58. The departure time of train 36854 from Porabazar is 20:59. The total halt time of train 36854 at Porabazar is 1 minutes. The previous stopping station, Belmuri is 2km away. The next stopping station is Chandanpur at a distance of 4km. 16. Chandanpur It is the 16th station in the train route of 36854 BWN HWH Chord Local at a distance of 54 Km from the source station Barddhaman. The station code of Chandanpur is CDAE. The arrival time of 36854 at Chandanpur is 21:03. The departure time of train 36854 from Chandanpur is 21:04. The total halt time of train 36854 at Chandanpur is 1 minutes. The previous stopping station, Porabazar is 4km away. The next stopping station is Madhu Sudanpur at a distance of 5km. 17. Madhu Sudanpur It is the 17th station in the train route of 36854 BWN HWH Chord Local at a distance of 59 Km from the source station Barddhaman. The station code of Madhu Sudanpur is MDSE. The arrival time of 36854 at Madhu Sudanpur is 21:07. The departure time of train 36854 from Madhu Sudanpur is 21:08. The total halt time of train 36854 at Madhu Sudanpur is 1 minutes. The previous stopping station, Chandanpur is 5km away. The next stopping station is Kamarkundu at a distance of 2km. 18. Kamarkundu It is the 18th station in the train route of 36854 BWN HWH Chord Local at a distance of 61 Km from the source station Barddhaman. The station code of Kamarkundu is KQU. The arrival time of 36854 at Kamarkundu is 21:11. The departure time of train 36854 from Kamarkundu is 21:12. The total halt time of train 36854 at Kamarkundu is 1 minutes. The previous stopping station, Madhu Sudanpur is 2km away. The next stopping station is Balarambati at a distance of 2km. 19. Balarambati It is the 19th station in the train route of 36854 BWN HWH Chord Local at a distance of 63 Km from the source station Barddhaman. The station code of Balarambati is BLAE. The arrival time of 36854 at Balarambati is 21:14. The departure time of train 36854 from Balarambati is 21:15. The total halt time of train 36854 at Balarambati is 1 minutes. The previous stopping station, Kamarkundu is 2km away. The next stopping station is Mirzapur Bnkipr at a distance of 2km. 20. Mirzapur Bnkipr It is the 20th station in the train route of 36854 BWN HWH Chord Local at a distance of 65 Km from the source station Barddhaman. The station code of Mirzapur Bnkipr is MBE. The arrival time of 36854 at Mirzapur Bnkipr is 21:16. The departure time of train 36854 from Mirzapur Bnkipr is 21:17. The total halt time of train 36854 at Mirzapur Bnkipr is 1 minutes. The previous stopping station, Balarambati is 2km away. The next stopping station is Baruipara at a distance of 3km. 21. Baruipara It is the 21st station in the train route of 36854 BWN HWH Chord Local at a distance of 68 Km from the source station Barddhaman. The station code of Baruipara is BRPA. The arrival time of 36854 at Baruipara is 21:19. The departure time of train 36854 from Baruipara is 21:20. The total halt time of train 36854 at Baruipara is 1 minutes. The previous stopping station, Mirzapur Bnkipr is 3km away. The next stopping station is Begumpur at a distance of 4km. 22. Begumpur It is the 22nd station in the train route of 36854 BWN HWH Chord Local at a distance of 72 Km from the source station Barddhaman. The station code of Begumpur is BPAE. The arrival time of 36854 at Begumpur is 21:23. The departure time of train 36854 from Begumpur is 21:24. The total halt time of train 36854 at Begumpur is 1 minutes. The previous stopping station, Baruipara is 4km away. The next stopping station is Janai Road at a distance of 2km. 23. Janai Road It is the 23rd station in the train route of 36854 BWN HWH Chord Local at a distance of 74 Km from the source station Barddhaman. The station code of Janai Road is JOX. The arrival time of 36854 at Janai Road is 21:25. The departure time of train 36854 from Janai Road is 21:26. The total halt time of train 36854 at Janai Road is 1 minutes. The previous stopping station, Begumpur is 2km away. The next stopping station is Gobra at a distance of 3km. 24. Gobra It is the 24th station in the train route of 36854 BWN HWH Chord Local at a distance of 77 Km from the source station Barddhaman. The station code of Gobra is GBRA. The arrival time of 36854 at Gobra is 21:29. The departure time of train 36854 from Gobra is 21:30. The total halt time of train 36854 at Gobra is 1 minutes. The previous stopping station, Janai Road is 3km away. The next stopping station is Dankuni at a distance of 3km. 25. Dankuni It is the 25th station in the train route of 36854 BWN HWH Chord Local at a distance of 80 Km from the source station Barddhaman. The station code of Dankuni is DKAE. The arrival time of 36854 at Dankuni is 21:33. The departure time of train 36854 from Dankuni is 21:34. The total halt time of train 36854 at Dankuni is 1 minutes. The previous stopping station, Gobra is 3km away. The next stopping station is Belanagar at a distance of 4km. 26. Belanagar It is the 26th station in the train route of 36854 BWN HWH Chord Local at a distance of 84 Km from the source station Barddhaman. The station code of Belanagar is BZL. The arrival time of 36854 at Belanagar is 21:37. The departure time of train 36854 from Belanagar is 21:38. The total halt time of train 36854 at Belanagar is 1 minutes. The previous stopping station, Dankuni is 4km away. The next stopping station is Belur at a distance of 4km. 27. Belur It is the 27th station in the train route of 36854 BWN HWH Chord Local at a distance of 88 Km from the source station Barddhaman. The station code of Belur is BEQ. The arrival time of 36854 at Belur is 21:43. The departure time of train 36854 from Belur is 21:44. The total halt time of train 36854 at Belur is 1 minutes. The previous stopping station, Belanagar is 4km away. The next stopping station is Liluah at a distance of 2km. 28. Liluah It is the 28th station in the train route of 36854 BWN HWH Chord Local at a distance of 90 Km from the source station Barddhaman. The station code of Liluah is LLH. The arrival time of 36854 at Liluah is 21:46. The departure time of train 36854 from Liluah is 21:47. The total halt time of train 36854 at Liluah is 1 minutes. The previous stopping station, Belur is 2km away. The next stopping station is Howrah Jn at a distance of 5km. 29. Howrah Jn It is the 29th station in the train route of 36854 BWN HWH Chord Local at a distance of 95 Km from the source station Barddhaman. The station code of Howrah Jn is HWH. The arrival time of 36854 at Howrah Jn is 22:10. The previous stopping station, Liluah is 5km away. Trainspnrstatus is one of the best website for checking trains running status. You can find the 36854 BWN HWH Chord Local running status here. Trainspnrstatus is one stop best portal for checking pnr status. You can find the 36854 BWN HWH Chord Local IRCTC and Indian Railways PNR status here. All you have to do is to enter your 10 digit PNR number in the form. PNR number is printed on the IRCTC ticket. Train number of BWN HWH Chord Local is 36854. You can check entire BWN HWH Chord Local train schedule here. with important details like arrival and departure time. 36854 train schedule BWN HWH Chord Local train time table BWN HWH Chord Local ka time table BWN HWH Chord Local kitne baje hai BWN HWH Chord Local ka number36854 train time table
{"url":"https://www.trainspnrstatus.com/train-schedule/36854","timestamp":"2024-11-02T06:10:44Z","content_type":"text/html","content_length":"53220","record_id":"<urn:uuid:817fb03c-905e-496a-aced-3fbebcf6ba5a>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00445.warc.gz"}
CAS Calculus Information Below are the prerequisites and placement policies for calculus courses. For more information about the courses, please refer to our undergraduate course descriptions. A note on the Math for Economics Sequence and the Calculus Requirement Students who have declared or plan to declare an Economics major or Joint Math/Economics Major can use the MATH-UA 131, 132, and 133 Math for Economics I - III sequence to substitute for the MATH-UA 121, 122 and 123 Calculus I - III requirements. You will have to get this approved by the Math Department. All other students must use the Calculus sequence. Students cannot mix-and-match, combine, our double count between the Math for Economics and Calculus sequences - no exceptions! If you have any questions regarding the Math for Economics and Calculus requirements, please contact the Math ACT and SAT scores must be submitted to NYU and posted in your academic records in order to satisfy the prerequisite. We do not accept screenshots of ACT or SAT scores. To register for MATH-UA 120 Discrete Math, MATH-UA 121 Calculus I, MATH-UA 131 Math for Economics I, or MATH-UA 140 Linear Algebra, students must satisfy only one (1) of the following prerequisites: • SAT score of 670 or higher on mathematics portion • ACT/ACTE Math score of 30 or higher • Valid AP Score: □ AP Precalculus score of 4 or higher □ AP Calculus AB score of 3 or higher □ AP Calculus BC score of 3 or higher • A Level Maths score of C or higher □ Students who took A Level Further Maths should contact the Math Department • AS Level Maths score of B or higher • IB Exam Result from 2021 - 2027 □ IB Analysis and Approaches HL score of 5 or higher □ IB Applications and Interpretations HL score of 5 or higher □ IB Analysis and Approaches SL score of 7 • IB Exam Result from 2014 - 2020 □ IB Mathematics HL score of 5 or higher □ IB Mathematics SL score of 6 or higher □ IB Mathematical Studies SL score of 7 • Completion of MATH-UA 009 Algebra, Trigonometry and Functions with a grade of C or higher □ Grades of Incompete do NOT satisfy the prerequisite • Passing Calculus/MFE I placement exam STUDENTS WHO WISH TO ENROLL IN MATH-UA 122 CALCULUS II, MUST MEET ONE OF THE FOLLOWING PREREQUISITES: • MATH-UA 121 Calculus I with a C or better • AB score of 4 or higher • BC score of 4 or higher • A-level Maths score of B or higher (anyone who took Further Maths should contact the math department as it varies depending on the exam board) IB Prerequisites for MATH-UA 122 Calculus II 2021 - 2027 • IB Analysis and Approaches HL score of 6 • IB Applications and Interpretations HL score of 6 IB Prerequisites for MATH-UA 122 Calculus II 2014 - 2020 • IB Mathematics HL score of 6 or higher (no Topic 9) STUDENTS WHO WISH TO ENROLL IN MATH-UA 123 CALCULUS III, MUST MEET ONE OF THE FOLLOWING PREREQUISITES: • MATH-UA 122 Calculus II with a C or higher • BC score of 5 • SEAB A-Level H-2 score of B or higher • Certain A-Level Further Maths Exams with score of B or higher (anyone who took Further Maths should contact the math department as it varies depending on the exam board) IB Prerequisites for MATH-UA 123 Calculus III 2021 - 2027 • IB Analysis and Approaches HL score of 7 IB Prerequisites for MATH-UA 123 Calculus III 2014 - 2020 • IB Mathematics HL score of 6 or higher (with Topic 9) • IB Further Mathematics HL score of 6 or higher Students who receive a 4 or better on the AB test will receive 4 points of credit for MATH-UA 121 Calculus I. Students who receive a 4 on the BC will receive 4 points of credit for MATH-UA 121 Calculus I; those who received a 5 on the BC will receive 8 point of credit for MATH-UA 121 Calculus I and MATH-UA 122 Calculus II. If you received AP credits for a calculus course, and you register for the same course, you will forfeit your AP credits. A score of 4 or 5 on the AP Precalculus examination satisfies the prerequisite for MATH-UA 121, 131, 120, and 140 (no placement exam needed). These AP credits can never count toward any UA major/ minor or to any UACORE requirement (they are elective credits toward the baccalaureate degree). **These guidelines are for students enrolled in CAS. For all other schools please check with your advisor.
{"url":"https://math.nyu.edu/dynamic/undergrad/ba-cas/calculus-information/","timestamp":"2024-11-06T09:44:22Z","content_type":"text/html","content_length":"54737","record_id":"<urn:uuid:78d02efc-9ce3-4341-995f-e843e67eb9b9>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00422.warc.gz"}
From WikiEducator Title: Slope of a Line Between Two Points Why: Lines can be used to help us define relationships and even make predictions. But, we need to be able to find the equation of a line. And, part of the equation of a line is its slope. Learning Objectives: 1. Learners will read text about finding the slope of a line between two points. 2. Learners will find the slope of several lines give two points on the line. Success Criteria: After completion of this module, learners will be able to: 1. Find the slope of a line between two points. 2. Identify whether the calculated slope makes sense.
{"url":"https://wikieducator.org/Mathematical_Journey/Skill_Development/Slope_of_a_Line_Between_Two_Points","timestamp":"2024-11-03T20:36:51Z","content_type":"text/html","content_length":"23232","record_id":"<urn:uuid:c9fbd6ca-512f-4ecd-a1a4-0d50f1ee3129>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00728.warc.gz"}
Two Numbers? - PuzzlersWorld.com There is a one person who have two numbers, he tells sum to the person S and product of those numbers to P. Now there is this conversion between S and P. S: I don’t know what are the numbers. P: I also don’t know what are the numbers. S: Now i know what are the numbers. P: Now i also know what are the numbers. Assuming S and P to be very wise and good in mathematics, What are those two numbers? Note: Numbers are greater than 0. Click here to See Solution Numbers are 2 and 2. S would have got 4, which means 1+3 or 2+2. so that’s why he was not sure of the numbers. P would have also got 4, possibilities 1*4 and 2*2. Now, had the numbers be 1 and 3, P would have got 3 and he would have been sure of the numbers, but that was not the case, So S became sure that numbers are 2 and 2. Now, P knows that numbers cant be 1 and 4, because there are two possibilities of getting the sum as 5, 1+4, 2+3, and in both these cases S cant guess the number depending on P’s earlier answer, as for both product 4 and 6 there are more than 1 possibilities. Thus P also knows that the numbers are 2 and 2. 1. Sahil Pareek says When 1st says i don’t know that illustrates the sum is greater than 3; when 2nd says i don’t know that illustrates the number is not a prime number; Since as the turn of 1st comes again, he knew the answer this implies that there were only 2 combinations of sum possible previously and now he eliminated one of the pair whose product comes out to be prime, so he left with one. Since the 2 combinations are only possible if : Sum =4; pairs are (3,1) and (2,2) and one of the pairs multiplication answer is 4 which is not prime. So, the answer is (2,2). 2. Saket says Even 1 and 4 just works fine on the same logic. 3. Gurbakhsish Singh says I think the question is incorrect and so is the answer, please look into this for the original question and a logical answer http://www.qbyte.org/puzzles/p003s.html 4. srikar says its 2 and 2 , sum is 4 and it can be divided into 3,1 and 2,2 but 4 has two sets of products so p is not sure 5. rishi says 1 and 2 6. Amit says why it cant be 4 and 5 7. krishna says 8. Raghava G D says the two numbers are 2 and 2. Leave a Comment Cancel reply
{"url":"https://puzzlersworld.com/interview-puzzles/two-numbers/","timestamp":"2024-11-14T07:31:20Z","content_type":"text/html","content_length":"106788","record_id":"<urn:uuid:7641f364-e088-4cf8-93d8-c534b2156afd>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00825.warc.gz"}
Probability theory. Probability Distributions Statistical Entropy - online presentation Probability theory. Probability Distributions Statistical Entropy MEPhI General Physics Molecular Physics. Statistics Lecture 08 Probability theory. Probability Distributions Statistical Entropy Thermodynamics and Physical Statistics Thermodynamic Approach - defines correlations between the observed physical quantities - relies mostly on the experimentally established - allows generally to consider the physical essence of the problem - does not require precise information about the microscopic structure of matter. Thermodynamics and Physical Statistics Statistical Approach Thermodynamic Approach - defines correlations between the observed physical quantities - relies mostly on the experimentally established - allows generally to consider the physical essence of the problem - does not require precise information about the microscopic structure of matter. Based on certain models of the micro-structure of matter defines correlations between the observed physical quantities, starting from the laws of motion of molecules using the methods of probability theory and mathematical allows to understand the random fluctuations of macroscopic Thermodynamics and Physical Statistics Statistical Approach Thermodynamic Approach - defines correlations between the observed physical quantities - relies mostly on the experimentally established - allows generally to consider the physical essence of the problem - does not require precise information about the microscopic structure of matter. Based on certain models of the micro-structure of matter defines correlations between the observed physical quantities, starting from the laws of motion of molecules using the methods of probability theory and mathematical allows to understand the random fluctuations of macroscopic Statistical approach strongly complements thermodynamics. BUT! To implement it effectively we need proper mathematical tools – first of all – the probability theory. This will be the focus of today’s lecture!… Probability = the quantitative measure of possibility for certain event to occur. EXAMPLE 1. Eagle and Tails Game: Test = throw of a coin; Results of a test (events); eagle. or tail If the game is honest – possibilities to obtain each result must be equal. Quantitative measure of possibility = probability: Pe=0,5 – eagle; Pt=0,5 - tail. P = 1 is the probability of a “valid” event, which will occur for sure. Pe + Pt=1 – either eagle or tail will fall out for sure (if rib is excluded ). The rule of adding probabilities P (1 OR 2) = P(1) + P(2) The normalization rule: the sum of probabilities for all possible results of a test is evidently valid and thus is equal to 1. Probability = he quantitative measure of possibility for certain event to occur. EXAMPLE 2. Dice game. Test = throw of a dice; Results of a test (events); 1, 2, 3, 4, 5, 6 If the game is “honest”: P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6. Example 2.1: Probability to obtain the odd number as the result of a test: P(1) + P(3) + P(5) = 1/6 + 1/6 + 1/6 = 1/2 Example 2.2: Probability to obtain “double 6”: from the first throw P(6) = 1/6, and then – out of this probability only 1/6 of chances that 6 will occur again. Finally P( 6 AND 6) = P(6)P(6) = 1/6*1/6 = 1/36 The rule of multiplying probabilities P (1 AND 2) = P(1)P(2) Distribution of Probabilities EXAMPLE 3. Rotating of a top toy. If initially the red mark was exactly to the right from the top toy’s axis (in direction X), after rotation stops - it may stop at any angle . The question: what is the probability that the toy wil stop EXACTLY turned by the angle φ from the initial orientation? The answer: P(φ) = 0 (!) – as absolutely precise experiments are impossible in principle. There always will be some error, or deviation, or uncertainty. The CORRECT question: what is the probability for the top toy to stop within certain range of angles: between φ and (φ + dφ)? Evidently, this probability MAY depend on φ and MUST be proportional to dφ: dP(φ to φ + dφ) = f(φ)dφ Here f(φ) is the probability distribution function. Distribution of Probabilities SO: the PROBABILITY for the top toy to stop between φ and (φ + dφ) equals: dP(φ to φ + dφ) = f(φ)dφ f(φ) is the probability distribution function. The Normalization Rule: the integration of the probability distribution function over the range of all possible results MUST give 1 In our example: 2π∫0 f(φ)dφ = 1 The Even Distribution: if all possible results are equally probable – the distribution function equals In our example: f(φ) = Const = 1/ 2π Even Distribution on the Plane EXAMPLE 4: a point is dropped (randomly) on the table with the area S. What is the probability that it will fall within the little square within dS? f(x,y) = C: the even probability distribution function. The Normalization Rule: ∫S Cdxdy = CS = 1 => C = 1/S The result: dP(dS) = f(x,y)dS = dS/S Even Distribution in Physics EXAMPLE 5: Molecules in gas – at the state of thermal equilibrium, when the concentration of molecules is the same all throw the volume are evenly distributed over all the coordinates: f(x,y,z) = Const = 1/V = dP(x,y,z) = f(x,y,z)dV = dxdydz/V Probability Distribution and Average Values Knowing the distribution of a random x we may calculate its average value <x>: x xdPx xf ( x)dx, Moreover, we may calculate the average for any function ψ(x): ( x) ( x)dPx ( x) f ( x)dx • Average of the sum of two values equals to the sum of their averages <x + y> = <x> + <y> • Average of the product of two values equals to the product of their averages ONLY in case if those two values DO NOT depend on each other <xy> = <x><y> only if x and y are independent variables Probability Distribution and Average Values Examples: in case of even distribution of molecules over certain spherical volume V (balloon with radius R): x xdPx xf ( x)dx, = 0 <x +y> = 0; <x y> = 0 <x2> = R2/5 >0 <r2> = < x2 +y2 +z2> = 3R2/5 > <r>2 <r> = <(x2 +y2 +z2)1/2> = 3R/4 Calculation – ‘live”… Different Kinds of Averages <r> = <(x2 +y2 +z2)1/2> = 3R/4 - average (<r2>)1/2 = 0,61/2R > <r> - squared average <x> = 0; <x2>1/2 = R/51/2 >0 Median average rmed ; the quantity of molecules with r<rmed equals to the quantity of molecules Z with r>rmed rmed=R/21/3=0,7937R > (<r2>)1/2 =0,7756R > This all is about even distribution of molecules over space in spherical balloon. What about the distribution of molecules over velocities and energies? It can be spherically symmetric, but it can not be even as formally there is no upper limit of velocity… Normal Distribution Eagle and Tails game Eagles and Tails Game EXAMPLE 1: One throw of a coin = one Test (N = 1) Test Results: type 1 – “Tail”; If we do very many tests (throws) Pi lim N N = 0,5 type 0 – “Eagle” both for types 1 and 0. NT – number of tests (throws), Ni – number of tests with the result of the type i Рi – probability to obtain the result of the type i Pi lim N N Eagles and Tails Game EXAMPLE 2: 1 Test (or one series of tests) = 2 throws of a coin (N = 2) Test results (types): 0 –2 Eagles 1 –1 Eagle and 1 Tail 2 –2 Tails Pi lim N N = 0,25 probability of types 0 and 2 = 0,5 – probability of type 1 N – the length of a test series (in this case N = 2) , NT – total number of tests Ni – number of tests with result of the type i Рi – probability to obtain the result of the type i The product of probabilities: The first test – the probability of result “1” - P1 The second test – the probability of result “2” = P2. The probability to obtain the result ‘1’ at first attempt and then to obtain result ‘2’ at second attempt equals to the product of probabilities P(1 AND 2) = P(1 ∩ 2) = P1P2 EXAMPLE: Eagle AND Tail = ½ * ½ = ¼ The SUM of probabilities: One test – the probability of result “1” - P1; the probability of result “2” = P2. The probability to obtain the result ‘1’ OR ‘2’ equals to the SUM of probabilities: P(1TOR 2) = P(1 U 2) = P1 + P2 EXAMPLE: Eagle OR Tail = ½ + ½ = 1 EXAMPLE 2: (Eagle AND Tail) OR (Tail AND Eagle) = ¼ + ¼ = ½ Eagles and Tails Game General Case: The test length = N (N throws = 1 test) All the types of test results can be numbered i from 0 to N by the number of the tailspins in each test. Each type of results will have the probability, proportional to the number of variants, that produce that result Number of variants producing this type of result 1 (all the throws = eaglespins) N (only one there was a tailspin (either at first throw, or at the 2nd, or at the 3rd, … or at the last) N(N -1)/2 N(N-1)(N-2)….(N-k+1)/k! = N!/k!(N-k)! i = N-1 1 (all the throws = tailspins) Eagles and Tails Game The test series has the length N (N throws = attempts) Number of variants to obtain the result of the type k (with k tailspins): Ωk= N!/k!(N-k)! ΣΩk= 2N Probability to obtain the result of the type k: Pk= N!/k!(N-k)!2N If N>>1 we may apply the Sterling's approximation: ln(n!) ~= n ln(n/e) Pk= (2/πN)1/2exp(-2(k-N/2)2/N) Для N = 10 here n = (k - N/2) – is the deviation of the 50 obtained result from the average N/2. Probabilities are noticeable when n<~N1/2 << N SOME MATHEMATICS: the probability distribution Smooth CHART – distribution function Area =dPx area = x x+a dPx f ( x)dx x x+dx dN x Px dPx f ( x) lim N Ndx x x dP f ( x)dx 1 N – number pf tests, Ni – number of results of the type i Рi – probability of the result of the type i Pi lim N N For continuously distributed quantities X: differential probability to find the random value X within the range from X to X + dX dP(x) = f(x)dx f(x) – is the probability distribution function Probability to find the random value X within x2 the range from x1 to x2 : P( x1 x x2 ) f ( x)dx For continuously distributed quantities thex1probability to have exactly some value x0, is equal to zero P(x = x0)=0. Normal distribution Pk=(2/πN)1/2exp(-2(k-N/2)2/N) => The normal (or Gauss) distribution – is the smooth approximation of the Newton’s binomial formula Parameters of Gauss distribution: x – some random value μ — the most probable (or expected) value of the random (the maximum of the distribution function) σ — dispersion of the random value. In case of the Eagles and Tails game: μ = N/2, σ = (N/2)1/2 << N Gauss Distribution The normal distribution is very often found in nature. • Eagle and Tails game • Target striking • the deviations of experimental results from the average (the dispersion of results = the experimental error) MEPhI General Physics Normal (Gauss) Distribution and Entropy Normal Distribution. EXAMPLE: Eagles and Tails Game Number of variants leasing to the result k (if N =10): Ωk= N!/k!(N-k)! ΣΩk= 2N Realizations (variants) 0010000000 … 0000000001 0100110011 1101000101 … - total 252 variants. If N>>>1 – figures may be exponentially high N = 10 1. The more realizations are possible – the higher is the probability of the result. 2. The more realizations are possible – the less looks the “degree of order” in this result The Entropy of Probability The more realizations are possible – the more is the probability of it and the less is the “degree of order” in this result. For long series of tests (N>>>1) numbers of variants of realizations may be exponentially high and it may be reasonable to apply logarithmic functions) The definition of the entropy of probability: S(k) = ln(Ωk) - If N = 10 S(0) = S(10) = ln(1) = 0 S(5) = ln(252) = ~5,6 N = 10 S(k) = Aln(Pk) = Aln(Ωk) - ANln2 The higher is the “order” – the lower is entropy The higher is probability – the higher is entropy P(k и i) = PkPi => S(k и i) = S(k)+S(i) Entropy is the additive function! Entropy in Informatics Type of result 1000000000 0100000000 ... 0100110011 1101000101 … Any information or communication can be coded as a string of zeroes and units: 0110010101110010010101111110001010111…. (binary code) To any binary code with length N (consisting of N digits, k of which are units and (N-k) – are zeroes) we may assign a value of entropy: S(N,k) = ln(ΩN,k), where ΩN,k – is the number of variants how we may compose a string out of k units and (N-k) zeroes • Communications, looking like 00000000.. 111111111… S = 0 • Communications with equal number of units and zeroes have maximal entropy . • Entropy of 2 communications equals to the sum of their entropies. Entropy in Informatics Any information or communication can be coded as a string of zeroes and units: 0110010101110010010101111110001010111…. (binary code) The entropy of information or communication, containing k units and (N-k) zeroes is defined like: S(N,k) = ln(ΩN,k), where ΩN,k – is the number of variants how we may compose a string out of k units and (N-k) zeroes • Communications, looking like 00000000.. 111111111… S = 0 … the informational value of such ‘communication’ is also close to zero • Communications with equal number of units and zeroes have maximal entropy . Most probably they are the sets of randomly distributed symbols, thus also having practically no informational value • Entropy of 2 communications equals to the sum of their entropies. Real informative communications as a rule • do include fragments with noticeable predominance of either zeroes or units, and • Have the entropy noticeably different from both maximum and minimum Entropy in Informatics The deffinition of entropy as the measure of disorder (or the measure of informational value) in informatics was first introduced by Claude Shannon in his article «A Mathematical Theory of Communication», published in Bell System Technical Journal in 1948 Those ideas still serve as the base for the theory of connunications, tmethods on encoding and decoding, are used in linguistics etc… Claude Elwood Shannon 1916 - 2001 MEPhI General Physics Statistical Entropy in Physics Distribution of Molecules over possible Imagine that we have К possible “states” and we have N “molecules” that can be somehow distributed over those “states” The row of numbers (n1, n2, …nK) = n(i) forms what is called the «macrostate» of the system. Each macro-state can be realized by the tremendously great number of variants Ω(n(i)). The number of variants of realization is called the “statistic weight” of the macro-state. The more is the “statistic weight” – the greater is the probability to realize this state. Even distribution of molecules over space has the biggest statistical weight and thus id usually associated with thermo-dynamical equilibrium. Statistical Entropy in Physics. Statistical Entropy in Molecular Physics: the logarithm of the number of possible micro-realizations of a state with certain macro-parameters, multiplied by the Boltzmann constant. S k ln J/K In the state of thermodynamic equilibrium, the entropy of a closed system has the maximum possible value (for a given energy). If the system (with the help of external influence)) is derived from the equilibrium state - its entropy can become smaller. BUT… If a nonequilibrium system is left to itself - it relaxes into an equilibrium state and its entropy increases The entropy of an isolated system for any processes does not decrease, i.e. ΔS > 0 , as the spontaneous transitions from more probable (less ordered) to less probable (more ordered) states in molecular systems have negligibly low probability The entropy is the measure of disorder in molecular systems. Statistical Entropy in Physics. For the state of the molecular system with certain macroscopic parameters we may introduce the definition of Statistical Entropy as the logarithm of the number of possible micro-realizations (the statistical weight of a state Ώ) of a this state, multiplied by the Boltzmann constant. S k ln J/К Entropy is the additive quantity. p p1 p2 ... pN 1 2 ... N S k ln k ln 1 ln 2 ... ln N S Si i 1 Statistical Entropy and the Entropy of Ideal Gas Not a strict proof, but plausible considerations. • the number of variants of realization of a state (the statistical weight of a state) shall be higher, if the so called phase volume, available for each molecule (atom), is higher: Phase volume Ω1 ~Vp3~VE3/2 ~VT3/2 • the phase volume for N molecules shall be raised to the power N: Ω ~ VNT3N/2. For multy-atomic molecules, taking into account the possibilities of rotations and oscillations, we shall substitute 3 by i : Ω ~ VNT iN/2 • As molecules are completely identical, their permutations do not change neither the macro-state, nor the micro-states of the system. Thus we have to reduce the statistical weight of the state by the factor ~ N! (the number of permutations for N molecules) Ω~ VNTiN/2 /N!; S=k ln Ω =kNln(VTi/2/NC)= = v(Rln(V/v) + cVlnT +s0) Statistical Entropy and the Entropy of Ideal Gas Ω~ VNTiN/2 /N!; S=k ln Ω =kNln(VTi/2/NC)= = v(Rln(V/v) + cVlnT +s0) The statistical entropy proves to be the same physical quantity, as was earlier defined in thermodynamics without even referring to the molecular structure of matter and heat! 2nd Law of Thermodynamics and the ‘Time arrow’ “The increase of disorder, or the increase of the Entropy, of the Universe over time is one of the possibilities to define the so-called “Time arrow”, that is the direction of time, or the ability to distinguish the past from the future, Stephen Hawking MEPhI General Physics The Distributions of Molecules over Velocities and Energies Maxwell and Boltzmann Distributions That will be the Focus of the next lecture! Thank You for Attention!
{"url":"https://en.ppt-online.org/351912","timestamp":"2024-11-13T08:47:40Z","content_type":"text/html","content_length":"60919","record_id":"<urn:uuid:b9317722-2f18-4a27-ae96-80bfe34afa60>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00007.warc.gz"}
Deterministic Mean Field Games with jumps and mixed variational structure Submitted Paper Inserted: 28 jun 2024 Last Updated: 28 jun 2024 Year: 2024 We consider in this paper some Mean field Game problems where the agents have an individual cost composed of an action penalizing their motion and a running cost depending on the density of all players. This running cost involves two parts: one which is variational but possibly non-smooth (typically, a local cost of the form $f'(\rho(x))$ where $\rho(x)$ is the density of the distribution of players at point $x$) and one which is smooth but non-variational (typically, a non-local cost involving convolutions). In order to show that this is a very general framework, we consider both the case where the action is of kinetic energy type, i.e. it is an integral of a power of the velocity, and the case where it counts the number of jumps of the curve, more adapted to some real estate and urban planning models. We provide an existence result of a ``formal'' equilibrium by Kakutani's fixed point theorem. ``Formal'' means that we do not prove that such an equilibrium is a measure on curves concentrated on optimal curves but only that it solves a variational problem whose optimality conditions formally correspond to this. We then rigorously prove that optimizers of this variational problem are indeed concentrated on optimal curves for an individual problem. Both the existence and these optimality conditions for the minimizers are studied in the kinetic and in the jumps case. We then prove that the opimization among measures on curves (the Lagrangian framework) can be reduced to a minimization among curves of measures (the Eulerian framework) by proving a representation of the action functional. This is classical in the kinetic case and involves the Wasserstein distances $W_p$, while it lets the total variation norm appear in the jump case. We then prove that, under some assumptions, the solution of the variational problem expressed in Eulerian language depends in a Lipschitz continuous way on the data, which can prove that the fixed point argument for the equilibrium can be reformulated as the fixed point of a Lipschitz uni-valued map. Under smallness assumptions on some data, this becomes a contraction and the equilibrium can be found by Banach fixed point. This allows for efficient numerical computations, based on the solution of a non-smooth convex optimization problem, which we present at the end of the paper.
{"url":"https://cvgmt.sns.it/paper/6640/","timestamp":"2024-11-07T23:13:02Z","content_type":"text/html","content_length":"10069","record_id":"<urn:uuid:db592ff0-99a0-48d9-bd8f-2fb8f68f9782>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00881.warc.gz"}
Topics: Combinatorics In General > s.a. discrete geometry; mathematics [finite mathematics]. * Idea: Combinatorial theory is the branch of mathematics concerned with discrete problems of counting (how many elements there are in sets that are known to be finite), selection, arrangement, permutation, etc. > Online resources: MathWorld pages. Branches > s.a. Coloring; Combinatorial Group Theory; Combinatorial Topology. * Enumeration theory: Its purpose is to determine, given a system {S[i] of finite sets, the cardinality of each S[i], or counting function N(i); Examples: For S[n] = P{1,...,n}, N(n) = 2^n; For S[n] = {divisors of n}, N(n) = d(n). * Other: It includes Ramsey theory, combinatorial designs, codes, graphs, networks, finite Boolean Algebras, game theory, finite probability theory, combinatorial geometry, lattices, Matroids, posets Algebraic Combinatorics @ References: Stanley BAMS(03) [progress]. Combinatorial Geometry > s.a. Geometric Topology. $ Def: A matroid in which all single points and pairs are independent sets. @ Texts: Crapo & Rota 70; Pach & Agarwal 95. @ References: Goodman & O'Rourke 04 [discrete methods in geometry]. Combinatorial Topology > see cell complexes. Probabilistic Combinatorics > s.a. graphs; phase transitions. @ Texts: Erdős & Spencer 74; Alon & Spencer 00; Beck 09 [inevitable randomness in discrete mathematics]. @ And physics: Scott & Sokal JSP(05)cm/03 [repulsive lattice gas]. Other Concepts and Results > s.a. Generating Function; partitions; Species [combinatorial species]. * Combinatorial numbers: The best known ones are binomial numbers; Other examples are Rook, Bell and Stirling numbers, which find applications in quantum field theory (normal ordering of operators). @ References: Dorlas et al a1902 [identity relating collections of m complex numbers and partitions of {1, ..., m}]. > And other areas: see grassmann numbers. References > s.a. graphs. @ General: Vilenkin 71; Comtet 74; Street & Wallis 77; Rota 78; Aigner 79; Stanley 83; Penner 99 [proof techniques]; Bóna 02 [II/II]; West 20. @ Books, I: Niven 65; Honsberger 73, 76 [problems]. @ Books, II: Cen & Koh 92 [problems]; Andreescu & Feng 02 [International Mathematical Olympiad problems], 03 [counting strategies]; Bóna 11 [enumeration and graph theory]; Vermani & Vermani 12 [discrete mathematics]; Koh & Tay 13 [counting]. @ Computational: Pemmaraju & Skiena 03 [Mathematica]; > s.a. quantum computing. In Physics > s.a. Polymers; probability in physics; states in statistical mechanics [partition function]; tilings. * Idea: Traditionally, among physicists combinatorics was identified with enumeration theory and probabilistic combinatorics, but lattice theory (from quantum mechanics) and graph and poset theory (from quantum gravity, for example) are becoming more important and better known. @ General references: Duchamp & Cheballah a0901 [open problems in combinatorial physics]. @ In quantum field theory: Bender et al qp/06 [and integer sequences]; Tanasa SLC-a1102; > s.a. quantum field theory formalism. main page – abbreviations – journals – comments – other sites – acknowledgements send feedback and suggestions to bombelli at olemiss.edu – modified 15 jul 2020
{"url":"https://www.phy.olemiss.edu/~luca/Topics/math/combin.html","timestamp":"2024-11-12T02:13:32Z","content_type":"text/html","content_length":"9403","record_id":"<urn:uuid:3d63e384-034b-4f1f-a963-36bbb5cf4bf7>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00070.warc.gz"}
1.4 How derivatives and integrals relate to one another in Calculus First, let us review some of the properties of differentials and derivatives, referencing the expression and graph shown below: • A differential is an infinitesimal increment of change (difference) in some continuously-changing variable, represented either by a lower-case Roman letter d or a lower-case Greek letter “delta” (δ). Such a change in time would be represented as dt; a similar change in temperature as dT; a similar change in the variable x as dx. • A derivative is always a quotient of differences: a process of subtraction (to calculate the amount each variable changed) followed by division (to calculate the rate of one change to another • The units of measurement for a derivative reflect this final process of division: one unit divided by some other unit (e.g. gallons per minute, feet per second). • Geometrically, the derivative of a function is its graphical slope (its “rise over run”). • When computing the value of a derivative, we must specify a single point along the function where the slope is to be calculated. • The tangent line matching the slope at that point has a “rise over run” value equal to the derivative of the function at that point. Next, let us review some of the properties of integrals, referencing the expression and graph shown below: • An integral is always a sum of products: a process of multiplication (to calculate the product of two variables) followed by addition (to sum those quantities into a whole). • The units of measurement for an integral reflect this initial process of multiplication: one unit times some other unit (e.g. kilowatt-hours, foot-pounds, volt-seconds). • When computing the value of an integral, we must specify both the starting and ending points along the function defining the interval of integration (a and b). • Geometrically, the integral of a function is the graphical area enclosed by the function and the interval boundaries. • The area enclosed by the function may be thought of as an infinite sum of extremely narrow rectangles, each rectangle having a height equal to one variable (y) and a width equal to the differential of another variable (dx). Just as division and multiplication are inverse mathematical functions (i.e. one “un-does” the other), differentiation and integration are also inverse mathematical functions. The two examples of propane gas flow and mass measurement highlighted in the previous sections illustrates this complementary relationship. We may use differentiation with respect to time to convert a mass measurement ( m) into a mass flow measurement (W, or dm/dt ). Conversely, we may use integration with respect to time to convert a mass flow measurement (W, or dm/dt ) into a measurement of mass gained or lost (Δm Likewise, the common examples of position (x), velocity (v), and acceleration (a) used to illustrate the principle of differentiation are also related to one another by the process of integration. Reviewing the derivative relationships: Now, expressing position and velocity as integrals of velocity and acceleration, respectively^9 : Differentiation and integration may be thought of as processes transforming these quantities into one another. Note the transformation of units with each operation – differentiation always divides while integration always multiplies: The inverse nature of these two calculus operations is codified in mathematics as the Fundamental Theorem of Calculus, shown here: What this equation tells us is that the derivative of the integral of any continuous function is that original function. In other words, we can take any mathematical function of a variable that we know to be continuous over a certain range – shown here as f(x), with the range of integration starting at a and ending at b – integrate that function over that range, then take the derivative of that result and end up with the original function. By analogy, we can take the square-root of any quantity, then square the result and end up with the original quantity, because these are inverse functions as well. A feature of this book which may be helpful to your understanding of derivatives, integrals, and their relation to each other is found in an Appendix section . In this section, a series of illustrations provides a simple form of animation you may “flip” through to view the filling and emptying of a water storage tank, with graphs showing stored volume (V ) and volumetric flow rate (Q). Since flow rate is the time-derivative of volume (Q = dV /dt ) and volume change is the time-integral of volumetric flow rate (ΔV = ∫ Q dt), the animation demonstrates both concepts in action.
{"url":"https://www.technocrazed.com/1-4-how-derivatives-and-integrals-relate-to-one-another","timestamp":"2024-11-08T17:56:01Z","content_type":"text/html","content_length":"88467","record_id":"<urn:uuid:08b39a72-6168-466d-a511-5a777df8997c>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00376.warc.gz"}
National Institute of Technology Srinagar Computational Laboratory The Department has a full-fledged computational laboratory to meet the requirements of the B.Tech. students, research scholars and the faculty members. Presently there are 20 systems in the Computation lab with advanced configuration. The students gain exposure in handling problems through mathematical software like MATLAB, Mathematica etc.
{"url":"https://old.nitsri.ac.in/Department/DisplayDeptPage.aspx?page=oakkc&ItemID=kaime&nDeptID=o","timestamp":"2024-11-02T10:46:42Z","content_type":"text/html","content_length":"63815","record_id":"<urn:uuid:097efdb9-f65e-4399-895a-4c491e2c3a9c>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00081.warc.gz"}
mp_arc 97-136 97-136 Amadeu Delshams, Arnau Mir Psi-Series of Quadratic Vector Fields on the Plane (80K, LaTeX 2.09 and PicTeX) Mar 19, 97 Abstract , Paper (src), View paper (auto. generated ps), Index of related papers Abstract. Psi-series (i.e., logarithmic series) for the solutions of quadratic vector fields on the plane are considered. Its existence and convergence is studied, and an algorithm for the location of logarithmic singularities is developed. Moreover, the relationship between psi-series and non-integrability is stressed and in particular it is proved that quadratic systems with psi-series that are not Laurent series do not have an algebraic first integral. Besides, a criterion about non-existence of an analytic first integral is given. Files: 97-136.tex
{"url":"http://kleine.mat.uniroma3.it/mp_arc-bin/mpa?yn=97-136","timestamp":"2024-11-10T07:36:58Z","content_type":"text/html","content_length":"1612","record_id":"<urn:uuid:19b837de-4c69-409b-bd69-ce21e70c9e12>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00848.warc.gz"}
A uniform rod AC of length &#039;l&#039; and mass m is kept on a horizontal smooth plane. A particle of the same mass m moving on the plane with velocity &#039;v&#039; strikes the rod at point B making an angle 37° with the rod. The collision is elastic. After collision: A) The angular velocity of the rod will be 72v/55l B) The center of the rod will travel a distance fl/3 in the time in which it makes half rotation C) Impulse of the impact force is 24mv/55 D) None of these A uniform rod AC of length 'l' and mass m is kept on a horizontal smooth plane. A particle of the same mass m moving on the plane with velocity 'v' strikes the rod at point B makin... A uniform rod AC of length 'l' and mass m is kept on a horizontal smooth plane. A particle of the same mass m moving on the plane with velocity 'v' strikes the rod at point B making an angle 37° with the rod. The collision is elastic. After collision: A) The angular velocity of the rod will be 72v/55l B) The center of the rod will travel a distance fl/3 in the time in which it makes half rotation C) Impulse of the impact force is 24mv/55 D) None of these Understand the Problem The question presents a scenario involving a uniform rod and a particle collision, asking for specific outcomes after the collision, such as angular velocity, distance traveled, and impulse of the impact force. It requires an application of principles from mechanics. The angular velocity is $\frac{72v}{55l}$, distance traveled is $\frac{l}{3}$, impulse is $\frac{24mv}{55}$. Answer for screen readers The angular velocity of the rod will be $\frac{72v}{55l}$, the center of the rod will travel a distance $\frac{l}{3}$ in the time it makes half rotation, and the impulse of the impact force is $\frac {24mv}{55}$. Thus the correct answers are A, B, and C. Steps to Solve 1. Set Up the System Assume a uniform rod AC of length $l$ and mass $m$ is on a smooth surface. A particle of the same mass $m$ strikes the rod at point $B$, making an angle of $37^\circ$ with the vertical. The initial velocity of the particle is $v$. 2. Determine the Components of Velocity The velocity of the particle can be resolved into two components: horizontal ($v_x$) and vertical ($v_y$). [ v_x = v \cos(37^\circ) = v \cdot \frac{4}{5} ] [ v_y = v \sin(37^\circ) = v \cdot \frac{3}{5} ] 3. Angular Momentum Conservation For an elastic collision, angular momentum about the point of contact before and after the collision must be conserved. The initial angular momentum $L_i$ about point A (the end of the rod) is given by: [ L_i = m \cdot v_x \cdot \frac{l}{2} = m \cdot \frac{4v}{5} \cdot \frac{l}{2} ] 4. Calculate Final Angular Velocity Let $\omega$ be the angular velocity after the collision. According to the conservation of angular momentum: [ L_i = I \omega ] Where $I$ is the moment of inertia of the rod about point A: [ I = \frac{1}{3}ml^2 ] We equate and solve for $\omega$: [ m \cdot \frac{4v}{5} \cdot \frac{l}{2} = \frac{1}{3} ml^2 \omega ] Cancel $m$ and solve for $\omega$: [ \frac{4v l}{10} = \frac{l^2 \omega}{3} ] [ \omega = \frac{4v \cdot 3}{10l} = \frac{12v}{10l} = \frac{72v}{55l} ] 5. Horizontal Distance Travelled by Center of Mass The center of mass of the rod will translate. It takes time $t$ to complete half a rotation, given by: [ t = \frac{\pi}{\omega} = \frac{\pi \cdot 55l}{72v} ] The distance traveled by the center of mass is: [ d = v_x \cdot t = v \cdot \frac{4}{5} \cdot \frac{\pi \cdot 55l}{72v} ] [ = \frac{4l\pi \cdot 55}{360} = \frac{22l\pi}{72} ] 6. Impulse of the Impact Force Impulse $J$ in an elastic collision can be found: [ J = \Delta p = mv - mv' \implies J = m \left( v_x - 0 \right) \text{ (since particle comes to rest)} ] [ J = m \cdot \frac{4v}{5} ] After calculating conditions for the impulse impact due to change in velocity, we express using total momentum formulas. The angular velocity of the rod will be $\frac{72v}{55l}$, the center of the rod will travel a distance $\frac{l}{3}$ in the time it makes half rotation, and the impulse of the impact force is $\frac {24mv}{55}$. Thus the correct answers are A, B, and C. More Information In an elastic collision, both momentum and kinetic energy are conserved. The calculations here demonstrate how to apply these principles in an idealized scenario. • Forgetting to resolve velocity into components can lead to incorrect calculations of angular momentum or impulse. • Neglecting the conservation of angular momentum for rotational systems can result in erroneous angular velocity calculations. AI-generated content may contain errors. Please verify critical information
{"url":"https://quizgecko.com/q/a-uniform-rod-ac-of-length-l-and-mass-m-is-kept-on-a-horizontal-smooth-plane-ittla","timestamp":"2024-11-13T18:07:01Z","content_type":"text/html","content_length":"177421","record_id":"<urn:uuid:84461dc9-7cd3-417c-9edb-220e9b720044>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00164.warc.gz"}
gas meter How do you size a gas meter? How do you size a gas meter? Sizing Gas Meters When a gas utility provider installs a gas meter at a residence, technicians will determine its size based on the total capacity of the home’s gas appliances. The gas meter’s label will indicate its capacity in cubic feet per hour (cf/h). Common meter sizes range from 175 cf/h to 275 cf/h. What is a U25 gas meter? A U25 is a medium size diaphragm meter generally used for commercial applications and occasionally for domestic properties. The U number is used to define all Diaphragm Gas Meters and relates to the meter capacity in Standard Cubic Metres per Hour (SCMH). How big is a U16 gas meter? Meter Type: Low Pressure (Diaphragm Meters) Flow Rate kW (kilowatts Capacity (SCMH) Standard cubic metres per U6 0 – 65 kW 6 m³/hr U16 66 – 173 kW 16 m³/hr U25 174 – 271 kW 25 m³/hr U40 272 – 430 kW 40 m³/hr How do you read a 2 dial gas meter? How to Read a Natural Gas Meter 1. Read the dials left to right. 2. If the hand is between two numbers, always select the lower number. 3. When the hand is between “9” and “0,” then “9” is considered the lower number. 4. When the hand looks as though it is DIRECTLY on the number, look at the dial to the right. How many kw can a domestic gas meter supply? The standard gas meters (also known as G4/U6 meters) have a maximum output of 64.6 kWh, in order to confirm that your current supply will need an upgrade you’ll need a gas safe engineer to calculate your ‘Service Pipe Energy Value’ (SPEV) in kWh which covers the total amount of gas used by all appliances (or planned … How do I know if I have a U16 gas meter? Using the above principles, it should be easy to identify your meter size and model. A meter displaying Qmax = 16 m3/h will identify the meter as a U16 with a peak flow of 16 m3/h. How do I know what type of gas meter I have? The first thing to do is establish exactly what type of meter you have. If your meter only has 4 numbers to the left of any numbers in red then your meter is an older imperial-type meter. If your meter has 5 numbers to the left of a decimal point or space then you have a newer-style metric meter. Can I work on a U16 gas meter? If you have domestic qualifications, you can work on up to a U16 meter, 35mm copper or 1¼” steel, and an installation volume of 0.035m3. Anything over this requires non-domestic qualifications. How do you read a gas dial meter? To read the meter: 1. Read the first 4 dials from left to right – ignore the large dials or red dials. 2. If the pointer is between two numbers, write down the lower number – if it’s between 9 and 0, write down 9. 3. If the pointer is directly over a number, write down that number. What do the numbers on my gas meter mean? All gas meters display a single four or five digit number indicating the number of gas units you’ve used. You can work out how many units you’ve used by subtracting your previous reading from an up-to-date one. What is the largest gas meter a domestic engineer can work on? How do I know if I need a bigger gas meter? To determine if your gas meter is undersized, you first have to know the total gas demand for your building. The easiest way to do this is to add up the “British thermal units/hour” (Btu/h) ratings of all the gas appliances. Each appliance will have a date plate with this information, though some can be hard to find. What are the different types of gas meters? There are four natural gas meter types often used for flow measurement. They are mass flow meters, velocity flow meters, differential pressure, and PD meters. How do you read meter units? Dial meter 1. Stand directly in front of your meter. 2. Read the dial on the left first. (Ignore the dial underneath). 3. Look at the two numbers the pointer is between and record the lowest number. (If the pointer is between 9 and 0, record 9.) 4. Do the same with each dial, reading left to right. What size gas meter can a domestic gas engineer work on? If you have domestic qualifications, you can work on up to a U16 meter, 35mm copper or 1¼” steel, and an installation volume of 0.035m3. Anything over this requires non-domestic qualifications. There is no equipotential bonding on the meter but the electrician says it’s fine. What if gas meter is too small? If a gas meter is undersized, the attached gas appliances could be starved for gas – especially when the major appliances are running at the same time. How do I know what type of meter I have? You’re looking for your meter box, which is most likely white. If you live in a flat or an apartment, you might find your meter on the ground floor. Each meter should be labelled with the corresponding flat – if not you’ll need to contact your landlord and they’ll be able to tell you where it is. What should a gas reading look like? To read the meter: Read the first 4 dials from left to right – ignore the large dials or red dials. If the pointer is between two numbers, write down the lower number – if it’s between 9 and 0, write down 9. If the pointer is directly over a number, write down that number. How do you read a gas Smart Meter? The way you take a reading from a smart gas meter depends on the type of meter you have….To read the meter: 1. press either of the buttons on your meter. 2. wait until you see ’01’ with numbers followed by ‘M3’ 3. write down the number from left to right. 4. ignore any zeroes at the beginning and any numbers after the decimal point. How do you read a Gas Smart Meter? How do you read a kWh meter? Always read the dials from the right to the left, starting from Dial A to Dial E. Read the number by the pointer of the dial. When the pointer is between two (2) numbers, the lower number is recorded. To compute your electric consumption, simply subtract the previous reading from the present reading. What is the best diaphragm gas meter for home use? The Honeywell American Meter AC250 is the utility preferred residential diaphragm gas meter around the country. Its u nrivaled longevity and accuracy retention make it a superior economical choice for small gas loads. What size gas meter do I need for my home? A multipurpose 3/4″ or 1″ diaphragm gas meter capable of gas loads up to 250 MBH (0.6 SG N.G. @ 7″ WC). IN STOCK! FREE SHIPPING! The Honeywell American Meter AC250 is the utility preferred residential diaphragm gas meter around the country. What is the capacity of a 3/4 gas meter? A multipurpose 3/4″ or 1″ diaphragm gas meter capable of gas loads up to 250 MBH (0.6 SG N.G. @ 7″ WC). IN STOCK!
{"url":"https://bigsurspiritgarden.com/2022/10/25/how-do-you-size-a-gas-meter/","timestamp":"2024-11-10T21:02:52Z","content_type":"text/html","content_length":"55844","record_id":"<urn:uuid:8988539e-63ca-4bb6-a7c9-542e3b9231ce>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00174.warc.gz"}
Natural Numbers List, Definition and Examples, for Class 10 & 11 Natural Numbers As we know Number system is the fundamental stone of Mathematics. Natural Number is a part of the Number system. Positive numbers from 1 to infinity make up Natural Numbers, it does not contain zero and negative integers. Natural numbers are also known as counting numbers Because we use natural numbers while counting something. In this article, we will get a clear and detailed analysis of Natural Numbers and solve some numerical problems. Natural Numbers Definition and Examples The collection of numbers known as “Natural numbers” is made up of positive integers or numbers from 1 to infinity. Natural numbers do not include 0, negative integers (such as -1, -35, etc.), fractions, or decimals. These natural numbers are often used while counting anything, transacting money, or determining temperature, time, or amount; thus, they are also known as counting numbers. 3 bicycles, 5 buses, Natural numbers from 1 to 10 are: 1, 2, 3, 5, 6, 7, 8, 9, 10. Natural numbers” is made up of positive integers or numbers from 1 to infinity.Some examples of Natural Numbers are 1,10, 45, 29,99,275,360 ,30000,1538296 and so on. Natural Numbers kya hote hai in Hindi “प्राकृतिक संख्या” के रूप में जानी जाने वाली संख्याओं का संग्रह धनात्मक पूर्णांकों या 1 से अनंत तक की संख्याओं से बना होता है। प्राकृत संख्याओं में 0, ऋणात्मक पूर्णांक (जैसे -1, -35, आदि), भिन्न या दशमलव शामिल नहीं हैं। इन प्राकृतिक संख्याओं का उपयोग अक्सर कुछ भी गिनते समय, पैसे का लेन-देन करते समय, या तापमान, समय, या राशि का निर्धारण करते समय किया जाता है; इस प्रकार, उन्हें गिनती संख्या के रूप में भी जाना जाता है। 3 साइकिलें, 5 बसें, आदि। 1 से 10 तक की प्राकृत संख्याएँ हैं: 1, 2, 3, 5, 6, 7, 8, 9, 10। Natural Numbers Sum Numbers Sum 1-10 55 1-100 5050 1-1000 500500 1-10000 50005000 1-100000 5000050000 1-1000000 500000500000 Sum of n Natural Numbers For AP of natural numbers, a = 1 and d = 1, the Sum of n terms S[n] of this AP can be found using the formula- Sn = n/2[2×1+(n-1)1] S[n] = n(n+1)/2 Smallest Natural Number The smallest Natural Number is Zero [ 0 ]. Natural Numbers List From 1 to 50 Natural integers between 1 and 50 are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49,and 50 What are Natural Numbers Symbol? A set of Natural Numbers defines as a collection of natural numbers. In the field of Mathematics set of natural numbers is represented by N and written as {1,2,3,…….}. Thus, the Set of Natural Numbers is N = {1,2,3,4,5,6,7,8,9,10…∞} A group of Natural Numbers can be expressed in a variety of ways. Such as, N = Set of all numbers starting from 1 to infinity (∞) [Statement Form] N = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10,……,) [Roaster Form] N = {x : x is an integer starting from 1} [Set Builder Form] Natural Numbers Odd Number set Numbers that are divisible only by one and themselves and can’t be divided into two equal parts are called Odd Numbers. Set of Natural Odd Numbers is N ={ 1,3,5,7,9,11,13, ……….} Natural Numbers Even Number set Numbers that are divisible by 2 and can be divided into two equal parts are called Even Numbers.Set of Natural Odd Numbers is N ={ 2,4,6,8,10,12,14, ……….} All Natural Numbers are Whole Numbers Yes, All the natural numbers are also whole numbers. Natural Numbers are Whole Numbers: Difference Before We know about the differences between whole numbers and Natural numbers. First, Let’s what is a whole number. The set of natural numbers, which includes zero, is referred to as the set of whole numbers in the number system. The set of whole numbers in mathematics is represented by the symbol W, which is written as { 0, 1, 2, 3, …….} W = {0,1,2,3,4,5,6,7,8,9,10…∞} Set of Natural Numbers is N = {1,2,3,4,5,6,7,8,9,10…∞} The Main differences between Natural numbers and whole numbers are pointed out below – Natural Number Whole Number 1. The range of whole numbers is zero to infinity. 1. Numbers from 1 to infinity are referred to as “natural numbers.” W = {0,1,2,3,4,5,6,7,8,9,10…∞} N = {1,2,3,4,5,6,7,8,9,10…∞} 2)Natural numbers do not include 0 (Zero) 2) Whole numbers include 0 (zero) 3)Natural Numbers are a subset of Whole Numbers. 3) But Whole Numbers are not a subset of Natural Numbers, Natural numbers are a subset of whole numbers-Is it True? Yes. It is possible to define Natural Number as a subset of a Whole Number. because the entire set of numbers runs from 0 to infinity. As opposed to the Natural Number Set, which runs from 1 to infinity. It is simple to state that the whole number contains all natural numbers. Natural numbers are therefore a subset of the entire set of numbers. Natural Numbers Start from in Number Line Natural Numbers are all positive numbers from 1 to infinity; they do not include zero or negative integers.On a number line, natural numbers are represented as follows: Natural Numbers on Number Line Zero: Is it a Natural Number? No. Zero is not a Natural Number .it is a whole Number. The Value of Zero is null. Natural Number set starts from 1 to infinity. As we know Natural numbers are also called Counting Numbers because we use these numbers while counting. We Can’t use Zero in counting. Zero is a part of the whole Number but not a natural number. Natural Numbers Starts from Natural numbers starts from 1 and continue indefinitely. They are a set of positive whole numbers that do not include zero or any negative numbers. So, the sequence of natural numbers begins with 1 and goes on as follows: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, and so forth, with no end in sight. Natural Numbers Examples Q. What is the average of the first 10 natural numbers? The first 10 natural numbers are -1,2,3,4,5,6,7,8,9,10. The Sum of the first ‘n” natural numbers is 1+2+3+4+……..+n = n(n+1)/2. The sum of the first 10 natural numbers is -[10(10+1)]/2 = (10×11)/2 =5×11 = 55 Average = ( Total sum)/ (Total number of numbers) = 55/10= 5.5. Q, What is the sum of the first 50 natural numbers? We can easily find out the sum of the first 50 natural numbers by following the formula, The Sum of the first ‘n” natural numbers is 1+2+3+4+……..+n = n(n+1)/2. In the given problem n = 50 so,The Sum of the first 50 natural numbers is= [50(50+1)]/2 = (50×51)/2 =25×51 = 1275 Q. Find out the natural numbers from the given list. -45, 89, -999,0,112,-78,18,8.76,¾,1.23 The natural numbers present in the list are – 89,18,112 Q. What is the sum of the first 20 natural numbers? we can easily find out the sum of the first 20 natural numbers by following the formula, The Sum of the first ‘n” natural numbers is 1+2+3+4+……..+n = n(n+1)/2. In the given problem n = 20 so,The Sum of the first 20 natural numbers is= [20(20+1)]/2 = (20×11)/2 =20×21 = 420 Related Post:
{"url":"https://www.adda247.com/school/natural-numbers/","timestamp":"2024-11-12T16:07:29Z","content_type":"text/html","content_length":"660221","record_id":"<urn:uuid:1a59d4c4-ada4-4441-b223-0117c9f7dee3>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00574.warc.gz"}
Top-down and Bottom-up parsing The topic of parsing is a rather complex one, so I've decided to post some examples of two parsing strategies: top-down parsing and bottom-up parsing. Top-down parsing is determining how to interpret a particular piece of data by splitting around nonterminals first and sorting these into a hierarchy that ends with terminals. Bottom-up parsing is determining how to interpret a particular piece of data by resolving sets of terminals into terminals themselves, and braching up until the whole set has been resolved. Let's take an example that most people are familiar with: mathematical expressions. We'll use this expression: 2 + 3 - 4 * 5 + 6 This is usually interpreted as (2 + 3) - ((4 * 5) + 6). So how would a computer arrive at that conclusion using each of the parsing methods above? Let's take top-down parsing first. If we're using top-down parsing, then we'd look at the lowest-precedence operator that we have. In this case, it's the minus sign. We'd then split the expression around the minus sign, so we now have a new expression: (2 + 3) - (4 * 5 + 6) Now we analyze each of those in the same manner. In this case, 2 + 3 is now a terminal expression, so we can look up what we're supposed to do with the + operator (the answer is that we're supposed to add the number, if you didn't know that already) and do it, replacing the expression with the result. So in this case, we resolve 2 + 3 to 5, for a new expresion that looks like this: (5) - (4 * 5 + 6) Now we'll analyze 4 * 5 + 6. + is a lower precedence operator than *, so we'll split around + first. Now we have this expression: (5) - ((4 * 5) + (6)) 6 doesn't need resolving at this point, 4 * 5 is a terminal expression, so we can resolve that to 20. Now we have: (5) - ((20) + (6)) 20 + 6 is now a terminal expression, so we can resolve it to: (5) - (26) 5 - 26 is also a terminal expression, so we can likewise resolve it to: which is the answer. Now let's see how we'd go about resolving the expression using bottom-up parsing. Here's the expression again: 2 + 3 - 4 * 5 + 6 With bottom-up parsing, we find each occurrence of the highest-precedence operator, and resolve it first. In this case, * is the highest precedence operator. We'd split this out, then, into: 4 * 5 And then resolve it to And then insert it back into the expression, for a new expression of 2 + 3 - 20 + 6 The next highest precedence operator is +. We'll work left-to-right, so the first one we're going to resolve is 2 + 3 This resolves to So our expression now looks like 5 - 20 + 6 We'll do the same thing to the other + character, splitting it out to 20 + 6 and resolving it to for a new expression of 5 - 26 The next highest precedence operator (and the only one left) is the - sign. We take the expression 5 - 26 and resolve it to which is our final answer. Both top-down parsing and bottom-up parsing work for most applications. It's up to you to decide which one you want to use, although in my opinion bottom-up parsing is a lot simpler. You can just add some sort of while loop that checks to see if there are any operands in an expression, and if so, finds the highest one, then the leftmost one out of the appearances of the highest one, then resolves the members immediately surrounding it. This would be a little expensive, but it's very simple to implement.
{"url":"http://me.opengroove.org/2009/02/top-down-and-bottom-up-parsing.html","timestamp":"2024-11-10T21:43:56Z","content_type":"text/html","content_length":"46361","record_id":"<urn:uuid:ec9a0e8f-1df4-49f7-9774-dd9c1a3e8b40>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00725.warc.gz"}
CAMMI Roberto CAMMI Roberto Professore di I fascia Settore scientifico disciplinare Chimica fisica • Curriculum Vitae • Teaching • Appointments • Research Curriculum Vitae: Roberto Cammi address: Dipartimento di Scienze Chimiche, della Vita e della Sostenibilità Ambientale, Parco Area delle Scienze, 17A, 43100 Parma, Italy email: roberto.cammi@unipr.it; phone: +39 0521 905442; fax: +39 0521 905557 Born: December 27, 1954; Busseto (Italy) Nationality: Italian Education: University of Parma: Doctor in Chemisty 1979. Positions: • Researcher University of Parma 1983-1999. • Associate Professor University of Parma 1999-2002. • Full Professor University of Parma 2002 – today. Scientific activity Main fields of research: • Molecular interactions (1984 - 1992 ) • Group theory in Chemistry (1984 - 1992) • Quantum chemical solvation models (1986 - today) • Theory of electric and magnetic response properties of molecules in dense phases (1994 - today) Major scientific achievements: Systematic development of electronic structure theories for the study of properties and processes of molecular systems in condensed phases: • theory for the analytical derivatives of the electronic energy of molecular solutes various methods (Hartree-Fock, MC-SCF, coupled-cluster and DFT) • extension of the Dirac-Frenkel time-dependent variational principle to the non-linear Hamiltonians for the description of molecular solutes • theory of the linear response theory for molecules in solution • theory of the second and higher order electro-optic response functions for molecules in solution and more complex environments for various methods (Hartree-Fock, MC-SCF, coupled-cluster and DFT) • theory of the magnetic properties of solvated molecules for Hartree-Fock and DFT methods • theory for the calculation of vibrational properties of molecules in solutions • generalization of the Onsager theory for the calculations of the local field in molecular solutes • theory of the macroscopic susceptibilities and birefringences from calculations on molecular solutes • theories and computational methods for the study of properties of excited states of solvated chromophores, including analytical gradients, and non-equilibrium solvation for several quantum chemical levels, MC-SCF, TD-DFT and coupled-clusters (SACCI, EOM-CC). • theory of the electronic excitation energy transfer between chromophores in solution. • theory of Extreme-Pressure Polarizable Continuum Model (XP-PCM) for the study the effects of extreme-high-pressure (p>1GPa) on the molecular properties • theory of the effect of the pressure on the energy profile of chemical reactions (in collaboration with Prof. Roald Hoffmann, Nobel prize in Chemistry 1981) • theories for the real-time study of the quantum dynamics of the electrons of molecules in solution. These theories have found a full acceptance in the international scientific community and have been implemented in the most popular computational quantum chemistry software (Gaussian,Gamess, Dalton,…), As a result, these theories have significantly contributed to extend the fields of application of the computational quantum chemistry as it is testified by order of magnitude of the citations: > 20000 according to WoS/Scopus. Scientific achievements have regarded other fields of the quantum and theoretical chemistry, as the formulation of the counterpoise correction for the Kitaura-Morokuma analysis of the intermolecular interaction energy, and the formulation of the method for the “Measurement of the Approximated Symmetries” in the analysis of the geometries of molecular systems. Scientific visits: University of Copenhagen, Jan-Feb. 1997; CERMICS (Institute of Applied Mathematics), April. 1997; University of Tromso (Norway), Nov. 2002; University of Karlshrue, March 2003; Notre-Dame University (USA), July 2005; Institute of Applied Mathematics (IMA), University of Minnesota, Nov.-Dec. 2008; Quantum Chemistry Research Institute (QCRI), Kyoto, Feb., June 2009; Institute of Molecular Science (IMS), Okazaki, Japan, several months during 2009-2016; Cornell University, Ithaca, USA, March 2016. Publications and citation statistics: • 131 articles in peer-reviewed international journals and 14 book chapters, one monograph. Citations: more than: 22000 citations on 131 articles in ISI-indexed journals since 1985, (according to Web of Science (WoS) 04.02.2021), twenty seven articles cited more than 200 times, seven articles cited more than 400 times, and one article with more than 10000 citations. Hirsch index 50 (Google Scholar)/49(SCOPUS)/48(ISI WoS). - Representative publications (past 10-Years) 1. R. Cammi, V. Verdolino, B. Mennucci; J., Tomasi, Towards the elaboration of a QM method to describe molecular solutes under the effect of a very high pressure. Chem. Phys. 344, 135-141 (2008) 2. R. Cammi Quantum cluster theory for the polarizable continuum model. I. The CCSD level with analytical first and second derivatives., J. Chem.Phys. 131, 164104 (2009). 3. R. Cammi, R. Fukuda, M. Ehara, H. Nakatsuji, ``Symmetry-adapted cluster and symmetry-adapted cluster-configuration interaction method in the polarizable continuum model: Theory of the solvent effect on the electronic excitation of molecules in solution'', J. Chem. Phys, 133, 024104 (2010) 4. R. Cammi, “Coupled-Cluster Theories for the Polarizable Continuum Model. II. Analytical Gradients for Excited States of Molecular Solutes by the Equation of Motion Coupled-Cluster Method”. Int. J. Quantum Chem., 110, 3040 (2010) 5 R. Fukuda; M. Ehara; H. Nakatsuji; R. Cammi, " Nonequilibrium solvation for vertical photoemission and photoabsorption processes using the symmetry-adapted cluster-configuration interaction method in the polarizable continuum model", J. Chem. Phys. 134, 104109 (2011) 6. R. Cammi, “Coupled-cluster theory for the polarizable continuum model. III. A response theory for molecules in solution“, Int. J. Quantum Chem., 112, 2547 (2012) 7. R. Cammi, “Recent Advances in the Coupled-Cluster Analytical Derivatives Theory for Molecules in Solution Described Within the Polarizable Continuum Model (PCM) Method. Adv. Quantum Chem., 64, 1-29 (2012) 8. R. Cammi, C. Cappelli, B. Mennucci, J. Tomasi, Calculation and analysis of the harmonic vibrational frequencies in molecules at extreme pressure: Methodology and diborane as a test case J. Chem.Phys. 137, 154112 (2012). 9. M. Pagliai, G. Cardini, R. Cammi, ``Vibrational Frequencies of Fullerenes C60and C70under Pressure Studied with a Quantum Chemical Model Including Spatial Confinement Effects'', J. Phys. Chem. A, 118, 5098 (2014). 10. R. Cammi, “The Virial Theorem for the Polarizable Continuum Model”, J. Chem. Phys. 140, 084112(2014) 11. S. Pipolo, S. Corni, R. Cammi, “The cavity electromagnetic field within the polarizable continuum model of solvation.”, J. Chem. Phys., 140, 164114 (2014) 12. S. Pipolo, S. Corni, R. Cammi, "The Cavity Electromagnetic Field within the Polarizable Continuum Model of Solvation: an application to the Real-Time Time-Dependent Density Functional Comp.Theor. Chem. 1040-1041, 112 (2014) 13. S. Pipolo, S. Corni, R. Cammi, “Equation of Motion for the Solvent Polarization Apparent Charges in the Polarizable Continuum Model: Application to Real-Time TDDFT . “, J. Phs. Chem. A, 119, 5405 14. R. Fukuda, M. Ehara, R. Cammi, ``Modeling Molecular Systems at Extreme Pressure by an Extension of the Polarizable Continuum Model (PCM) Based on the Symmetry-Adapted Cluster-Configuration Interaction (SAC–CI) Method: Confined Electronic Excited States of Furan as a Test Case'', J. Chem. Theo. Comp., 11, 2063 (2015) 15. R. Cammi, " A New Extension of the Polarizable Continuum Model: Toward a Quantum Chemical Description of Chemical Reactions at Extreme High Pressure”, J. Comp. Chem. 36, 2246 (2015). 16. B. Chen, R. Hoffmann, R. Cammi, The Effect of Pressure on Organic Reactions in Fluids-a New Theoretical Perspective, Angew. Chem. Int. Ed., 56, 11126 (2017) 17 R. Cammi, The Quantum Chemical Study of Chemical Reactions at Extreme High Pressure by Means of the Extreme-Pressure Polarizable Continuum Model, Ann. Rep. Comp. Chem., 13, 117 (2017) 18. R. Cammi, J. Tomasi, Quantum Cluster Theory for the Polarizable Continuum Model (PCM)”. In Handbook of Computational Chemistry (2nd Ed.) Springer, (2017) 19, R. Cammi, "Quantum Chemistry at the High Pressures: The eXtreme Pressure Polarizable Continuum Model (XP-PCM, In Frontiers of Quantum Chemistry, pp. 273-288, Springer, (2018) Books (past 10-Years) 1. R. Cammi, Molecular response functions for the Polarizable Continuum Model, Springer , 2013. Organization Duties Editorial:1. Co-Editor of the book Continuum solvation models in Chemical Physics; from theory to applications, Wiley, New York, 2008. 2. Regular referee of several international journals (J. Chem. Phys., J. Phys.Chem.A,B,C, Chem. Phys., Chem. Phys. Lett., J. Comp. Chem., Phys.Chem.Chem.Phys.) University Organization: 1. President of the “Consiglio di Corso di Laurea in Chimica” 2004 -2007. 2. Head of the Ph.D. course in Chemical Science, 2013-2017 3. Member of the committee for the “Abilitazione Scientifica Nazionale” (SSD CHIM/02) during the period 2013-2016. Invited Presentations (past 10-Years) 1. 16th Conference on Current Trends in Computational Chemistry, 2-3 Nov, 20007, Jackson, MS, USA 2. International Conference of Computational Methods in Science and Engineering 2008, Crete 3. Theories of Solvation with Quantum Chemistry workshop, Institute of Mathematics and Applications, December 2008, Minneapolis 4. International Conference of Computational Methods in Science and Engineering 2009, Crete 5. XVI International Workshop on Quantum Systems in Chemistry and Physics, Sept. 6-10, 2011, Kanazawa, Japan 6. 20th Conference on Current Trends in Computational Chemistry, 2-3 November 2011, Jackson, MS, USA; 7. 8th International Symposium on Theoretical Physical Chemistry, 24-3o August 2013, Budapest, Hungary; 8. ACS Fall Meeting, Boston, August 2015; 9. Cornell University, Ithaca, USA, August 2015 and March 2016 Scientific collaborations Prof. Roald Hoffmann and Dr. Bo Chen, Cornell University, Ithaca, USA) Prof. Martin Rahm, Chalmers University, Sweden Prof. M. Ehara, Institute Molecular Science, Okazaki, Japan Prof. R. Fukuda, University of Kyoto, Japan Prof. H. Nakatsuji, Institute Quantum Chemical Research, Kyoto, Japan Prof. S. Corni, University of Padua, Italy Prof. G. Cardini, Prof. M. Pagliai, and Prof. V. Schettino, University of Florence, Italy Prizes and Recognitions: 1) Academic Collaborators of Gaussian Inc., USA, computational chemistry software company, since 1996. 2) Visiting professor at the Institute of Mathematics and Applications, November-December 2008, Minneapolis, USA Responsibility for financial grants: 1) Local PI for several financed PRIN national projects : (years 2007-2005-2003-2001) 2) Local PI for a financed FIRB 2001 national project Anno accademico di erogazione: 2024/2025 • Course year: 1 - Second cycle degree - CHEMISTRY - A.Y.: 2024/2025 • Course year: 2 - First cycle degree (DM 270) - Chemistry - A.Y.: 2023/2024 Anno accademico di erogazione: 2023/2024 • Course year: 1 - Second cycle degree - CHEMISTRY - A.Y.: 2023/2024 • Course year: 2 - First cycle degree (DM 270) - Chemistry - A.Y.: 2022/2023 Anno accademico di erogazione: 2022/2023 • Course year: 1 - Second cycle degree - CHEMISTRY - A.Y.: 2022/2023 • Course year: 2 - First cycle degree (DM 270) - Chemistry - A.Y.: 2021/2022 Anno accademico di erogazione: 2021/2022 • Course year: 1 - Second cycle degree - CHEMISTRY - A.Y.: 2021/2022 • Course year: 2 - First cycle degree (DM 270) - CHEMISTRY - A.Y.: 2020/2021 Anno accademico di erogazione: 2020/2021 • Course year: 1 - Second cycle degree - CHEMISTRY - A.Y.: 2020/2021 • Course year: 2 - First cycle degree (DM 270) - CHEMISTRY - A.Y.: 2019/2020 Anno accademico di erogazione: 2019/2020 • Course year: 1 - Second cycle degree - CHEMISTRY - A.Y.: 2019/2020 • Course year: 2 - First cycle degree (DM 270) - CHEMISTRY - A.Y.: 2018/2019 Anno accademico di erogazione: 2018/2019 • Course year: 1 - Second cycle degree - CHEMISTRY - A.Y.: 2018/2019 • Course year: 2 - First cycle degree (DM 270) - CHEMISTRY - A.Y.: 2017/2018 Anno accademico di erogazione: 2017/2018 • Course year: 1 - Second cycle degree - CHEMISTRY - A.Y.: 2017/2018 • Course year: 2 - First cycle degree (DM 270) - CHEMISTRY - A.Y.: 2016/2017 Anno accademico di erogazione: 2016/2017 • Course year: 1 - Second cycle degree - CHEMISTRY - A.Y.: 2016/2017 • Course year: 3 - First cycle degree (DM 270) - CHEMISTRY - A.Y.: 2014/2015 • Course year: 2 - First cycle degree (DM 270) - CHEMISTRY - A.Y.: 2015/2016 Anno accademico di erogazione: 2015/2016 • Course year: 1 - Second cycle degree - CHEMISTRY - A.Y.: 2015/2016 • Course year: 3 - First cycle degree (DM 270) - Chemistry - A.Y.: 2013/2014 • Course year: 2 - First cycle degree (DM 270) - CHEMISTRY - A.Y.: 2014/2015 Anno accademico di erogazione: 2014/2015 • Course year: 1 - Second cycle degree - Chemistry - A.Y.: 2014/2015 • Course year: 2 - First cycle degree (DM 270) - Chemistry - A.Y.: 2013/2014 Anno accademico di erogazione: 2013/2014 • Course year: 1 - Second cycle degree - Chemistry - A.Y.: 2013/2014 • Year: 2024 Author/s: Cammi Roberto, Chen Bo • Year: 2024 Author/s: Tesi Marina, Cammi Roberto, Granucci Giovanni, Persico Maurizio • Year: 2023 Author/s: Rosa M, Dall'Osto G, Cammi R, Corni S • Year: 2022 Author/s: Shiraogawa T, Dall'Osto G, Cammi R, Ehara M, Corni S • Year: 2022 Author/s: Eeckhoudt J, Bettens T, Geerlings P, Cammi R, Chen B, Alonso M, De Proft F Office location Campus Scienze e Tecnologie - Padiglione 01 - Plesso di Chimica Parco Area delle Scienze, 17/A 43124 PARMA
{"url":"https://personale.unipr.it/en/ugovdocenti/person/20994","timestamp":"2024-11-03T03:25:06Z","content_type":"text/html","content_length":"61342","record_id":"<urn:uuid:189ef70c-b4e0-48f9-b453-384b3632855c>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00693.warc.gz"}
BCNF in DBMS | Scaler Topics BCNF in DBMS BCNF (Boyce-Codd Normal Form) in DBMS, introduced by R.F. Boyce and E.F. Codd in the 1970s, is a normalization technique that eliminates table redundancy and anomalies for enhanced data integrity. While 2NF and 3NF address certain dependencies, BCNF addresses additional constraints that can persist, causing redundancy even in 3NF relations. Despite 3NF's adequacy, BCNF offers a more robust solution by specifically addressing cases where 3NF falls short in eliminating 100% redundancy due to certain functional dependencies. What is BCNF in DBMS? BCNF (Boyce Codd Normal Form) is an advanced version of the third normal form (3NF), and often, it is also known as the 3.5 Normal Form. 3NF doesn't remove 100% redundancy in the cases where for a functional dependency (say, A->B), A is not the candidate key of the table. To deal with such situations, BCNF was introduced. BCNF is based on functional dependencies, and all the candidate keys of the relation are taken into consideration. BCNF is stricter than 3NF and has some additional constraints along with the general definition of 3NF. A table or relation is said to be in BCNF in DBMS if the table or the relation is already in 3NF, and also, for every functional dependency (let's say, X->Y), X is either the super key or the candidate key. In simple terms, for any case (let's say, X->Y), X can't be a non-prime attribute. To find the highest normalization form of any relation R with functional dependencies, we first need to check whether the relation is in BCNF or not. If relation R is found to be in BCNF, it simply means that the relation R is also in 3NF, 2NF, and 1NF as the hierarchy shown in the above image. Similarly, if the relation is found to be in 3NF, it is also in 2NF and 1NF. The 3NF in DBMS has more restrictions and strict constraints than the first two normal forms, but it is less strict than the BCNF. This shows that the restriction always increases as we traverse down the hierarchy. Rules for BCNF in DBMS A table or relation is said to be in BCNF (Boyce Codd Normal Form) if it satisfies the following two conditions that we have already studied in its definition: • It should satisfy all the conditions of the Third Normal Form (3NF). • For any functional dependency (A->B), A should be either the super key or the candidate key. In simple words, it means that A can't be a non-prime attribute if B is given as a prime attribute. Example 1: In this example, we have to find the highest normalization form, and for that, we are given a relation R(A, B, C, D, E) with functional dependencies as follows: { BC->D, AC->BE, B->E } • As we can see, (AC)+={A, C, B, E, D} and also, none of its subsets can determine all the attributes of the relation. There is another point to be noted that A or C can’t be derived from any other attribute of the relation, and therefore, there is only one candidate key, {AC}. • Prime attributes in DBMS are always part of the candidate keys, and for this relation R, prime attributes are: {A, C} while non-prime attributes are: {B, E, D}. • Clearly, there is no multi-valued attribute in the relation R, and hence, it is at least in 1NF. • BC->D is in 2NF because BC is not a proper subset of the candidate key AC. AC->BE is also in 2NF because AC itself is a candidate key, and lastly, B->E is again in 2NF. For 2NF, there must not be any partial dependency present in the table, and hence, relation R here is in 2NF. • The relation R is not in 3NF because BC->D at the start is not in 3NF (BC is not a candidate key, and also, D is not a prime attribute). Hence, the relation R has 2NF as the highest normalization Example 2: In this example, we have to again find the highest normalization form, and for that, we are given a relation R(A, B, C) with functional dependencies as follows: {AB ->C, C ->B, AB ->B} Candidate Key (given): {AB} • Clearly, prime attributes for Relation R are: {A,B} while non-prime attributes are: {C}. • For this particular example, let us start from the order of hierarchy with higher restrictions, and firstly, we will check for BCNF here. • Clearly, {AB->C} and {AB->B} are in BCNF because AB is the candidate key present on the LHS of both dependencies. The second dependency, {C->B}, however, is not in BCNF because C is neither a super key nor a candidate key. • C->B is, however, present in 3NF because B is a prime attribute that satisfies the conditions of 3NF. Hence, relation R has 3NF as the highest normalization form. Example 3: In this example, we have a relation R with three columns: Id, Subject, and Professor. We have to find the highest normalization form, and also, if it is not in BCNF, we have to decompose it to satisfy the conditions of BCNF. Id Subject Professor 101 Java Mayank 101 C++ Kartik 102 Java Sarthak 103 C# Lakshay 104 Java Mayank Interpreting the table: • One student can enroll in more than one subject. □ Example: student with Id 101 has enrolled in Java and C++. • A professor is assigned to the student for a specified subject, and there is always a possibility that there can be multiple professors teaching a particular subject. Finding the solution: • Using Id and Subject together, we can find all unique records and also the other columns of the table. Hence, the Id and Subject together form the primary key. • The table is in 1NF because all the values inside a column are atomic and of the same domain. • We can't uniquely identify a record solely with the help of either the Id or the Subject name. As there is no partial dependency, the table is also in 2NF. • There is no transitive dependency because the non-prime attribute i.e., Professor, is not deriving any other non-prime attribute column in the table. Hence, the table is also in 3NF. • There is a point to be noted that the table is not in BCNF (Boyce-Codd Normal Form). Why is the table not in BCNF? As we know each professor teaches only one subject, but one subject may be taught by multiple professors. This shows that there is a dependency between the subject & the professor, and the subject is always dependent on the professor (professor -> subject). As we know the professor column is a non-prime attribute, while the subject is a prime attribute. This is not allowed in BCNF in DBMS. For BCNF, the deriving attribute (professor here) must be a prime attribute. How to satisfy BCNF? In Example 3, we will decompose the table into two tables: the Student table and the Professor table to satisfy the conditions of BCNF. Student Table P_Id S_Id Professor 1 101 Mayank 2 101 Kartik 3 102 Sarthak 4 103 Lakshay 5 104 Mayank Professor Table Professor Subject Mayank Java Kartik C++ Sarthak Java Lakshay C# Mayank Java Professor is now the primary key and the prime attribute column, deriving the subject column. Hence, it is in BCNF. Example 4: Let's take another general example to understand the concept of decomposition in detail: We have a relation R(A, B, C, D) that is already in 3NF. Candidate Keys: {A, BC} Prime Attributes: {A, B, C} Non-Prime Attributes: {D} Functional dependencies are as follows: {A->BCD, BC->AD, D->B} The above relation is not in BCNF because {D->B} is not in BCNF as {D} is neither a candidate key nor a prime attribute. Hence, we will decompose the relation R into R1{A, D, C} and R2{D, B}. • BCNF (Boyce Codd Normal Form) is an advanced version of the third normal form (3NF), and often, it is also known as the 3.5 Normal Form. • A relation is said to be in BCNF in DBMS if the relation is already in 3NF, and also, for every functional dependency (say, X->Y), X is either the super key or the candidate key. In simple terms, X can't be a non-prime attribute. • If ant relation R is found to be in BCNF, it simply means that the relation R is also in 3NF, 2NF, and 1NF. • The 3NF in DBMS has more restrictions and strict constraints than the first two normal forms, but it is less strict than the BCNF. • This shows that the restriction always increases as we traverse down the hierarchy.
{"url":"https://www.scaler.com/topics/bcnf-in-dbms/","timestamp":"2024-11-09T13:35:52Z","content_type":"text/html","content_length":"108198","record_id":"<urn:uuid:0898679a-1a3e-4417-b87d-b4b028d24fc0>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00777.warc.gz"}
How to Sort by Length in Excel? (2 Easy Ways) How to Sort by Length in Excel? (Formula) When working with Microsoft Excel, you may often come across a situation where you need to sort data based on the length of characters in a cell. For instance, let’s say you want to sort the products available in an inventory based on their character’s length where the product name with the least characters would appear on the top and the one with the most characters would be at the bottom. In this Excel tutorial, I will show you two easy methods to sort by length of text in cells in Excel. Method 1: Sort by Length Using the LEN Function In this method, I will show you how you can sort the data based on the length of text in the cells using the LEN function. LEN is a built-in Excel function that returns the number of characters in the input string. For demonstration purposes, I will use a sample dataset where there are different products listed in column A and their respective quantities in column B. You can see that the products shown in the dataset above are not sorted by length. Let’s see how we can sort them by character length using the method shown below. 1. Insert a column to the right of column A. This will be our helper column. To determine the length of characters in a given cell, use the following formula: To determine the length of characters of the first product (Stapler) the formula would be: 2. Write the formula in Excel as shown in the screenshot. 3. Now copy the formula in the entire helper column to get the length of characters for each product. To copy the formula, click and drag the Fill Handle (plus icon) that appears when moving the cursor to the bottom-right of the selected cell. 4. Select all cells from cell A2 to cell C13 (excluding column headings). To select a range, select a cell and then drag over the other cells with the left mouse button pressed (or select a cell and then use the shortcut Control + A to select the entire dataset). 5. Click on the Data tab in the ribbon 6. In the Sort & Filter section, click on the Sort option 7. The Sort window will appear as shown below. 8. Select Column B from the drop-down menu under Sort by. Column B is where we have the helper column and we will be using this to sort the entire dataset 9. Select ‘Values‘ from the drop-down menu under ‘Sort On‘ 10. Select Smallest to Largest from the drop-down menu under Order. In case you want to sort in descending order, select Largest to Smallest. 11. Click on OK. This will sort the products based on the length of characters in ascending order as shown below. You may delete the additional / helper column if you wish or you can keep it and hide it. Also read: How to Filter by Color in Excel? Method 2: Sort by Length Using the SORTBY Function This method uses the SORTBY function to sort a given set of data. The SORTBY function is part of the new Excel Dynamic Arrays family. Dynamic Arrays are only available in Office 365 and Excel versions 2021 and above. Excel versions 2019 and below will not have the Dynamic Array functions, so if you’re using an older version of Excel, you won’t be able to use this method. We will use the same dataset as we did in Method 1 (shown below), and sort this data based on the length of the text in column A. You can see that the products shown in the dataset above are not sorted by length. Let’s see how we can sort them by character length using the second method shown below. 1. Create an identical dataset of the original dataset without the contents as shown. To sort a given dataset using the SORTBY function, the formula is as follows: SORTBY(array, by_array_1, [sort_order1], [by_array2, sort_order2],...) This formula has two main arguments; array & by_array_1. Arguments inside square brackets are optional. I will explain these arguments below. • array – This argument is the array of values to be sorted. In this case, this would be the products from “Stapler” to “Bulldog clip” i.e. range A2 to A13. • by_array_1 – This argument is the array on which you want to sort the first array. This is the criteria based on which we need to sort the products dataset. Since I want to sort based on the length of the name, I will use the LEN function to first calculate the length of the product name in each cell and then use that as the criteria to sort the products. • sort_order1 – This argument is optional. If not provided, the sorting will be done in ascending order. In case you want to sort in descending order, use the value -1. So the formula would become like so: SORTBY(A2:B13, LEN(A2:A13)) 2. Enter the formula in the cell where you want the result (cell D2 in this example) You will see all the products sorted in the new dataset as shown. Since you have used a formula, in case you change the original data set, the sorted data set would automatically update. This is great in case you are creating reports, summaries, or dashboards and you want to show the sorted data in another worksheet which is based on the data in some other sheet A couple of important things to know when using the SORTBY function (which is a dynamic array function) • Make sure that the cells that will get the result of the SORTBY function are empty. if the cells are not empty, you will not get the result of the sort by function and would get the SPILL error instead. in case this happens, check the cells to make sure they’re empty, and delete the content in any cell that is not empty • You cannot edit or change individual elements of the result that you get from a dynamic array formula. For example, in this case, you cannot delete or edit any specific element of the resulting output data but can delete the entire data set. In case you need to edit the cells in the formula output, you need to first convert the formula into values So in this article, we have seen two different methods to sort a given dataset based on the length of characters. Method 1 sorts the original dataset using a helper column whereas Method 2 generates an additional (sorted) dataset. If you have a large dataset and creating an additional dataset may not seem feasible/manageable, go with Method 1. If your dataset is small and either keeping or deleting the original dataset is not a problem, go with Method 2 as it is short and quick. Also, note that if you are using an older version of Microsoft Excel (version 2019 or below) you will not be able to use the SORTBY function. In that case, you will have to stick with Method 1. Other Excel articles you may also like: Leave a Comment
{"url":"https://spreadsheetplanet.com/sort-by-length-excel-formula/","timestamp":"2024-11-01T21:12:43Z","content_type":"text/html","content_length":"129526","record_id":"<urn:uuid:18e7876d-5d90-4145-be49-50488cf31659>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00851.warc.gz"}
Exponential Quantum Space Advantage for Approximating Maximum Directed Cut in the Streaming Model Nadezhda Voronova (Boston University) Thursday, October 24, 2024 - 11:00am ATL 3100A and Virtual Via Zoom While the search for quantum advantage typically focuses on speedups in execution time, quantum algorithms also offer the potential for advantage in space complexity. Previous work has shown such advantages for data stream problems, in which elements arrive and must be processed sequentially without random access, but these have been restricted to specially-constructed problems [Le Gall, SPAA `06] or polynomial advantage [Kallaugher, FOCS `21]. We show an exponential quantum space advantage for the maximum directed cut problem. This is the first known exponential quantum space advantage for any natural streaming problem. This also constitutes the first unconditional exponential quantum resource advantage for approximating a discrete optimization problem in any setting. Our quantum streaming algorithm 0.4844-approximates the value of the largest directed cut in a graph stream with n vertices using polylog(n) space, while previous work by Chou, Golovnev, and Velusamy [FOCS '20] implies that obtaining an approximation ratio better than 4/9≈0.4444 requires Ω(√n) space for any classical streaming algorithm. Our result is based on a recent space classical streaming approach by Saxena, Singer, Sudan, and Velusamy [FOCS '23], with an additional improvement in the approximation ratio due to recent work by Singer [APPROX '23]. Additionally, our work introduces a simple quantum sketch that encompasses several results [GKK+08, Kal21, KPV24] on asymptotic quantum advantages in space complexity in the streaming model, allowing them to be derived from entirely classical algorithms using our quantum sketch as a black box. The talk is based on joint works with John Kallaugher and Ojas Parekh: - "Exponential Quantum Space Advantage for Approximating Maximum Directed Cut in the Streaming Model", https://arxiv.org/abs/2311.14123 - "How to Design a Quantum Streaming Algorithm Without Knowing Anything About Quantum Computing", in submission *We strongly encourage attendees to use their full name (and if possible, their UMD credentials) to join the zoom session.*
{"url":"https://quics.umd.edu/events/exponential-quantum-space-advantage-approximating-maximum-directed-cut-streaming-model","timestamp":"2024-11-06T23:53:23Z","content_type":"text/html","content_length":"20287","record_id":"<urn:uuid:9d06b8de-e2d2-4700-8d73-631e2e3ba4a5>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00308.warc.gz"}
The mechanics of submerged granular flows The thesis tackles the mechanics of submerged granular flows driven by gravity, focusing on the rheological formulations and on the numerical solutions of the equations that govern this type of flow. In particular, a two-phase approach is assumed. The liquid phase, usually water, is described with a Newtonian rheology. The rheology of the granular phase depends on the type of contacts among the particles. Two opposite conditions are identified: if the contacts among particles are instantaneous the regime is named collisional, while, when the contacts become long lasting and involved more particles at the same time the regime is called frictional. In the thesis a proper model for the rheology of the granular phase, able to account for both the regimes, is presented. This model is based on the fundamental evidence that the granular phase is characterized by the coexistence of the collisional regime, which dominates near the free surface, and of the frictional regime, which becomes relevant approaching the loose static bed Armanini et al. [5]. The kinetic theories of dense gases Jenkins and Savage [48] are adopted to describe the collisional regime, while for the frictional regime a new rheological formulation, dependent on the Savage number, which comes from the analysis of the force involved, is given. In addition, the model, named heuristic model [11], introduces a specific equation of state also for the frictional regime. The model is based only on a single parameter, which presumably depends on the properties of the contact forces of the material. A numerical code able to integrate the equations of the mass, momentum and energy of the two-phase, in uniform flow conditions, was developed by Armanini et al. [6] and the results are compared with the experimental data. In the applications to hyperconcentrated channel flows the effect of the side walls and of the internal stresses of the liquid phase are neglected in the momentum balance equations, therefore the drag force is balanced by the weight of the liquid phase. The heuristic model is able to predict in a satisfactory way the distributions across the flow depth of the velocity, concentration, granular temperature and stresses and in particular, it allows to discriminate between the collisional and the frictional components of the shear and of the normal stresses. Another important issue addressed in the thesis concerns the balances of the energy of the granular phase. The model is able to describe the mechanisms of production, diffusion and dissipation of energy, relevant to both the mean component of the flow and the fluctuating component (i.e., the collisional component). In uniform flow conditions, near the static loose bed, the model predicts that the flux of the diffused fluctuating energy exceeds an order of magnitude the locally dissipated flux of fluctuating energy. This suggests that the motion of the grains, even at concentrations close to that of packing, is always accompanied by a certain degree of granular temperature as already observed by Armanini et al. [10]. Furthermore, the description of the mechanisms of exchange among the terms of the total energy balance and of the kinetic energy balance, and between the two energy balances is given. In the thesis, the role of the interaction between the liquid and the solid phase in the kinetic energy balance is analysed [59]. A specific experimental investigation to understand the difference between the drag averaged over time and the drag calculated with respect the average velocities and concentration is carried out. This difference between the two drags represents the contribution to the drag due to the correlations between the fluctuating components of the concentration and of the velocities. By integrating the heuristic model across the flow depth, it is possible, in principle, to derive a set of shallow water equations that are able to describe the behaviour of debris flows and wet avalanches. The mechanics of submerged granular flows / Nucci, Elena. - (2015), pp. 1-135. The mechanics of submerged granular flows The thesis tackles the mechanics of submerged granular flows driven by gravity, focusing on the rheological formulations and on the numerical solutions of the equations that govern this type of flow. In particular, a two-phase approach is assumed. The liquid phase, usually water, is described with a Newtonian rheology. The rheology of the granular phase depends on the type of contacts among the particles. Two opposite conditions are identified: if the contacts among particles are instantaneous the regime is named collisional, while, when the contacts become long lasting and involved more particles at the same time the regime is called frictional. In the thesis a proper model for the rheology of the granular phase, able to account for both the regimes, is presented. This model is based on the fundamental evidence that the granular phase is characterized by the coexistence of the collisional regime, which dominates near the free surface, and of the frictional regime, which becomes relevant approaching the loose static bed Armanini et al. [5]. The kinetic theories of dense gases Jenkins and Savage [48] are adopted to describe the collisional regime, while for the frictional regime a new rheological formulation, dependent on the Savage number, which comes from the analysis of the force involved, is given. In addition, the model, named heuristic model [11], introduces a specific equation of state also for the frictional regime. The model is based only on a single parameter, which presumably depends on the properties of the contact forces of the material. A numerical code able to integrate the equations of the mass, momentum and energy of the two-phase, in uniform flow conditions, was developed by Armanini et al. [6] and the results are compared with the experimental data. In the applications to hyperconcentrated channel flows the effect of the side walls and of the internal stresses of the liquid phase are neglected in the momentum balance equations, therefore the drag force is balanced by the weight of the liquid phase. The heuristic model is able to predict in a satisfactory way the distributions across the flow depth of the velocity, concentration, granular temperature and stresses and in particular, it allows to discriminate between the collisional and the frictional components of the shear and of the normal stresses. Another important issue addressed in the thesis concerns the balances of the energy of the granular phase. The model is able to describe the mechanisms of production, diffusion and dissipation of energy, relevant to both the mean component of the flow and the fluctuating component (i.e., the collisional component). In uniform flow conditions, near the static loose bed, the model predicts that the flux of the diffused fluctuating energy exceeds an order of magnitude the locally dissipated flux of fluctuating energy. This suggests that the motion of the grains, even at concentrations close to that of packing, is always accompanied by a certain degree of granular temperature as already observed by Armanini et al. [10]. Furthermore, the description of the mechanisms of exchange among the terms of the total energy balance and of the kinetic energy balance, and between the two energy balances is given. In the thesis, the role of the interaction between the liquid and the solid phase in the kinetic energy balance is analysed [59]. A specific experimental investigation to understand the difference between the drag averaged over time and the drag calculated with respect the average velocities and concentration is carried out. This difference between the two drags represents the contribution to the drag due to the correlations between the fluctuating components of the concentration and of the velocities. By integrating the heuristic model across the flow depth, it is possible, in principle, to derive a set of shallow water equations that are able to describe the behaviour of debris flows and wet avalanches. File in questo prodotto: File Dimensione Formato accesso aperto Tipologia: Tesi di dottorato (Doctoral Thesis) Licenza: Tutti i diritti riservati (All rights reserved) 4.24 MB Adobe PDF Visualizza/Apri Dimensione 4.24 MB Formato Adobe PDF I documenti in IRIS sono protetti da copyright e tutti i diritti sono riservati, salvo diversa indicazione
{"url":"https://iris.unitn.it/handle/11572/368646","timestamp":"2024-11-09T10:55:24Z","content_type":"text/html","content_length":"66252","record_id":"<urn:uuid:cbf87a2c-1911-4539-ad9c-e32df26fccf6>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00004.warc.gz"}